Basic Raytracing

In this chapter, we’ll introduce raytracing, the first major algorithm we’ll cover. We start by motivating the algorithm and laying out some basic pseudocode. Then we look at how to represent rays of light and objects in a scene. Finally, we derive a way to compute which rays of light make up the visible image of each of the objects in our scene and see how we can represent them on the canvas.

Rendering a Swiss Landscape

Suppose you’re visiting some exotic place and come across a stunning landscape—so stunning, you just need to make a painting capturing its beauty. Figure 2-1 shows one such landscape.

Figure 2-1: A breathtaking Swiss landscape

You have a canvas and a paint brush, but you absolutely lack artistic talent. Is all hope lost?

Not necessarily. You may not have artistic talent, but you are methodical. So you do the most obvious thing: you get an insect net. You cut a rectangular piece, frame it, and fix it to a stick. Now you can look at the landscape through a netted window. Next, you choose the best point of view to appreciate this landscape and plant another stick to mark the exact position where your eye should be.

You haven’t started the painting yet, but now you have a fixed point of view and a fixed frame through which you can see the landscape. Moreover, this fixed frame is divided into small squares by the insect net. Now comes the methodical part. You draw a grid on the canvas, giving it the same number of squares as the insect net. Then you look at the top-left square of the net. What’s the predominant color you can see through it? Sky blue. So you paint the top-left square of the canvas sky blue. You do this for every square, and soon enough the canvas contains a pretty good painting of the landscape, as seen through the frame. The resulting painting is shown in Figure 2-2.

Figure 2-2: A crude approximation of the landscape

When you think about it, a computer is essentially a very methodical machine absolutely lacking artistic talent. We can describe the process of creating our painting as follows:

For each little square on the canvas
    Paint it the right color

Easy! However, that formulation is too abstract to implement directly on a computer. We can go into a bit more detail:

Place the eye and the frame as desired
For each square on the canvas
    Determine which square on the grid corresponds to this square on the canvas
    Determine the color seen through that grid square
    Paint the square with that color

This is still too abstract, but it starts to look like an algorithm—and perhaps surprisingly, that’s a high-level overview of the full raytracing algorithm! Yes, it’s that simple.

Basic Assumptions

Part of the charm of computer graphics is drawing things on the screen. To achieve this as soon as possible, we’ll make some simplifying assumptions. Of course, these assumptions impose some restrictions on what we can do, but we’ll lift the restrictions in later chapters.

First of all, we’ll assume a fixed viewing position. The viewing position, the place where you’d put your eye in the Swiss landscape analogy, is commonly called the camera position; let’s call it O. We’ll assume that the camera occupies a single point in space, that it is located at the origin of the coordinate system, and that it never moves from there, so O=(0,0,0) for now.

Second, we’ll assume a fixed camera orientation. The camera orientation determines where the camera is pointing. We’ll assume it looks in the direction of the positive Z axis (which we’ll shorten to Z+), with the positive Y axis (Y+) up and the positive X axis (X+) to the right (Figure 2-3).

Figure 2-3: The position and orientation of the camera

The camera position and orientation are now fixed. Still missing from the analogy is the “frame” through which we look at the scene. We’ll assume this frame has dimensions Vw and Vh, and is frontal to the camera orientation—that is, perpendicular to Z+. We’ll also assume it’s at a distance d, its sides are parallel to the X and Y axes, and it’s centered with respect to Z. That’s a mouthful, but it’s actually quite simple. Take a look at Figure 2-4.

The rectangle that will act as our window to the world is called the viewport. Essentially, we’ll draw on the canvas whatever we see through the viewport. Note that the size of the viewport and the distance to the camera determine the angle visible from the camera, called the field of view, or FOV for short. Humans have an almost 180° horizontal FOV (although much of it is blurry peripheral vision with no sense of depth). For simplicity, we’ll set Vw = Vh = d = 1; this results in a FOV of approximately 53°, which produces reasonable-looking images that are not overly distorted.

Figure 2-4: The position and orientation of the viewport
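As a quick sanity check of the 53° figure: half the viewport width and the distance to the camera form a right triangle, so FOV = 2 · atan((Vw/2) / d). A small Python sketch (the function name is mine, not from the text):

```python
import math

def field_of_view_degrees(viewport_width, distance):
    # Half the viewport and the distance d form a right triangle;
    # doubling that angle gives the full horizontal FOV.
    return math.degrees(2 * math.atan((viewport_width / 2) / distance))

print(round(field_of_view_degrees(1, 1), 2))  # Vw = d = 1 gives 53.13
```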

Let’s go back to the “algorithm” presented earlier, use the appropriate technical terms, and number the steps in Listing 2-1:

❶Place the camera and the viewport as desired
For each pixel on the canvas
    ❷Determine which square on the viewport corresponds to this pixel
    ❸Determine the color seen through that square
    ❹Paint the pixel with that color
Listing 2-1: A high-level description of our raytracing algorithm

We have just done step ❶ (or, more precisely, gotten it out of the way for now). Step ❹ is trivial: we simply use canvas.PutPixel(x, y, color). Let’s do step ❷ quickly, and then focus our attention on increasingly sophisticated ways of doing step ❸ over the next few chapters.

Canvas to Viewport

Step ❷ of our algorithm in Listing 2-1 asks us to Determine which square on the viewport corresponds to this pixel. We know the canvas coordinates of the pixel—let’s call them Cx and Cy. Notice how we conveniently placed the viewport so that its axes match the orientation of those of the canvas, and its center matches the center of the canvas. Because the viewport is measured in world units and the canvas is measured in pixels, going from canvas coordinates to space coordinates is just a change of scale!

Vx = Cx * Vw/Cw

Vy = Cy * Vh/Ch

There’s an extra detail. Although the viewport is 2D, it’s embedded in 3D space. We defined it to be at a distance d from the camera; every point in this plane (called the projection plane) has, by definition, z=d. Therefore,

Vz = d

And we’re done with this step. For each pixel (Cx,Cy) on the canvas, we can determine its corresponding point on the viewport (Vx,Vy,Vz).

Tracing Rays

The next step is to figure out what color the light coming through (Vx,Vy,Vz) is, as seen from the camera’s point of view (Ox,Oy,Oz).

In the real world, light comes from a light source (the Sun, a light bulb, and so on), bounces off several objects, and then finally reaches our eyes. We could try simulating the path of every photon leaving our simulated light sources, but it would be extremely time-consuming. Not only would we have to simulate a mind-boggling number of photons (a single 100 W light bulb emits 10^20 photons per second!), but only a tiny minority of them would happen to reach (Ox,Oy,Oz) after coming through the viewport. This technique is called photon tracing or photon mapping; unfortunately, it’s outside the scope of this book.

Instead, we’ll consider the rays of light “in reverse”; we’ll start with a ray originating from the camera, going through a point in the viewport, and tracing its path until it hits some object in the scene. This object is what the camera “sees” through that point of the viewport. So, as a first approximation, we’ll just take the color of that object as “the color of the light coming through that point,” as shown in Figure 2-5.

Figure 2-5: A tiny square in the viewport, representing a single pixel in the canvas, painted with the color of the object the camera sees through it

Now we just need some equations.

The Ray Equation

The most convenient way to represent a ray for our purposes is with a parametric equation. We know the ray passes through O, and we know its direction (from O to V), so we can express any point P in the ray as

P = O + t(V - O)

where t is any real number. By plugging every value of t from -∞ to +∞ into this equation, we get every point P along the ray.

Let’s call (V - O), the direction of the ray, D. The equation becomes

P = O + tD

An intuitive way to understand this equation is that we start the ray at the origin (O) and “advance” along the direction of the ray (D) by some amount (t); it’s easy to see that this includes all the points along the ray. You can read more details about these vector operations in the Linear Algebra appendix. Figure 2-6 shows our equation in action.

Figure 2-6: Some points of the ray P = O + tD for different values of t

Figure 2-6 shows the points along the ray that correspond to t = 0.5 and t = 1.0. Every value of t yields a different point along the ray.
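To make the parametric form concrete, here is a minimal Python sketch (the helper name is my own) that evaluates P = O + tD componentwise:

```python
def ray_point(origin, direction, t):
    # P = O + t*D, computed per component.
    return tuple(o + t * d for o, d in zip(origin, direction))

O = (0, 0, 0)
D = (1, 2, 3)
print(ray_point(O, D, 0))  # t = 0 yields the origin: (0, 0, 0)
print(ray_point(O, D, 2))  # (2, 4, 6)
```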

The Sphere Equation

Now we need to have some sort of object in the scene, so that our rays can hit something. We could choose any arbitrary geometric primitive as the building block of our scenes; for raytracing, we’ll use spheres because they’re easy to manipulate with equations.

What is a sphere? A sphere is the set of points that lie at a fixed distance from a fixed point. That distance is called the radius of the sphere, and the point is called the center of the sphere. Figure 2-7 shows a sphere, defined by its center C and its radius r.

Figure 2-7: A sphere, defined by its center and its radius

According to our definition above, if C is the center and r is the radius of a sphere, the points P on the surface of that sphere must satisfy the following equation:

distance(P, C) = r

Let’s play a bit with this equation. If you find any of this math unfamiliar, read through the Linear Algebra appendix.

The distance between P and C is the length of the vector from P to C:

|P - C| = r

The length of a vector (denoted |V|) is the square root of its dot product with itself (denoted ⟨V, V⟩):

√(⟨P - C, P - C⟩) = r

To get rid of the square root, we can square both sides:

⟨P - C, P - C⟩ = r²

All these formulations of the sphere equation are equivalent, but this last one is the most convenient to manipulate in the following steps.
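As a quick check of this last formulation, a point lies on the sphere exactly when ⟨P - C, P - C⟩ equals r². A Python sketch (the names are mine), using the classic 3-4-5 triangle:

```python
def dot(a, b):
    # Dot product of two vectors given as tuples.
    return sum(x * y for x, y in zip(a, b))

def on_sphere(P, C, r, eps=1e-9):
    # <P - C, P - C> == r^2 exactly when P lies on the sphere's surface.
    PC = tuple(p - c for p, c in zip(P, C))
    return abs(dot(PC, PC) - r * r) < eps

print(on_sphere((3, 4, 0), (0, 0, 0), 5))  # True: 9 + 16 = 25 = 5^2
print(on_sphere((3, 4, 1), (0, 0, 0), 5))  # False
```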

Ray Meets Sphere

We now have two equations: one describing the points on the sphere, and one describing the points on the ray:

⟨P - C, P - C⟩ = r²

P = O + tD

Do the ray and the sphere intersect? If so, where?

Suppose the ray and the sphere do intersect at a point P. This point is both along the ray and on the surface of the sphere, so it must satisfy both equations at the same time. Note that the only variable in these equations is the parameter t, since O, D, C, and r are given and P is the point we’re trying to find.

Since P represents the same point in both equations, we can substitute P in the first one with the expression for P in the second. This gives us

⟨O + tD - C, O + tD - C⟩ = r²

If we can find values of t that satisfy this equation, we can put them in the ray equation to find the points where the ray intersects the sphere.

In its current form, the equation is somewhat unwieldy. Let’s do some algebraic manipulation to see what we can get out of it.

First, let CO = O - C. Then we can write the equation as

⟨CO + tD, CO + tD⟩ = r²

Then we expand the dot product into its components, using its distributive properties (again, feel free to consult the Linear Algebra appendix):

⟨CO + tD, CO⟩ + ⟨CO + tD, tD⟩ = r²

⟨CO, CO⟩ + ⟨tD, CO⟩ + ⟨CO, tD⟩ + ⟨tD, tD⟩ = r²

Rearranging the terms a bit, we get

⟨tD, tD⟩ + 2⟨CO, tD⟩ + ⟨CO, CO⟩ = r²

Moving the parameter t out of the dot products and moving r² to the other side of the equation gives us

t²⟨D, D⟩ + t(2⟨CO, D⟩) + ⟨CO, CO⟩ - r² = 0

Remember that the dot product of two vectors is a real number, so every term between angle brackets is a real number. If we give them names, we’ll get something much more familiar:

a = ⟨D, D⟩

b = 2⟨CO, D⟩

c = ⟨CO, CO⟩ - r²

at² + bt + c = 0

This is nothing more and nothing less than a good old quadratic equation! Its solutions are the values of the parameter t where the ray intersects the sphere:

{t1, t2} = (-b ± √(b² - 4ac)) / 2a

Fortunately, this makes geometrical sense. As you may remember, a quadratic equation can have no solutions, one double solution, or two different solutions, depending on the value of the discriminant b² - 4ac. This corresponds exactly to the cases where the ray doesn’t intersect the sphere, the ray is tangent to the sphere, and the ray enters and exits the sphere, respectively (Figure 2-8).

Figure 2-8: The geometrical interpretation of the solutions to the quadratic equation: no solutions, one solution, or two solutions.

Once we have found the value of t, we can plug it back into the ray equation, and we finally get the intersection point P corresponding to that value of t.

Rendering our First Spheres

To recap, for each pixel on the canvas, we can compute the corresponding point on the viewport. Given the position of the camera, we can express the equation of a ray that starts at the camera and goes through that point of the viewport. Given a sphere, we can compute where the ray intersects that sphere.

So all we need to do is to compute the intersections of the ray and each sphere, keep the intersection closest to the camera, and paint the pixel on the canvas with the appropriate color. We’re almost ready to render our first spheres!

The parameter t deserves some extra attention, though. Let’s go back to the ray equation:

P = O + t(V - O)

Since the origin and direction of the ray are fixed, varying t across all the real numbers will yield every point P in this ray. Note that for t=0 we get P=O, and for t=1 we get P=V. Negative values of t yield points in the opposite direction—that is, behind the camera. So, we can divide the parameter space into three parts, as in Table 2-1. Figure 2-9 shows a diagram of the parameter space.

Table 2-1: Subdivisions of the Parameter Space
t < 0        Behind the camera
0 ≤ t ≤ 1    Between the camera and the projection plane/viewport
t > 1        In front of the projection plane/viewport
Figure 2-9: A few points in parameter space

Note that nothing in the intersection equation says that the sphere has to be in front of the camera; the equation will happily produce solutions for intersections behind the camera. Obviously, this isn’t what we want, so we should ignore any solutions with t < 0. To avoid further mathematical unpleasantness, we’ll restrict the solutions to t > 1; that is, we’ll render whatever is beyond the projection plane.

On the other hand, we don’t want to put an upper bound on the value of t; we want to see all objects in front of the camera, no matter how far away they are. However, because in later stages we will want to cut rays short, we’ll introduce this formalism now and give t an upper value of +∞ (for languages that can’t represent “infinity” directly, a really really big number does the trick).
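In Python, for instance, the “really big number” workaround isn’t needed: math.inf compares correctly with any finite t. A small illustration (not from the text):

```python
import math

t_max = math.inf        # an unbounded upper limit for t
print(1e308 < t_max)    # True: any finite t is inside the range
print(min(3.0, t_max))  # 3.0: a real intersection always beats "no hit"
```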

We can now formalize everything we’ve done so far with some pseudocode. As a general rule, we’ll assume the code has access to whatever data it needs, so we won’t bother explicitly passing around parameters such as the canvas and will focus on the really necessary ones.

The main method now looks like Listing 2-2.

O = (0, 0, 0)
for x = -Cw/2 to Cw/2 {
    for y = -Ch/2 to Ch/2 {
        D = CanvasToViewport(x, y)
        color = TraceRay(O, D, 1, inf)
        canvas.PutPixel(x, y, color)
    }
}
Listing 2-2: The main method

The CanvasToViewport function is very simple, and is shown in Listing 2-3. The constant d represents the distance between the camera and the projection plane.

CanvasToViewport(x, y) {
    return (x*Vw/Cw, y*Vh/Ch, d)
}
Listing 2-3: The CanvasToViewport function
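For reference, Listing 2-3 translates almost verbatim to Python; the concrete canvas size below is my own assumption, while Vw = Vh = d = 1 follows the setup chosen earlier:

```python
Cw, Ch = 600, 600          # canvas size in pixels (assumed)
Vw, Vh, d = 1.0, 1.0, 1.0  # viewport size and distance, as chosen earlier

def canvas_to_viewport(x, y):
    # Scale canvas coordinates to viewport coordinates; z is always d.
    return (x * Vw / Cw, y * Vh / Ch, d)

print(canvas_to_viewport(0, 0))      # canvas center -> (0.0, 0.0, 1.0)
print(canvas_to_viewport(300, 300))  # top-right corner -> (0.5, 0.5, 1.0)
```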

The TraceRay method (Listing 2-4) computes the intersection of the ray with every sphere and returns the color of the sphere at the nearest intersection inside the requested range of t.

TraceRay(O, D, t_min, t_max) {
    closest_t = inf
    closest_sphere = NULL
    for sphere in scene.spheres {
        t1, t2 = IntersectRaySphere(O, D, sphere)
        if t1 in [t_min, t_max] and t1 < closest_t {
            closest_t = t1
            closest_sphere = sphere
        }
        if t2 in [t_min, t_max] and t2 < closest_t {
            closest_t = t2
            closest_sphere = sphere
        }
    }
    if closest_sphere == NULL {
       ❶return BACKGROUND_COLOR
    }
    return closest_sphere.color
}
Listing 2-4: The TraceRay method

In Listing 2-4, O represents the origin of the ray; although we’re tracing rays from the camera, which is placed at the origin, this won’t necessarily be the case in later stages, so it has to be a parameter. The same applies to t_min and t_max.

Note that when the ray doesn’t intersect any sphere, we still need to return some color ❶—I’ve chosen white in most of these examples.

Finally, IntersectRaySphere (Listing 2-5) just solves the quadratic equation.

IntersectRaySphere(O, D, sphere) {
    r = sphere.radius
    CO = O - sphere.center

    a = dot(D, D)
    b = 2*dot(CO, D)
    c = dot(CO, CO) - r*r

    discriminant = b*b - 4*a*c
    if discriminant < 0 {
        return inf, inf
    }

    t1 = (-b + sqrt(discriminant)) / (2*a)
    t2 = (-b - sqrt(discriminant)) / (2*a)
    return t1, t2
}
Listing 2-5: The IntersectRaySphere method
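Listing 2-5 is just as direct in a real language. A Python sketch (the Sphere tuple is my own minimal stand-in for the scene objects):

```python
import math
from collections import namedtuple

Sphere = namedtuple("Sphere", ["center", "radius"])

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def intersect_ray_sphere(O, D, sphere):
    # Solve a*t^2 + b*t + c = 0 for the ray P = O + t*D and the sphere.
    CO = tuple(o - c for o, c in zip(O, sphere.center))
    a = dot(D, D)
    b = 2 * dot(CO, D)
    c = dot(CO, CO) - sphere.radius ** 2
    discriminant = b * b - 4 * a * c
    if discriminant < 0:
        return math.inf, math.inf  # the ray misses the sphere
    t1 = (-b + math.sqrt(discriminant)) / (2 * a)
    t2 = (-b - math.sqrt(discriminant)) / (2 * a)
    return t1, t2

# A ray along Z+ through a unit sphere centered at (0, 0, 3):
# it enters at t = 2 and exits at t = 4.
print(intersect_ray_sphere((0, 0, 0), (0, 0, 1), Sphere((0, 0, 3), 1)))  # (4.0, 2.0)
```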

To put all of this into practice, let’s define a very simple scene, shown in Figure 2-10.

Figure 2-10: A very simple scene, viewed from above (left) and from the right (right)

In pseudoscene language, it’s something like this:

viewport_size = 1 x 1
projection_plane_d = 1
sphere {
    center = (0, -1, 3)
    radius = 1
    color = (255, 0, 0)  # Red
}
sphere {
    center = (2, 0, 4)
    radius = 1
    color = (0, 0, 255)  # Blue
}
sphere {
    center = (-2, 0, 4)
    radius = 1
    color = (0, 255, 0)  # Green
}
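Putting Listings 2-4 and 2-5 together with this scene, a self-contained Python sketch can already answer “what color does this ray see?” (the structure is my own rendering of the pseudocode; white background as in the text):

```python
import math
from collections import namedtuple

Sphere = namedtuple("Sphere", ["center", "radius", "color"])

BACKGROUND_COLOR = (255, 255, 255)  # white, as chosen in the text

# The scene defined above, as RGB tuples.
scene = [
    Sphere((0, -1, 3), 1, (255, 0, 0)),  # red
    Sphere((2, 0, 4), 1, (0, 0, 255)),   # blue
    Sphere((-2, 0, 4), 1, (0, 255, 0)),  # green
]

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def intersect_ray_sphere(O, D, sphere):
    CO = tuple(o - c for o, c in zip(O, sphere.center))
    a = dot(D, D)
    b = 2 * dot(CO, D)
    c = dot(CO, CO) - sphere.radius ** 2
    discriminant = b * b - 4 * a * c
    if discriminant < 0:
        return math.inf, math.inf
    t1 = (-b + math.sqrt(discriminant)) / (2 * a)
    t2 = (-b - math.sqrt(discriminant)) / (2 * a)
    return t1, t2

def trace_ray(O, D, t_min, t_max):
    # Keep the intersection closest to the camera within [t_min, t_max].
    closest_t = math.inf
    closest_sphere = None
    for sphere in scene:
        t1, t2 = intersect_ray_sphere(O, D, sphere)
        for t in (t1, t2):
            if t_min <= t <= t_max and t < closest_t:
                closest_t = t
                closest_sphere = sphere
    if closest_sphere is None:
        return BACKGROUND_COLOR
    return closest_sphere.color

O = (0, 0, 0)
print(trace_ray(O, (0, -0.3, 1), 1, math.inf))  # hits the red sphere
print(trace_ray(O, (0, 0.5, 1), 1, math.inf))   # misses everything: white
```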

When we run our algorithm on this scene, we’re finally rewarded with an incredibly awesome raytraced scene (Figure 2-11).

Figure 2-11: An incredibly awesome raytraced scene

Source code and live demo >>

I know, it’s a bit of a letdown, isn’t it? Where are the reflections and the shadows and the polished look? Don’t worry, we’ll get there. This is a good first step. The spheres look like circles, which is better than if they looked like cats. The reason they don’t look quite like spheres is that we’re missing a key component of how human beings determine the shape of an object: the way it interacts with light. We’ll cover that in the next chapter.

Summary

In this chapter, we’ve laid down the foundations of our raytracer. We’ve chosen a fixed setup (the position and orientation of the camera and the viewport, as well as the size of the viewport); we’ve chosen representations for spheres and rays; we’ve explored the math necessary to figure out how spheres and rays interact; and we’ve put all this together to draw the spheres on the canvas using solid colors.

The next chapters build on this by modeling the way the rays of light interact with objects in the scene in increasing detail.