Extending the Raytracer
We’ll conclude the first part of the book with a quick discussion of several interesting topics that we haven’t yet covered: placing the camera anywhere in the scene, performance optimizations, primitives other than spheres, modeling objects using constructive solid geometry, supporting transparent surfaces, and supersampling. We won’t implement all of these changes, but I encourage you to give them a try! The preceding chapters, plus the descriptions offered below, give you solid foundations to explore and implement them by yourself.
Arbitrary Camera Positioning
At the very beginning of the discussion about raytracing we made three important assumptions: that the camera was fixed at $(0, 0, 0)$, that it was pointing toward $\vec{Z_+}$, and that its “up” direction was $\vec{Y_+}$. In this section, we’ll lift these restrictions so we can put the camera anywhere in the scene and point it in any direction.
Let’s start with the camera position. You may have noticed that $O$ is used exactly once in all the pseudocode: as the origin of the rays coming from the camera in the top-level method. If we want to change the position of the camera, the only thing we need to do is to use a different value for $O$, and we’re done.
Does the change in position affect the direction of the rays? Not at all. The direction of the rays is the vector that goes from the camera to the projection plane. When we move the camera, the projection plane moves together with it, so their relative positions don’t change. The way we have written CanvasToViewport is consistent with this idea.
Let’s turn our attention to the camera orientation. Suppose you have a rotation matrix $R$ that represents the desired orientation of the camera. The position of the camera doesn’t change if you just rotate the camera around, but the direction it’s looking toward does; it undergoes the same rotation as the whole camera. So if you have the ray direction $\vec{D}$ and the rotation matrix $R$, the rotated direction is just $R \cdot \vec{D}$.
In summary, the only function that needs to change is the main function we wrote back in Listing 2-2. Listing 5-1 shows the updated function:
for x in [-Cw/2, Cw/2] {
for y in [-Ch/2, Ch/2] {
❶D = camera.rotation * CanvasToViewport(x, y)
❷color = TraceRay(camera.position, D, 1, inf)
canvas.PutPixel(x, y, color)
}
}

Listing 5-1: The main loop, updated to support an arbitrary camera position and orientation
We apply the camera’s rotation matrix ❶, which describes its orientation in space, to the direction of the ray we’re about to trace. Then we use the camera position as the starting point of the ray ❷.
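For concreteness, here’s a minimal sketch of what camera.rotation might look like in Python. The helper names rotation_y and multiply_mv are our own, and storing the matrix as row-major nested lists is just one possible representation:

import math

def rotation_y(degrees):
    # A 3x3 rotation matrix around the Y axis, as nested lists (row-major).
    c, s = math.cos(math.radians(degrees)), math.sin(math.radians(degrees))
    return [[ c, 0, s],
            [ 0, 1, 0],
            [-s, 0, c]]

def multiply_mv(m, v):
    # Multiply a 3x3 matrix by a 3-component vector.
    return [sum(m[i][k] * v[k] for k in range(3)) for i in range(3)]

# Rotate a viewport direction 30 degrees around the Y axis before tracing.
camera_rotation = rotation_y(30)
D = multiply_mv(camera_rotation, [0.25, 0.1, 1.0])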
Figure 5-1 shows what our scene looks like when rendered from a different position and with a different camera orientation.
Source code and live demo >>
Performance Optimizations
The preceding chapters focused on the clearest possible way to explain and implement the different features of a raytracer. As a result, it is fully functional but not particularly fast. Here are some ideas you can explore by yourself to make the raytracer faster. Just for fun, measure before-and-after times for each of these. You’ll be surprised by the results!
Parallelization
The most obvious way to make a raytracer faster is to trace more than one ray at a time. Since each ray leaving the camera is independent of every other ray and the scene data is read-only, you can trace one ray per CPU core without many penalties or much synchronization complexity. In fact, raytracers belong to a class of algorithms called embarrassingly parallelizable, precisely because their very nature makes them extremely easy to parallelize.
Spawning a thread per ray is probably not a good idea, though; the overhead of managing potentially millions of threads would probably negate the speed-up you’d obtain. A more sensible idea would be to create a set of “tasks,” each of them responsible for raytracing a section of the canvas (a rectangular area, down to a single pixel), and dispatch them to worker threads running on the physical cores as they become available.
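As a sketch of this task-based approach in Python, here’s one way to split the canvas into horizontal strips and dispatch them to a process pool (which sidesteps the interpreter’s global lock). TraceRay, CanvasToViewport, O, Cw, and Ch are assumed to be the functions and globals from the previous chapters:

from concurrent.futures import ProcessPoolExecutor

def render_strip(ys):
    # Each task renders a horizontal strip independently; the scene is
    # read-only, so the workers need no locks or shared mutable state.
    return [[TraceRay(O, CanvasToViewport(x, y), 1, float('inf'))
             for x in range(-Cw // 2, Cw // 2)]
            for y in ys]

def render_parallel(workers=8, strip_height=16):
    # Dispatch strip-sized tasks to worker processes as they become available.
    strips = [range(y, min(y + strip_height, Ch // 2))
              for y in range(-Ch // 2, Ch // 2, strip_height)]
    with ProcessPoolExecutor(max_workers=workers) as pool:
        return [row for strip in pool.map(render_strip, strips) for row in strip]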
Caching Immutable Values
Caching is a way to avoid repeating the same computation over and over again. Whenever there’s an expensive computation and you expect to use the result of this computation repeatedly, it might be a good idea to store (cache) this result and just reuse it next time it’s needed, especially if this value doesn’t change often.
Consider the values computed in IntersectRaySphere, where a raytracer typically spends most of its time:
a = dot(D, D)
b = 2 * dot(OC, D)
c = dot(OC, OC) - r * r
Different values are immutable during different periods of time.
Once you load the scene and you know the size of the spheres, you can compute r * r. That value won’t change unless the size of the spheres changes.
Some values are immutable for an entire frame, at the very least. One such value is dot(OC, OC) and it only needs to change between frames if the camera or a sphere moves. (Note that shadows and reflections trace rays that don’t start at the camera, so some care is needed to make sure the cached value isn’t used in that case.)
Some values don’t change for an entire ray. For example, you can compute dot(D, D) in ClosestIntersection and pass it to IntersectRaySphere.
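Here’s a sketch of what both caches might look like in Python, with illustrative names: r2 is stored per sphere at load time, and dot(D, D) is computed once per ray by the caller and passed in as dd:

import math

def dot(a, b):
    return a[0] * b[0] + a[1] * b[1] + a[2] * b[2]

def intersect_ray_sphere(O, D, sphere, dd):
    # dd is dot(D, D), computed once per ray by the caller instead of
    # once per sphere; sphere['r2'] is cached at scene-load time.
    OC = [O[i] - sphere['center'][i] for i in range(3)]
    a = dd
    b = 2 * dot(OC, D)
    c = dot(OC, OC) - sphere['r2']
    discriminant = b * b - 4 * a * c
    if discriminant < 0:
        return math.inf, math.inf
    sq = math.sqrt(discriminant)
    return (-b + sq) / (2 * a), (-b - sq) / (2 * a)

# At scene-load time: cache r * r once per sphere.
spheres = [{'center': [0.0, -1.0, 3.0], 'radius': 1.0}]
for s in spheres:
    s['r2'] = s['radius'] * s['radius']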
There are many other computations that can be reused. Use your imagination! Not every cached value will make things faster overall, however, because sometimes the bookkeeping overhead might be higher than the time saved. Always use benchmarks to evaluate whether an optimization is actually helping.
Shadow Optimizations
When a point of a surface is in shadow because there is another object in the way, it’s quite likely that the point right next to it will also be in the shadow of the same object (this is called shadow coherence). You can see an example of this in Figure 5-2.
When searching for objects between the point and the light, to determine whether the point is in shadow, we’d normally check for intersections with every other object. However, if we know that the point immediately next to it is in the shadow of a specific object, we can check for intersections with that object first. If we find one, we’re done and we don’t need to check every other object! If we don’t find intersections with that object, we just revert back to checking every object.
In the same vein, when looking for ray-object intersections to determine whether a point is in shadow, you don’t really need the closest intersection; it’s enough to know that there’s at least one intersection, because that will be enough to stop the light from reaching the point! So you can write a specialized version of ClosestIntersection that returns as soon as it finds any intersection. You also don’t need to compute and return closest_t; instead, you can return just a Boolean value.
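Here’s a sketch combining both ideas, reusing the dot and intersect_ray_sphere helpers from the caching sketch above. Remembering the last blocker in a module-level variable is just one simple way to exploit shadow coherence:

import math

_last_blocker = None  # the object that shadowed the previous point, if any

def has_any_intersection(P, L, t_min, t_max, spheres):
    # Specialized shadow test: returns a Boolean as soon as *any*
    # intersection is found, trying the previous blocker first.
    global _last_blocker
    ordered = spheres
    if _last_blocker is not None:
        ordered = [_last_blocker] + [s for s in spheres if s is not _last_blocker]
    dd = dot(L, L)
    for sphere in ordered:
        t1, t2 = intersect_ray_sphere(P, L, sphere, dd)
        if t_min < t1 < t_max or t_min < t2 < t_max:
            _last_blocker = sphere  # remember the blocker for the next point
            return True
    _last_blocker = None
    return False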
Spatial Structures
Computing the intersection of a ray with every sphere in the scene is somewhat wasteful. There are many data structures that let you discard entire groups of objects at once without having to compute the intersections individually.
Suppose you have several spheres close to each other. You can compute the center and radius of the smallest sphere that contains all these spheres. If a ray doesn’t intersect this bounding sphere, you can be sure that it doesn’t intersect any of the spheres it contains, at the cost of a single intersection test. Of course, if it does, you still need to check whether it intersects any of the spheres it contains.
You could go further and have several levels of bounding spheres (that is, groups of groups of spheres), forming a hierarchy that needs to be traversed all the way to the bottom only when there’s a good chance that one of the actual spheres will be intersected by a ray.
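As a sketch of the single-level test, here’s one possible construction in Python; note it builds a deliberately loose bound (centered on the average of the members’ centers) rather than the true smallest enclosing sphere, which is harder to compute. It reuses the dot and intersect_ray_sphere helpers from earlier:

import math

def bounding_sphere(members):
    # A loose but valid bound: average the centers, then grow the radius
    # until it reaches the farthest member's surface.
    n = len(members)
    center = [sum(s['center'][i] for s in members) / n for i in range(3)]
    radius = max(math.dist(center, s['center']) + s['radius'] for s in members)
    return {'center': center, 'radius': radius, 'r2': radius * radius}

def candidates(O, D, bound, members):
    # One cheap test against the bound can reject the whole group at once.
    t1, t2 = intersect_ray_sphere(O, D, bound, dot(D, D))
    if math.isinf(t1) and math.isinf(t2):
        return []        # the ray misses the bound: skip every member
    return members       # otherwise fall back to testing each member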
While the exact details of this family of techniques are outside the scope of this book, you can find more information under the name bounding volume hierarchy.
Subsampling
Here’s an easy way to make your raytracer $N$ times faster: compute $N$ times fewer pixels!
For each pixel in the canvas, we trace one ray through the viewport to sample the color of the light coming from that direction. If we had fewer rays than pixels, we’d be subsampling the scene. But how can we do this and still render the scene correctly?
Suppose you trace the rays for the pixels $(10, 100)$ and $(12, 100)$, and they happen to hit the same object. You can reasonably assume that the ray for the pixel $(11, 100)$ will also hit the same object, so you can skip the initial search for intersections with all the objects in the scene and jump straight to computing the color at that point.
If you skip every other pixel in both the horizontal and vertical directions, you could be doing up to 75 percent fewer primary ray-scene intersection computations—that’s a 4x speedup!
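The crudest variant of this idea simply replicates each traced color across a small block of pixels, without the hit-coherence check described above. A sketch, assuming the TraceRay, CanvasToViewport, O, Cw, Ch, and canvas names from the previous chapters:

import math

def render_subsampled(step=2):
    # Trace one ray per step-by-step block of pixels and reuse its color
    # for the skipped ones. With step=2, ~75% fewer primary rays are traced.
    for x in range(-Cw // 2, Cw // 2, step):
        for y in range(-Ch // 2, Ch // 2, step):
            color = TraceRay(O, CanvasToViewport(x, y), 1, math.inf)
            for dx in range(step):
                for dy in range(step):
                    canvas.PutPixel(x + dx, y + dy, color)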
Of course, you may well miss a very thin object; this is an “impure” optimization, in the sense that, unlike the ones discussed before, it results in an image that closely resembles, but is not guaranteed to be identical to, the image without the optimization. In a way, it’s “cheating” by cutting corners. The trick is to know what corners can be cut while maintaining satisfactory results; in many areas of computer graphics, what matters is the subjective quality of the results.
Supporting Other Primitives
In the previous chapters, we’ve used spheres as primitives because they’re mathematically easy to manipulate; that is, the equations to find the intersections between rays and spheres are relatively simple. But once you have a basic raytracer that can render spheres, adding support to render other primitives doesn’t require much additional work.
Note that TraceRay needs to be able to compute just two things for a ray and any given object: the value of $t$ for the closest intersection between them, and the normal at that intersection. Everything else in the raytracer is object-independent!
Triangles are a good primitive to support. A triangle is the simplest possible polygon, so you can build any other polygon out of triangles. They’re mathematically easy to manipulate, so they’re a good way to represent approximations of more complex surfaces.
To add triangle support to the raytracer, you only need to change TraceRay. First, you compute the intersection between the ray (given by its origin and direction) and the plane that contains the triangle (given by its normal and its distance from the origin).
Since planes are infinitely big, rays will almost always intersect any given plane (except if they’re exactly parallel). So the second step is to determine whether the ray-plane intersection is actually inside the triangle. There are many ways to do this, including using barycentric coordinates or using cross-products to check whether the point is “on the inside” with respect to each of the three sides of the triangle.
Once you have determined that the point is inside the triangle, the normal at the intersection is just the normal of the plane. Have TraceRay return the appropriate values and no further changes will be required!
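Here’s a sketch of the plane-then-inside-test approach using cross-products, with hand-rolled vector helpers; A, B, and C are the triangle’s vertices, and the function names are our own:

import math

def dot(a, b):
    return a[0] * b[0] + a[1] * b[1] + a[2] * b[2]

def sub(a, b):
    return [a[i] - b[i] for i in range(3)]

def cross(a, b):
    return [a[1] * b[2] - a[2] * b[1],
            a[2] * b[0] - a[0] * b[2],
            a[0] * b[1] - a[1] * b[0]]

def intersect_ray_triangle(O, D, A, B, C):
    N = cross(sub(B, A), sub(C, A))      # plane normal from two edges
    denom = dot(N, D)
    if abs(denom) < 1e-9:
        return math.inf, N               # ray parallel to the plane
    t = dot(N, sub(A, O)) / denom        # ray-plane intersection
    if t <= 0:
        return math.inf, N               # plane is behind the ray origin
    P = [O[i] + t * D[i] for i in range(3)]
    # Inside test: P must be on the inner side of each of the three edges.
    for V0, V1 in ((A, B), (B, C), (C, A)):
        if dot(cross(sub(V1, V0), sub(P, V0)), N) < 0:
            return math.inf, N
    return t, N                          # the normal is just the plane's normal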
Constructive Solid Geometry
Suppose we want to render objects more complicated than spheres or curved objects that are difficult to model accurately using a set of triangles. Two good examples are lenses (like the ones in magnifying glasses) and the Death Star (that’s no moon . . . ).
We can easily describe these objects in plain language. A magnifying glass looks like two slices of a sphere glued together; the Death Star looks like a sphere with a smaller sphere taken out of it.
We can express this more formally as the result of applying set operations (such as union, intersection, or difference) to other objects. Continuing with our examples above, a lens can be described as the intersection of two spheres and the Death Star as a big sphere from which we subtract a smaller sphere (see Figure 5-3).
You might be thinking that computing Boolean operations of solid objects is a very tricky geometrical problem. And you’d be completely correct! Fortunately, it turns out that constructive solid geometry lets us render the results of set operations between objects without ever having to explicitly compute these results!
How can we do this in our raytracer? For every object, you can compute the points where the ray enters and exits the object; in the case of a sphere, for example, the ray enters at $\min(t_1, t_2)$ and exits at $\max(t_1, t_2)$. Suppose you want to compute the intersection of two spheres; the ray is inside the intersection when it’s inside both spheres, and it’s outside when it’s outside either sphere. In the case of the subtraction, the ray is inside when it’s inside the first object but not the second one. For the union of two objects, the ray is inside when it’s inside either of the objects.
More generally, if you want to compute the intersection between a ray and the object $A \bigodot B$ (where $\bigodot$ is any set operation), you first compute the intersection between the ray and $A$ and $B$ separately, which gives you the ranges of $t$ that are “inside” for each object, $R_A$ and $R_B$. Then you compute $R_A \bigodot R_B$, which is the “inside” range for $A \bigodot B$. Once you have this, the closest intersection between the ray and $A \bigodot B$ is the smallest value of $t$ that is both in the “inside” range of the object and between $t_{min}$ and $t_{max}$. Figure 5-4 shows the inside range for the union, intersection, and subtraction of two spheres.
The normal at the intersection is either the normal of the object that produced the intersection or its opposite, depending on whether you’re looking at the “outside” or “inside” of the original object.
Of course, $A$ and $B$ don’t have to be primitives; they can be the result of set operations themselves! If you implement this cleanly, you don’t even need to know what $A$ and $B$ are, as long as you can get intersections and normals out of them. This way you can take three spheres and compute, for example, $(A \cup B) \cap C$.
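Here’s a sketch of the interval bookkeeping for the simplest case, where each operand yields at most one “inside” interval along the ray. Note a subtraction can actually split an interval in two; this sketch keeps only the nearer piece, which is the one that matters for the closest intersection:

def combine(ra, rb, op):
    # ra, rb: (t_enter, t_exit) "inside" intervals, or None for a miss.
    if op == 'union':
        if ra is None: return rb
        if rb is None: return ra
        if ra[1] < rb[0] or rb[1] < ra[0]:
            return min(ra, rb)           # disjoint: keep the nearer interval
        return (min(ra[0], rb[0]), max(ra[1], rb[1]))
    if op == 'intersection':
        if ra is None or rb is None or ra[1] < rb[0] or rb[1] < ra[0]:
            return None
        return (max(ra[0], rb[0]), min(ra[1], rb[1]))
    if op == 'difference':               # A minus B
        if ra is None:
            return None
        if rb is None or rb[1] <= ra[0] or rb[0] >= ra[1]:
            return ra                    # B doesn't overlap A at all
        if rb[0] > ra[0]:
            return (ra[0], rb[0])        # keep the near piece of A
        if rb[1] < ra[1]:
            return (rb[1], ra[1])        # B covers A's front: keep the far piece
        return None                      # B swallows A entirely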
Transparency
So far we have rendered every object as if it were fully opaque, but this doesn’t need to be the case. We can render partially transparent objects, like a fishbowl.
Implementing this is quite similar to implementing reflection. When a ray hits a partially transparent surface, you compute the local and reflected color as before, but you also compute an additional color—the color of the light coming through the object, obtained with another call to TraceRay. Then you blend this color with the local and reflected colors, depending on how transparent the object is, much in the same way we did when computing object reflections.
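A sketch of the blending step, with illustrative names and weights; through_color would come from the extra TraceRay call continuing past the hit point, and reflective and transparency are per-surface parameters in $[0, 1]$:

def blend_transparency(local_color, reflected_color, through_color,
                       reflective, transparency):
    # First blend local and reflected colors as in the reflection chapter,
    # then mix in the light coming through the surface.
    opaque = [l * (1 - reflective) + r * reflective
              for l, r in zip(local_color, reflected_color)]
    return [o * (1 - transparency) + t * transparency
            for o, t in zip(opaque, through_color)]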
Refraction
In real life, when a ray of light goes through a transparent object, it changes direction (this is why when you submerge a straw in a glass of water, it looks “broken”). More precisely, a ray of light changes direction when it’s going through a material (such as air) and enters a different material (such as water).
The way the direction changes depends on a property of each material, called its refraction index, according to the following equation, called Snell’s Law:

$$\frac{\sin(\alpha_1)}{\sin(\alpha_2)} = \frac{n_2}{n_1}$$
Here, $\alpha_1$ and $\alpha_2$ are the angles between the ray and the normal before and after crossing the surface, and $n_1$ and $n_2$ are the refraction indices of the materials outside and inside the object.
For example, $n_{air}$ is approximately $1.0$, and $n_{water}$ is approximately $1.33$. So for a ray of light entering water at a $60^\circ$ angle, we have

$$\sin(\alpha_2) = \frac{n_{air}}{n_{water}} \cdot \sin(\alpha_1) = \frac{\sin(60^\circ)}{1.33} \approx 0.65$$

which gives $\alpha_2 \approx 40.6^\circ$.
This example is shown in Figure 5-5.
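Here’s the same computation as a tiny, self-contained sketch; it also flags the case where Snell’s Law has no solution (total internal reflection):

import math

def refraction_angle(alpha1_degrees, n1, n2):
    # Solve Snell's law  sin(a1) / sin(a2) = n2 / n1  for a2.
    s = math.sin(math.radians(alpha1_degrees)) * n1 / n2
    if s > 1:
        return None  # total internal reflection: no refracted ray exists
    return math.degrees(math.asin(s))

print(refraction_angle(60, 1.0, 1.33))  # air into water: ~40.6 degrees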
At the implementation level, each ray would have to carry an additional piece of information: the refraction index of the material it is currently going through. When the ray intersects a partially transparent object, you compute the new direction of the ray from that point, based on the refraction indices of the current material and the new material, and then proceed as before.
Stop for a moment to consider this: if you implement constructive solid geometry and transparency, you can model a magnifying glass (the intersection of two spheres) that will behave like a physically correct magnifying glass!
Supersampling
Supersampling is more or less the opposite of subsampling. In this case you’re looking for accuracy instead of performance. Suppose the rays corresponding to two adjacent pixels hit different objects. You would paint each pixel with the corresponding colors.
But remember the analogy that got us started: each ray is supposed to determine the “representative” color for each square of the “grid” we’re looking through. By using a single ray per pixel, we’re arbitrarily deciding that the color of the ray of light that goes through the middle of the square is representative of the whole square, but that may not be true.
The way to solve this is just to trace more rays per pixel—4, 9, 16, as many as you want—and then average them to get the color for the pixel.
Of course, this makes your raytracer 4, 9, or 16 times slower, for the exact same reasons why subsampling made it $N$ times faster. Fortunately, there’s a middle ground. You can assume object properties change smoothly over their surface, so shooting four rays per pixel that hit the same object at very slightly different positions may not improve the scene much. So you can start with one ray per pixel and compare adjacent rays: if they hit different objects or if the color differs by more than a certain threshold, you apply pixel subdivision to both.
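As a sketch of the uniform variant, here’s one way to average an n-by-n grid of rays per pixel. It assumes CanvasToViewport accepts fractional canvas coordinates (the linear mapping from the earlier chapters does), and reuses the TraceRay and O names from before:

import math

def supersample_pixel(x, y, n=2):
    # Average an n-by-n grid of rays spread evenly across the pixel square.
    total = [0.0, 0.0, 0.0]
    for i in range(n):
        for j in range(n):
            D = CanvasToViewport(x + (i + 0.5) / n - 0.5,
                                 y + (j + 0.5) / n - 0.5)
            color = TraceRay(O, D, 1, math.inf)
            total = [t + c for t, c in zip(total, color)]
    return [t / (n * n) for t in total]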
Summary
In this chapter, we have briefly introduced several ideas you can explore by yourself. These modify the basic raytracer we’ve been developing in new and interesting ways—making it more efficient, able to represent more complex objects, or modeling rays of light in a way that better approximates our physical world.
This first part of the book should be proof that raytracers are beautiful pieces of software that can produce stunningly beautiful images using nothing but straightforward, intuitive algorithms and simple math.
Sadly, this purity comes at a cost: performance. While there are numerous ways to optimize and parallelize raytracers, as discussed in this chapter, they’re still too computationally expensive for real-time performance; and while hardware gets faster every year, some applications demand pictures 100 times faster with no loss in quality. Of all these applications, games are the most demanding: we expect picture-perfect images drawn at least 60 times per second. Raytracers just don’t cut it.
How have videogames been doing it since the early 90s, then?
The answer lies in a completely different family of algorithms, which we’ll explore in the second part of this book.