To get started: Clone this repository and its submodule using
git clone --recursive http://github.com/alecjacobson/computer-graphics-ray-casting.git
Do not fork: Clicking "Fork" will create a public repository. If you'd like to use GitHub while you work on your assignment, then mirror this repo as a new private repository: https://stackoverflow.com/questions/10065526/github-how-to-make-a-fork-of-public-repository-private
We will cover basic shading, shadows and reflection in the next assignment.
This assignment will introduce a few primitives for 3D geometry: spheres, planes and triangles. We'll get a first glimpse that more complex shapes can be created as a collection of these primitives.
The core interaction that we need to start visualizing these shapes is ray-object intersection. A ray emanating from a point $\mathbf{e} \in \mathbb{R}^3$ (e.g., a camera's "eye") in a direction $\mathbf{d} \in \mathbb{R}^3$ can be parameterized by a single number $t \in \mathbb{R}$. Changing the value of $t$ picks a different point along the ray. Remember, a ray is a 1D object so we only need this one "knob" or parameter to move along it. The parametric function for a ray written in vector notation is:

$$\mathbf{r}(t) = \mathbf{e} + t \mathbf{d}.$$
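As a minimal illustration, here is a sketch of evaluating a point along a ray with Eigen; the `Ray` struct and its member names are assumptions made for this example, not necessarily the types used in the assignment code.

```cpp
#include <Eigen/Core>

// Illustrative ray representation (names are assumptions, not the assignment's API).
struct Ray
{
  Eigen::Vector3d origin;    // e: the "eye" point the ray emanates from
  Eigen::Vector3d direction; // d: the direction the ray travels in
};

// Evaluate the parametric function r(t) = e + t*d.
Eigen::Vector3d point_on_ray(const Ray & ray, const double t)
{
  return ray.origin + t * ray.direction;
}
```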
For each object in our scene we need to find out:
- is there some value $t$ such that the point $\mathbf{r}(t)$ lies on the surface of the object?
- if so, what is that value of $t$ (and thus what is the position of intersection $\mathbf{r}(t)\in \mathbb{R}^3$), and what is the surface's unit normal vector at the point of intersection?
For each object, we should carefully consider how many ray-object intersections are possible for a given ray (always one? sometimes two? ever zero?) and in the presence of multiple answers choose the closest one.
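For intuition, a closest-hit loop might look like the following sketch, which reuses the `Ray` struct from above; the `Object::intersect` interface shown here is an assumption for illustration, not necessarily the signature this assignment asks you to implement.

```cpp
#include <Eigen/Core>
#include <limits>
#include <memory>
#include <vector>

// Hypothetical object interface: returns true and fills t/n on a hit with t >= min_t.
struct Object
{
  virtual ~Object() {}
  virtual bool intersect(
    const Ray & ray, const double min_t, double & t, Eigen::Vector3d & n) const = 0;
};

// Keep only the closest hit among all objects the ray touches.
bool closest_hit(
  const Ray & ray,
  const double min_t,
  const std::vector<std::shared_ptr<Object>> & objects,
  int & hit_id,
  double & t,
  Eigen::Vector3d & n)
{
  t = std::numeric_limits<double>::infinity();
  hit_id = -1;
  for(int i = 0; i < (int)objects.size(); i++)
  {
    double ti;
    Eigen::Vector3d ni;
    if(objects[i]->intersect(ray, min_t, ti, ni) && ti < t)
    {
      // Closer than anything seen so far: remember this hit.
      t = ti;
      n = ni;
      hit_id = i;
    }
  }
  return hit_id >= 0;
}
```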
Question: Why keep the closest hit?
Hint: 🤦🏻
In this assignment, we'll use simple representations for primitives. For example, for a plane we'll store some point on the plane and the plane's normal vector (which is the same anywhere on the plane).
Question: How many numbers are needed to uniquely determine a plane?
Hint: A point position (3) + normal vector (3) is too many. Consider how many numbers are needed to specify a line in 2D.
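As a hedged sketch of how this point-plus-normal representation gets used, substituting $\mathbf{r}(t)=\mathbf{e}+t\mathbf{d}$ into the plane equation $(\mathbf{x}-\mathbf{p})\cdot\mathbf{n}=0$ gives $t = \frac{(\mathbf{p}-\mathbf{e})\cdot\mathbf{n}}{\mathbf{d}\cdot\mathbf{n}}$. The function below (an illustrative signature, not the assignment's) implements that using the `Ray` struct sketched earlier.

```cpp
#include <Eigen/Core>
#include <cmath>

// Sketch: intersect a ray with a plane stored as a point on the plane and its normal.
// Solving (e + t*d - p) · n = 0 for t gives t = ((p - e) · n) / (d · n).
bool ray_plane_sketch(
  const Ray & ray,
  const Eigen::Vector3d & point,
  const Eigen::Vector3d & normal,
  const double min_t,
  double & t)
{
  const double denom = ray.direction.dot(normal);
  if(std::abs(denom) < 1e-12) { return false; } // ray (nearly) parallel to the plane
  t = (point - ray.origin).dot(normal) / denom;
  return t >= min_t; // only count hits in front of the allowed range
}
```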
In this assignment we will pretend that our "camera" or "eye" looking into the scene is shrunk to a single 3D point $\mathbf{e} \in \mathbb{R}^3$ in space. The image rectangle (e.g., 640 pixels by 360 pixels) is placed so the image center is directly in front of the "eye" point at a certain "focal length" $d$. The image of pixels is scaled to match the given width and height defined by the camera. The camera is equipped with a direction $\mathbf{u}$ that moves left-right across the image, a direction $\mathbf{v}$ that moves up-down, and a direction $-\mathbf{w}$ that points from the "eye" to the image. Keep in mind that the width and height are measured in the units of the scene, not in the number of pixels. For example, we can fit a 1024×1024 image into a camera with equal width and height.
Note: The textbook puts the pixel coordinate origin in the bottom-left, and uses $i$ as a column index and $j$ as a row index. In this assignment, the origin is in the top-left, $i$ is a row index, and $j$ is a column index.
Question: Given that $\mathbf{u}$ points right and $\mathbf{v}$ points up, why does $-\mathbf{w}$ point into the scene?
Hint: ☝️
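To make these conventions concrete, here is a hedged sketch of constructing the viewing ray through the center of pixel $(i, j)$, reusing the `Ray` struct from above; the `Camera` member names follow the textbook notation and are assumptions about this codebase, not its actual fields.

```cpp
#include <Eigen/Core>

// Illustrative camera (textbook notation; field names are assumptions).
struct Camera
{
  Eigen::Vector3d e;       // eye position
  Eigen::Vector3d u, v, w; // right, up, and backward unit directions
  double d;                // focal length
  double width, height;    // image-plane dimensions in scene units
};

// Ray from the eye through the center of pixel (i = row, j = column),
// for an image with nrows x ncols pixels and the origin at the top-left.
Ray viewing_ray_sketch(
  const Camera & camera, const int i, const int j, const int nrows, const int ncols)
{
  // Pixel center in image-plane coordinates, measured from the image center.
  const double uu = camera.width  * ((j + 0.5) / ncols - 0.5); // left-right offset
  const double vv = camera.height * (0.5 - (i + 0.5) / nrows); // top row lands on the +v side
  Ray ray;
  ray.origin = camera.e;
  ray.direction = -camera.d * camera.w + uu * camera.u + vv * camera.v;
  return ray;
}
```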
Triangles are the simplest 2D polygons. On the computer we can represent a triangle efficiently by storing its 3 corner positions. To store a triangle floating in 3D, each corner position is stored as a 3D position.

A simple, yet effective and popular way to approximate a complex shape is to store a list of (many and small) triangles covering the shape's surface. If we place no assumptions on these triangles (i.e., they don't have to be connected together or non-intersecting), then we call this collection a "triangle soup".

When considering the intersection of a ray and a triangle soup, we simply need to find the triangle in the soup that the ray intersects first.
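One common way to intersect a ray with a single triangle (not necessarily the method you must use here) is to solve for the parameter $t$ and the barycentric coordinates in the style of Möller–Trumbore. The sketch below reuses the `Ray` struct from above and uses an illustrative signature.

```cpp
#include <Eigen/Core>
#include <Eigen/Geometry> // for .cross()
#include <cmath>

// Sketch: solve e + t*d = A + beta*(B-A) + gamma*(C-A) for t, beta, gamma.
// A hit requires beta >= 0, gamma >= 0, beta + gamma <= 1, and t >= min_t.
bool ray_triangle_sketch(
  const Ray & ray,
  const Eigen::Vector3d & A,
  const Eigen::Vector3d & B,
  const Eigen::Vector3d & C,
  const double min_t,
  double & t)
{
  const Eigen::Vector3d ab = B - A;
  const Eigen::Vector3d ac = C - A;
  const Eigen::Vector3d p = ray.direction.cross(ac);
  const double det = ab.dot(p);
  if(std::abs(det) < 1e-12) { return false; } // ray parallel to the triangle's plane
  const Eigen::Vector3d s = ray.origin - A;
  const double beta = s.dot(p) / det;
  if(beta < 0 || beta > 1) { return false; }
  const Eigen::Vector3d q = s.cross(ab);
  const double gamma = ray.direction.dot(q) / det;
  if(gamma < 0 || beta + gamma > 1) { return false; }
  t = ac.dot(q) / det;
  return t >= min_t;
}
```

Intersecting a whole soup then amounts to running this test on every triangle and keeping the smallest valid $t$, just like the closest-hit loop sketched earlier.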
Our scene does not yet have light so the only accurate rendering would be a pitch black image. Since this is rather boring, we'll create false or pseudo renderings of the information we computed during ray-casting.
The simplest image we'll make is just assigning each object to a color. If a pixel's closest hit comes from the $i$-th object then we paint it with the $i$-th rgb color in our `color_map`.
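For instance, picking the false color could look like this minimal sketch (the names are placeholders, not the assignment's code):

```cpp
#include <Eigen/Core>
#include <vector>

// Paint a pixel with the color of the object it hit.
Eigen::Vector3d id_color(
  const int hit_id, const std::vector<Eigen::Vector3d> & color_map)
{
  // Wrap around in case there are more objects than colors.
  return color_map[hit_id % (int)color_map.size()];
}
```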
The object ID image gives us very little sense of 3D. The simplest image to encode the 3D geometry of a scene is a depth image. Since the range of depth is generally $[d, \infty)$, where $d$ is the distance from the camera's eye to the image plane, we must map this to the range $[0,1]$ to create a grayscale image. In this assignment we use a simple non-linear mapping based on reasonable default values.
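The exact mapping is left to the assignment code; purely as an illustration of one possible non-linear mapping from an unbounded depth to $[0,1]$:

```cpp
#include <algorithm>

// One possible (illustrative, not the assignment's) non-linear depth-to-gray mapping:
// gray is 1 at the near distance and decays toward 0 as depth grows.
double depth_to_gray(const double depth, const double near)
{
  return std::max(0.0, std::min(1.0, near / depth));
}
```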
The depth image technically captures all geometric information visible by casting rays from the camera, but interesting surfaces will appear dull because small details will have nearly the same depth. During ray-object intersection we compute or return the surface normal vector at the point of intersection. Since the normal vector is unit length, each coordinate value is between $-1$ and $1$. We can map the normal vector to an rgb value in a linear way (e.g., $\mathbf{rgb} = \tfrac{1}{2}(\mathbf{n} + \mathbf{1})$).
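In Eigen, that shift-and-scale can be written as a one-liner (a sketch; the function name is illustrative):

```cpp
#include <Eigen/Core>

// Map a unit normal with coordinates in [-1,1] to an rgb triple in [0,1]^3.
Eigen::Vector3d normal_to_rgb(const Eigen::Vector3d & n)
{
  return 0.5 * (n + Eigen::Vector3d(1, 1, 1));
}
```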
Although all of these images appear cartoonish and garish, together they reveal that ray-casting can probe important pixel-wise information in the 3D scene.
In this assignment you will implement core routines for casting rays into a 3D scene and collecting "hit" information where they intersect 3D objects.
This assignment uses the Eigen library for numerical linear algebra. This library is used in both professional and academic numerical computing. We will use its `Eigen::Vector3d` as a double-precision 3D vector class to store data for 3D points and 3D vectors. You can add (`+`) vectors and points together, multiply them against scalars (`*`), and compute vector dot products (`a.dot(b)`). In addition, `#include <Eigen/Geometry>` has useful geometric functions such as the 3D vector cross product (`a.cross(b)`).
See computer-graphics-raster-images.
- Construct a viewing ray given a camera and subscripts to a pixel.
- Find the first (visible) hit given a ray and a collection of scene objects.
- Intersect a sphere with a ray (see the sketch after this list).
- Intersect a plane with a ray.
- Intersect a triangle with a ray.
- Intersect a triangle soup with a ray.
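As a worked example for the sphere case referenced above, substituting $\mathbf{r}(t)=\mathbf{e}+t\mathbf{d}$ into the implicit sphere equation $\|\mathbf{r}(t)-\mathbf{c}\|^2 = R^2$ gives a quadratic in $t$. The sketch below, reusing the `Ray` and `point_on_ray` helpers from earlier, keeps the smallest root that is at least `min_t`; the signature is illustrative, not the assignment's required one.

```cpp
#include <Eigen/Core>
#include <cmath>

// Sketch: solve ||e + t*d - c||^2 = R^2, i.e.,
// (d·d) t^2 + 2 d·(e-c) t + ||e-c||^2 - R^2 = 0.
bool ray_sphere_sketch(
  const Ray & ray,
  const Eigen::Vector3d & center,
  const double radius,
  const double min_t,
  double & t,
  Eigen::Vector3d & n)
{
  const Eigen::Vector3d ec = ray.origin - center;
  const double a = ray.direction.dot(ray.direction);
  const double b = 2.0 * ray.direction.dot(ec);
  const double c = ec.dot(ec) - radius * radius;
  const double disc = b * b - 4.0 * a * c;
  if(disc < 0) { return false; }              // ray misses the sphere entirely
  const double sqrt_disc = std::sqrt(disc);
  double root = (-b - sqrt_disc) / (2.0 * a); // try the nearer root first
  if(root < min_t) { root = (-b + sqrt_disc) / (2.0 * a); }
  if(root < min_t) { return false; }
  t = root;
  n = (point_on_ray(ray, t) - center).normalized(); // unit normal at the hit point
  return true;
}
```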
Pro Tip: Mac OS X users can quickly preview the output images using
./raycasting && qlmanage -p {id,depth,normal}.ppm
Pressing the left and right arrow keys will toggle through the results.
Pro Tip: After you're confident that your program is working correctly, you can dramatically improve the performance simply by enabling compiler optimization:
mkdir build-release
cd build-release
cmake ../ -DCMAKE_BUILD_TYPE=Release
make