In this project, I extended my pathtracer to support more complicated materials, including glass and mirrors, and added environment lighting and depth of field effects. Whereas the first project was about setting up the core pathtracing infrastructure, this project was about modeling complex surfaces to produce truly striking results.
The main ideas in this project are conceptually simple; the main implementation challenge was in efficient sampling techniques. This project was at times frustrating because the calculations are complex, making debugging difficult. The results, however, are stunning and highly rewarding. In the process, I gained insight into the fascinating field of material modeling and a greater appreciation for the capabilities of pathtracing.
The main idea here is to implement the proper BSDF (bidirectional scattering distribution function). The BSDF is a generalization of the BRDF, as it additionally includes a BTDF, or bidirectional transmittance distribution function. We used a diffuse (constant) BSDF in the last project; now we support more complex ones.
We support mirror surfaces by implementing `MirrorBSDF::sample_f()`, which returns a sample of the surface BSDF. Given the outgoing direction `wo`, we get `wi` by simply reflecting `wo` about the surface normal. This is trivial in the object coordinate space in which BSDF calculations occur: since the $z$-axis lies along the normal, we need only negate the $x$ and $y$ coordinates. The `pdf` is just $1$ since this is a delta BSDF; there is no randomness involved. We return the reflectance as the sampled value, scaled by the reciprocal of $\cos\theta_i$ so as to cancel out the cosine term used by the parent function `at_least_one_bounce_radiance()`; a perfect mirror does not produce Lambertian falloff.
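As a minimal sketch (assuming the starter code's `Vector3D` and `Spectrum` types and its `abs_cos_theta()` helper), the reflection and sampling logic might look like:

```cpp
// Reflect wo about the object-space normal (0, 0, 1): negate x and y.
void BSDF::reflect(const Vector3D& wo, Vector3D* wi) {
  *wi = Vector3D(-wo.x, -wo.y, wo.z);
}

Spectrum MirrorBSDF::sample_f(const Vector3D& wo, Vector3D* wi, float* pdf) {
  reflect(wo, wi);
  *pdf = 1.0f;  // delta BSDF: the sampled direction is deterministic
  // Divide by cos(theta_i) to cancel the integrator's cosine factor.
  return reflectance / abs_cos_theta(*wi);
}
```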
To implement materials like glass, we first need to support refraction. We implement this in `BSDF::refract()`, which calculates $\omega_i$ given $\omega_o$ in the object reference frame. Using Snell's law, it can be shown that the correct equations are

$$\omega_{i,x} = -\eta\,\omega_{o,x}, \qquad \omega_{i,y} = -\eta\,\omega_{o,y}, \qquad \omega_{i,z} = \mp\sqrt{1 - \eta^2\left(1 - \omega_{o,z}^2\right)},$$

where $\eta$ is the ratio of the old index of refraction to the new index of refraction; we compute $\eta$ in the code according to whether the ray is entering or exiting the non-air material. The $\mp$ indicates that we take the opposite sign of $\omega_{o,z}$. Note that if the radicand is negative, no refraction occurs; this case is called total internal reflection.
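A sketch of the refraction helper under this convention, where the boolean return value signals whether refraction is possible and `ior` is the material's index of refraction:

```cpp
bool BSDF::refract(const Vector3D& wo, Vector3D* wi, float ior) {
  // Entering the material if wo.z > 0 (old medium is air), exiting otherwise.
  bool entering = wo.z > 0;
  float eta = entering ? 1.0f / ior : ior;  // old IOR over new IOR
  float radicand = 1.0f - eta * eta * (1.0f - wo.z * wo.z);
  if (radicand < 0.0f) return false;        // total internal reflection
  *wi = Vector3D(-eta * wo.x, -eta * wo.y,
                 (entering ? -1.0f : 1.0f) * sqrt(radicand));
  return true;
}
```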
Using refraction, we can now implement `GlassBSDF::sample_f()`. Both reflection and refraction occur at an intersection with a glass material, so we use a simple trick called Schlick's approximation to give a coin-flip probability of either reflecting or refracting. By Schlick's approximation, the ratio of reflection energy to refraction energy is

$$R(\theta) = R_0 + (1 - R_0)(1 - \cos\theta)^5, \qquad R_0 = \left(\frac{n_1 - n_2}{n_1 + n_2}\right)^2,$$

where $\theta$ is the angle between the incident light and the normal, and $n_1$ and $n_2$ are the indices of refraction of the two media. We take one of the indices to be $1$ since one of the media is always air.
We do the same as a mirror BSDF if total internal reflection occurs or if `coinflip(R)` succeeds; otherwise we refract `wo` to `*wi`. In both cases, we weight the output Spectrum and the `pdf` by the appropriate Schlick coefficient, $R$ or $1 - R$.
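Putting the pieces together, a sketch of the glass BSDF, assuming the starter's `coin_flip()` helper and a `transmittance` member; the $\eta^2$ factor accounting for the change in radiance across the boundary follows the standard convention and is an assumption here:

```cpp
Spectrum GlassBSDF::sample_f(const Vector3D& wo, Vector3D* wi, float* pdf) {
  if (!refract(wo, wi, ior)) {        // total internal reflection: mirror case
    reflect(wo, wi);
    *pdf = 1.0f;
    return reflectance / abs_cos_theta(*wi);
  }
  // Schlick's approximation for the reflection coefficient R.
  float R0 = (1.0f - ior) / (1.0f + ior);
  R0 *= R0;
  float R = R0 + (1.0f - R0) * pow(1.0f - abs_cos_theta(wo), 5.0f);
  if (coin_flip(R)) {                 // reflect with probability R
    reflect(wo, wi);
    *pdf = R;
    return R * reflectance / abs_cos_theta(*wi);
  } else {                            // refract with probability 1 - R
    float eta = (wo.z > 0) ? 1.0f / ior : ior;
    *pdf = 1.0f - R;
    return (1.0f - R) * transmittance * eta * eta / abs_cos_theta(*wi);
  }
}
```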
This part of the project was fairly straightforward and I didn’t run into any trouble.
Below we show a sequence of renderings of the scene `CBspheres.dae` with different `max_ray_depth` values. The left sphere has a mirror surface and the right sphere has a glass surface.
Beginning with `max_ray_depth=0`, only the light source is visible, since all other light involves at least one bounce.
With `max_ray_depth=1` (left), we see the effect of direct lighting. The spheres are dark because both reflection and refraction involve more than one bounce. With `max_ray_depth=2` (right), the mirror ball appears because two bounces are needed for light to strike the ball and then the box.
With `max_ray_depth=3` (left), the effect of refraction in the glass ball appears, since rays can enter the sphere, exit the sphere, and strike the box. The mirror reflection is one bounce behind, so to speak, so the glass ball's reflection in the mirror is still dark. With `max_ray_depth=4` (right), we see the caustic from light passing through the glass sphere, since four bounces are needed (floor, into the sphere, out of the sphere, and the box).
With `max_ray_depth=5` (left), light can reflect off of the mirror ball and enter the glass sphere from the side, producing a small patch of light on the blue wall. With `max_ray_depth=100` (right), the multi-bounce effects have converged, and the only noticeable difference is a slightly brighter scene due to the accumulation of indirect light.
Here is a high-res render of `CBlucy.png` that took about 3 hours.
We now add support for certain microfacet models. A microfacet material is one whose surface is composed of many very tiny facets, each of which is a perfect specular reflector. In this project, we support isotropic rough conductors that reflect (not refract).
We implement the microfacet BRDF in `MicrofacetBSDF::f()`. This function is given by

$$f = \frac{F(\omega_i)\,G(\omega_o, \omega_i)\,D(h)}{4\,(n \cdot \omega_o)\,(n \cdot \omega_i)}.$$

Here $n$ is the surface normal, which is $(0, 0, 1)$ in object coordinates, $h$ is the half-vector between $\omega_o$ and $\omega_i$, and $G$ is the shadowing-masking term. $D$ is the normal distribution function, which we implement in `MicrofacetBSDF::D()`; we use the Beckmann distribution, given by

$$D(h) = \frac{e^{-\tan^2\theta_h/\alpha^2}}{\pi\alpha^2\cos^4\theta_h},$$

where $\alpha$ is the roughness of the macro surface (the smaller, the smoother) and $\theta_h$ is the angle between $h$ and $n$.
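A sketch of these two functions, with `alpha` the roughness member and `G` the shadowing-masking term, which we take as given:

```cpp
// Beckmann NDF: how concentrated the microfacet normals are around h.
double MicrofacetBSDF::D(const Vector3D& h) {
  double cos_th = h.z;  // cos(theta_h), since n = (0, 0, 1)
  double tan2_th = (1.0 - cos_th * cos_th) / (cos_th * cos_th);
  return exp(-tan2_th / (alpha * alpha)) /
         (PI * alpha * alpha * pow(cos_th, 4.0));
}

Spectrum MicrofacetBSDF::f(const Vector3D& wo, const Vector3D& wi) {
  Vector3D n(0.0, 0.0, 1.0);
  // Both directions must be on the front side of the macro surface.
  if (dot(n, wo) <= 0.0 || dot(n, wi) <= 0.0) return Spectrum();
  Vector3D h = (wo + wi).unit();  // half-vector
  return F(wi) * G(wo, wi) * D(h) / (4.0 * dot(n, wo) * dot(n, wi));
}
```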
Lastly, $F$ is the Fresnel term, implemented in `MicrofacetBSDF::F()`. It returns a Spectrum according to the approximation that each channel has a fixed wavelength:

$$R_s = \frac{\eta^2 + k^2 - 2\eta\cos\theta_i + \cos^2\theta_i}{\eta^2 + k^2 + 2\eta\cos\theta_i + \cos^2\theta_i}, \qquad
R_p = \frac{(\eta^2 + k^2)\cos^2\theta_i - 2\eta\cos\theta_i + 1}{(\eta^2 + k^2)\cos^2\theta_i + 2\eta\cos\theta_i + 1}, \qquad
F = \frac{R_s + R_p}{2},$$

where $\eta$ and $k$ represent the indices of refraction for conducting materials; both are Spectrums encoding values at 614 nm, 549 nm, and 466 nm. Sites such as refractiveindex.info have collections of $\eta$ and $k$ values for various materials at a given wavelength.
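A sketch that evaluates the approximation per channel, assuming `Spectrum` exposes `r`, `g`, `b` fields and that `eta` and `k` are the Spectrum members described above:

```cpp
Spectrum MicrofacetBSDF::F(const Vector3D& wi) {
  double c = wi.z;  // cos(theta_i) in object space
  auto fresnel = [c](double eta_c, double k_c) {
    double e2k2 = eta_c * eta_c + k_c * k_c;
    double Rs = (e2k2 - 2.0 * eta_c * c + c * c) /
                (e2k2 + 2.0 * eta_c * c + c * c);
    double Rp = (e2k2 * c * c - 2.0 * eta_c * c + 1.0) /
                (e2k2 * c * c + 2.0 * eta_c * c + 1.0);
    return (Rs + Rp) / 2.0;
  };
  // One evaluation per channel: 614 nm, 549 nm, 466 nm.
  return Spectrum(fresnel(eta.r, k.r), fresnel(eta.g, k.g), fresnel(eta.b, k.b));
}
```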
Cosine hemisphere sampling works well for a diffuse BRDF, but is highly inefficient for the Beckmann NDF due to its strongly preferential directions. Accordingly, we implement a form of importance sampling in `MicrofacetBSDF::sample_f()`.
The implementation idea is to sample a $\theta_h$ and $\phi_h$ from some pdfs $p_\theta$ and $p_\phi$, respectively; combine them to form the sampled microfacet normal $h$; and reflect $\omega_o$ about $h$ to get the sampled $\omega_i$. The pdfs we use resemble the Beckmann NDF:

$$p_\theta(\theta_h) = \frac{2\sin\theta_h}{\alpha^2\cos^3\theta_h}\,e^{-\tan^2\theta_h/\alpha^2}, \qquad p_\phi(\phi_h) = \frac{1}{2\pi}.$$

We use the inversion method to sample them. We then recover $h$ by converting the $\theta_h$ and $\phi_h$ into Cartesian coordinates. $\omega_i$ is the reflection of $\omega_o$ about $h$:

$$\omega_i = 2(\omega_o \cdot h)\,h - \omega_o.$$

The `pdf` value is also calculated appropriately (the derivation is tricky and omitted here). We then return the BRDF evaluated at the sampled values.
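A sketch of the full sampling routine, assuming a uniform 2D `sampler` member that returns samples in $[0, 1)^2$:

```cpp
Spectrum MicrofacetBSDF::sample_f(const Vector3D& wo, Vector3D* wi, float* pdf) {
  Vector2D r = sampler.get_sample();  // two uniform random numbers
  // Inversion method applied to p_theta and p_phi.
  double theta_h = atan(sqrt(-alpha * alpha * log(1.0 - r.x)));
  double phi_h = 2.0 * PI * r.y;
  Vector3D h(sin(theta_h) * cos(phi_h),
             sin(theta_h) * sin(phi_h),
             cos(theta_h));
  *wi = 2.0 * dot(wo, h) * h - wo;    // reflect wo about h
  if (wi->z <= 0.0) { *pdf = 0.0f; return Spectrum(); }  // below the surface
  // pdf of h in solid-angle measure, then the change of variables h -> wi.
  double p_theta = 2.0 * sin(theta_h) *
                   exp(-pow(tan(theta_h), 2.0) / (alpha * alpha)) /
                   (alpha * alpha * pow(cos(theta_h), 3.0));
  double p_phi = 1.0 / (2.0 * PI);
  double p_h = p_theta * p_phi / sin(theta_h);
  *pdf = p_h / (4.0 * dot(*wi, h));
  return f(wo, *wi);
}
```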
I had some numerical issues in this part of the project, which I resolved by ensuring that all terms in the denominator of the BSDF are positive.
To illustrate the improvement gained from importance sampling, we compare two images of the scene `CBbunny_microfacet_cu.dae` rendered using cosine hemisphere sampling (left) and importance sampling (right). Both images were created with 64 samples per pixel, 1 sample per light, and 7 bounces.
Cosine hemisphere sampling gives a noisier result because it spreads samples without regard to the BRDF, while importance sampling concentrates samples where light is expected to have greater influence, thus producing better results for a given number of samples.
To illustrate the effect of $\alpha$ on the microfacet surface, we compare four renderings of `CBdragon_microfacet_au.dae` with different $\alpha$ values, ordered from left to right, top to bottom. All images were rendered with 1024 samples per pixel, 4 samples per light, and 7 bounces. The $\alpha$ value controls the roughness of the material, as given by the spread of the distribution of normals; the lower the value, the smoother the surface, resulting in a glossier appearance.
We can modify the $\eta$ and $k$ values to create any desired conductor material. Below, we attempt to model a bunny made out of mercury. The corresponding values (as given by the website above) are

| Property | 614 nm | 549 nm | 466 nm |
|---|---|---|---|
| $\eta$ | 2.0733 | 1.1680 | 1.4612 |
| $k$ | 5.3383 | 4.0572 | 4.5190 |

which results in the render below.
In this part, we introduce environment light, which is an infinitely-far-away light source supplying radiance from all directions. The incoming light from each direction is defined in a texture map `.exr` file, which we parametrize by $\phi$ and $\theta$.
We do this to simulate more realistic lighting conditions, like outdoor settings on Earth, where the entire sky is lit up by an extremely distant light source.
We first implement `EnvironmentLight::sample_dir()` to sample a Spectrum from the environment map in a particular direction. We simply use the provided helper functions to convert the direction vector into $\phi$ and $\theta$ coordinates, then into $x$ and $y$ coordinates that can be used to index into the `envMap`. We then perform bilinear interpolation to sample a Spectrum at that point.
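A sketch of the lookup; the helper names (`dir_to_theta_phi`, `theta_phi_to_xy`) are assumed from the starter code, and edge clamping is omitted for brevity:

```cpp
Spectrum EnvironmentLight::sample_dir(const Ray& r) const {
  Vector2D theta_phi = dir_to_theta_phi(r.d);  // direction -> spherical
  Vector2D xy = theta_phi_to_xy(theta_phi);    // spherical -> map coords
  return bilerp(xy.x, xy.y);
}

// Bilinearly interpolate the four texels surrounding a continuous (x, y).
Spectrum EnvironmentLight::bilerp(double x, double y) const {
  size_t w = envMap->w;
  size_t x0 = (size_t)x, y0 = (size_t)y;
  double dx = x - x0, dy = y - y0;
  const std::vector<Spectrum>& m = envMap->data;
  Spectrum top = (1 - dx) * m[y0 * w + x0] + dx * m[y0 * w + x0 + 1];
  Spectrum bot = (1 - dx) * m[(y0 + 1) * w + x0] + dx * m[(y0 + 1) * w + x0 + 1];
  return (1 - dy) * top + dy * bot;
}
```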
Next, we implement `EnvironmentLight::sample_L()` with uniform sampling, by getting a random direction on the sphere (i.e., a pdf of $\frac{1}{4\pi}$) and converting it into environment map coordinates to be sampled with bilinear interpolation.
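A sketch of the uniform version, assuming the starter's uniform sphere sampler and `INF_D` constant:

```cpp
Spectrum EnvironmentLight::sample_L(const Vector3D& p, Vector3D* wi,
                                    float* distToLight, float* pdf) const {
  *wi = sampler_uniform_sphere.get_sample();  // uniform direction on sphere
  *distToLight = INF_D;                       // environment is at infinity
  *pdf = 1.0 / (4.0 * PI);                    // uniform sphere pdf
  return sample_dir(Ray(p, *wi));             // look up radiance along wi
}
```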
We can improve the efficiency of light sampling by exploiting the fact that most of the energy is concentrated in the directions toward bright light sources.
The implementation idea is to convert the environment map into a 2D probability density function over $(x, y)$. We represent this function as a 2D array that is piecewise constant over the rectangle $[0, w] \times [0, h]$. Without derivation, the pdf we use is

$$p(x, y) \propto L(x, y)\,\sin\theta(y),$$

where $(x, y)$ represents an index into the environment map, $L(x, y)$ is the radiance stored there, and the $\sin\theta$ factor accounts for the mapping from the sphere to the rectangle. Our first implementation step is to enforce

$$\sum_{x,\,y} p(x, y) = 1$$

by properly normalizing by the sum in `EnvironmentLight::init()`.
The sampling strategy is as follows: sample a row of the environment map using the marginal distribution $p(y)$; sample a pixel within that row using the conditional distribution $p(x \mid y)$; then convert that to a direction vector and return the appropriate radiance and pdf values.
We compute the cumulative marginal distribution in a double for loop as

$$F(y) = \sum_{y' \le y} \sum_{x} p(x, y')$$

and store it in `marginal_y`. Then we compute the cumulative conditional distribution function in a double for loop as

$$F(x \mid y) = \sum_{x' \le x} \frac{p(x', y)}{p(y)}, \qquad p(y) = \sum_{x} p(x, y),$$

and store it in row-major form in `conds_y`.
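A sketch of this precomputation in `EnvironmentLight::init()`, assuming the Spectrum `illum()` luminance helper and the `envMap->data` buffer layout:

```cpp
size_t w = envMap->w, h = envMap->h;
std::vector<double> pdf(w * h);
double sum = 0.0;
for (size_t y = 0; y < h; ++y)
  for (size_t x = 0; x < w; ++x) {
    // Weight luminance by sin(theta) for the sphere-to-rectangle mapping.
    pdf[y * w + x] = envMap->data[y * w + x].illum() * sin(PI * (y + 0.5) / h);
    sum += pdf[y * w + x];
  }
for (double& p : pdf) p /= sum;  // normalize so the pdf sums to 1

marginal_y.resize(h);
double cum = 0.0;
for (size_t y = 0; y < h; ++y) {       // cumulative marginal F(y)
  for (size_t x = 0; x < w; ++x) cum += pdf[y * w + x];
  marginal_y[y] = cum;
}

conds_y.resize(w * h);
for (size_t y = 0; y < h; ++y) {       // cumulative conditional F(x | y)
  double p_y = marginal_y[y] - (y > 0 ? marginal_y[y - 1] : 0.0);
  double ccum = 0.0;
  for (size_t x = 0; x < w; ++x) {
    ccum += pdf[y * w + x] / p_y;
    conds_y[y * w + x] = ccum;
  }
}
```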
We now update `EnvironmentLight::sample_L()` to perform importance sampling as follows: first, generate a uniform 2D sample on $[0, 1)^2$. We get a row by finding the first `j` for which `marginal_y[j]` is greater than the first sample, and we get a column by finding the first `i` for which `conds_y[envMap->w * row + i]` is greater than the second sample.
We convert the row and column into a direction $(\theta, \phi)$ and then into $x$ and $y$ indices with the same helper functions as earlier, and return the environment map value at $(x, y)$ using bilinear interpolation. We also make sure to compute the `pdf` (the derivation is tricky and omitted here).
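A sketch of the updated routine, using `std::upper_bound` for both searches; the pixel-area-to-solid-angle pdf factor $\frac{wh}{2\pi^2\sin\theta}$ is the standard conversion whose derivation we omitted above:

```cpp
Spectrum EnvironmentLight::sample_L(const Vector3D& p, Vector3D* wi,
                                    float* distToLight, float* pdf) const {
  size_t w = envMap->w, h = envMap->h;
  Vector2D r = sampler_uniform2d.get_sample();  // uniform on [0, 1)^2
  // Row: first j with marginal_y[j] > r.x.
  size_t y = std::upper_bound(marginal_y.begin(), marginal_y.end(), r.x) -
             marginal_y.begin();
  // Column: first i in row y with conds_y[y * w + i] > r.y.
  size_t x = std::upper_bound(conds_y.begin() + y * w,
                              conds_y.begin() + (y + 1) * w, r.y) -
             (conds_y.begin() + y * w);
  Vector2D theta_phi = xy_to_theta_phi(Vector2D(x + 0.5, y + 0.5));
  *wi = theta_phi_to_dir(theta_phi);
  *distToLight = INF_D;
  // Recover p(x, y) = p(x | y) * p(y) from the stored cumulative tables.
  double p_y = marginal_y[y] - (y > 0 ? marginal_y[y - 1] : 0.0);
  double p_x_given_y =
      conds_y[y * w + x] - (x > 0 ? conds_y[y * w + x - 1] : 0.0);
  *pdf = p_x_given_y * p_y * w * h / (2.0 * PI * PI * sin(theta_phi.x));
  return bilerp(x + 0.5, y + 0.5);
}
```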
For results, we will work with the following `.exr` file (shown here converted to jpg):
The probability debug (visualization of the computed marginal and conditional distributions) is shown below.
I had a lot of trouble with debugging this part of the project, mostly due to an incorrect probability calculation. I had to take great care to convert the cumulative marginal distribution into the form expected by the cumulative conditional distribution equation.
The improvement importance sampling offers is illustrated below; both images are of `bunny_unlit.dae` under the environment map shown above, with 4 samples per pixel and 64 samples per light.
The noise levels are noticeably lower in the importance-sampled render on the right, most notably in the bunny's shadow.
Doing the same comparison for `bunny_microfacet_cu_unlit.dae`, we have
Again, we see fewer speckles in the image produced by importance sampling.
Here is an additional rendering of `bunny_microfacet_cu_unlit.dae` in another environment map, just for fun.
Up until now, our pathtracer used a pinhole camera: all light entered through a single virtual point, so everything was in focus. Real cameras, however, have finite apertures, so objects are in focus only if they lie in a plane `focalDistance` from the lens. We use a thin-lens approximation, i.e., we ignore any effects due to the thickness of the lens, which is assumed to be negligible.
We use the following model to guide the implementation:
In this task, we generate the blue ray in the figure above. The implementation overview: generate the red ray using the same technique as in the previous project; uniformly sample the disk representing the thin lens; calculate the focus point using the `focalDistance` value and the red ray direction; generate a ray from the sampled disk point to the focus point; then properly perform camera-to-world conversion on this ray.
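A sketch of the generation step, assuming the camera's `hFov`/`vFov` (in degrees), `lensRadius`, `focalDistance`, position `pos`, camera-to-world rotation `c2w`, and clip distances `nClip`/`fClip`:

```cpp
Ray Camera::generate_ray_for_thin_lens(double x, double y,
                                       double rndR, double rndTheta) const {
  // The "red" pinhole ray through image point (x, y), in camera space
  // (the camera looks down the -z axis).
  Vector3D dir((2.0 * x - 1.0) * tan(0.5 * hFov * PI / 180.0),
               (2.0 * y - 1.0) * tan(0.5 * vFov * PI / 180.0), -1.0);

  // Uniformly sample the lens disk (sqrt gives uniform area density).
  double rad = lensRadius * sqrt(rndR);
  Vector3D pLens(rad * cos(rndTheta), rad * sin(rndTheta), 0.0);

  // The red ray hits the plane of focus z = -focalDistance at pFocus.
  Vector3D pFocus = dir * focalDistance;  // valid because dir.z == -1

  // The "blue" ray: from the lens sample point through the focus point.
  Vector3D d = (c2w * (pFocus - pLens)).unit();
  Ray ray(pos + c2w * pLens, d);
  ray.min_t = nClip;
  ray.max_t = fClip;
  return ray;
}
```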
The following renders show `CBdragon.dae` at various depths for a fixed `lensRadius=0.01`. From right to left, top to bottom, we show `focalDistance=0.7, 0.9, 1.2, 2.5`.
As we increase the `focalDistance`, the focus plane shifts from the tip of the dragon's nose at `0.7` all the way to the back corner at `2.5`.
The following renders show `CBlucy.dae` at various apertures for a fixed `focalDistance=1.0`. From right to left, top to bottom, we show `lensRadius=0.0, 0.2, 0.4, 0.6`.
The entire scene is in focus at `lensRadius=0.0`, as this is just the pinhole camera case. As we increase the `lensRadius`, areas not at the `focalDistance` of `1.0` become increasingly out of focus.
Some additional renders of `CBspheres_microfacet_al_ag.dae` at different `focalDistance` values are shown for fun.