Subsurface Scattering

Implemented by Trevor Nesbitt, Becca Milman, and Beth Marrone

Abstract

The ray tracing algorithms we have implemented so far in this class rely on the assumption that when light hits a material, it either passes through or bounces off at the same place it entered. We used BRDFs to describe materials' reflectance: a BRDF takes in an incoming vector and an outgoing vector and returns the amount of incoming radiance that will reflect. This does not take subsurface scattering into consideration. Subsurface scattering occurs in translucent materials when light penetrates the surface but does not pass straight through. When light enters a translucent material, it scatters off the interior of the material multiple times before exiting at a different location than where it entered. To account for this, materials need to be modelled with a BSSRDF. This is a challenge to implement because in addition to integrating over all incoming and outgoing directions at one point, we also need to integrate over the surrounding surface area from which light could have scattered to that point. To make this much faster, we chose to implement two approximations of subsurface scattering: the diffusion approximation and single scattering. To model the diffusion effects, we sampled surface points and stored them in an octree data structure. We augmented our results by separating the color channels and adding a glossy BRDF.

Technical Approach

Overview

We initially planned to follow the Jensen 2001 paper to implement subsurface scattering, but the paper only gives a mathematical derivation of the approximations and vague suggestions on how to implement them. While Jensen gave us a solid theoretical base to start from, Physically Based Rendering by Pharr and Humphreys offered a much clearer approach. We used the pseudocode from its subsurface scattering chapter as a guide, but adapted it to fit our ray tracer. Some of the coefficients for error tolerance and scattering depth were too large or too small for our collada files, so we adjusted them as necessary.

Sample Surface Points

The first step in the subsurface scattering pipeline is to collect several hundred to several thousand sample points on the surface. These points are essential to our diffusion approximation for multiple scattering. When we approximate multiple scattering for a single ray we’re tracing, we consider the irradiance from all of these points, theorizing that with subsurface scattering any of the light coming in from these points could be bounced into the ray we’re tracing.

We sampled surface points according to the guidelines in the Physically Based Rendering textbook. To collect the points, we shoot several thousand rays into the scene, bounce them repeatedly around the scene, and collect their intersection points after a certain ray depth. When a ray intersects a part of the scene and the object we intersected has a BSSRDF, we record the hit point's position, its irradiance, and the area it represents. To calculate the area a point represents and to keep points evenly spaced, we used 3D Poisson spheres (the 3D analogue of 2D Poisson disks). To calculate the irradiance at the point, we perform our standard direct lighting calculations. Finally, we bounce the ray into the scene again using a hemisphere sampler to pick the ray's new direction. After repeating this process to collect several thousand candidate points, we add a candidate to the final list of points only if it does not fall inside the Poisson sphere of any point already added.
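A minimal sketch of the rejection step under these assumptions (SurfacePoint and the brute-force loop are illustrative, not our exact code):

```cpp
#include <cmath>
#include <vector>

struct Vector3D { double x, y, z; };

// A candidate surface point with the data we store for the diffusion pass.
struct SurfacePoint {
  Vector3D position;   // hit point on the surface
  Vector3D irradiance; // RGB irradiance from the direct lighting estimate
  double area;         // surface area this point represents
};

static double dist(const Vector3D& a, const Vector3D& b) {
  double dx = a.x - b.x, dy = a.y - b.y, dz = a.z - b.z;
  return std::sqrt(dx * dx + dy * dy + dz * dz);
}

// Keep a candidate only if it lies outside the Poisson sphere of every point
// accepted so far; each accepted point then represents roughly pi * r^2 of
// surface area.
std::vector<SurfacePoint> poissonFilter(const std::vector<SurfacePoint>& candidates,
                                        double radius) {
  std::vector<SurfacePoint> accepted;
  for (SurfacePoint p : candidates) {
    bool tooClose = false;
    for (const SurfacePoint& q : accepted) {
      if (dist(p.position, q.position) < radius) { tooClose = true; break; }
    }
    if (tooClose) continue;
    p.area = M_PI * radius * radius;  // area represented by this sample
    accepted.push_back(p);
  }
  return accepted;
}
```

In practice a spatial grid would speed up this O(n^2) rejection test, but the brute-force version shows the idea.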

We made a visualizer for our sampled points. Press ‘P’ to activate this visualizer and display the acquired points on the surface of the scene. Below you can see this visualizer in action, showing three sets of points collected with different Poisson sphere radii.

Poisson radius is 0.05, 423 points
Poisson radius is 0.03, 1183 points
Poisson radius is 0.01, 10,000 points

Octree

After all of the surface points are sampled, they are each inserted into an octree data structure. An octree node has 8 children, and each internal (non-leaf) node stores a luminance-weighted average of its children's central points and irradiance values. It also stores the sum of its children's areas (which, at the leaves, are determined by the areas of the Poisson spheres). Upon insertion, we navigate through the octree by comparing the to-be-inserted point to the midpoint of each internal node. By comparing the x/y/z components, we get 2^3 = 8 possible children. If we reach a leaf that is not full, we insert the point into that leaf. If, on the other hand, the leaf is full, we convert the leaf to an internal node and re-insert each of the points originally associated with that leaf. When we convert the leaf to an internal node, the BBox of each child node has an x-value (and likewise for y and z, without loss of generality) ranging from the parent's midpoint's x-value to either the min or max x-value of the parent's BBox. This logic is implemented in the octreeChildBound function.
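A sketch of the child selection and child-bound logic, modeled on the version in Physically Based Rendering (the types here are simplified stand-ins for our own):

```cpp
struct Point3 { double x, y, z; };
struct BBox { Point3 min, max; };

// Pack the three midpoint comparisons into a 3-bit child index:
// bit 2 = (x > mid.x), bit 1 = (y > mid.y), bit 0 = (z > mid.z).
int childIndex(const Point3& p, const Point3& mid) {
  return ((p.x > mid.x) ? 4 : 0) | ((p.y > mid.y) ? 2 : 0) | ((p.z > mid.z) ? 1 : 0);
}

// Each child's bounds run from the parent's midpoint to the parent's min or
// max corner along each axis, depending on the bits of the child index.
BBox octreeChildBound(int child, const BBox& parent, const Point3& mid) {
  BBox b;
  b.min.x = (child & 4) ? mid.x : parent.min.x;
  b.max.x = (child & 4) ? parent.max.x : mid.x;
  b.min.y = (child & 2) ? mid.y : parent.min.y;
  b.max.y = (child & 2) ? parent.max.y : mid.y;
  b.min.z = (child & 1) ? mid.z : parent.min.z;
  b.max.z = (child & 1) ? parent.max.z : mid.z;
  return b;
}
```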

After all points have been added to the octree, we recursively compute the irradiances, areas, and central points of the internal nodes via a luminance-weighted average of their children's values. Upon query by the Mo function, we can adjust how deep into the octree we go by specifying an error tolerance. The deeper into the octree we go, the more specific our diffusion approximations are, meaning the irradiance of fewer points influences the radiance at a query point. In terms of the Mo function described below, going deeper into the octree means we use a Pj (the stored midpoint of an internal node) that is likely closer to the query point, an Ej (irradiance) that averages fewer points, and a smaller Aj (area). Going deeper gives us less error, but also increases runtime.
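A sketch of the bottom-up initialization, assuming a simplified OctreeNode whose leaf values were already set at insertion time (Point3 is the struct from the previous sketch; the Rec. 709 luminance weights are our assumption):

```cpp
#include <array>

struct RGB { double r = 0, g = 0, b = 0; };

// Rec. 709 luminance, used as the averaging weight.
static double luminance(const RGB& c) {
  return 0.2126 * c.r + 0.7152 * c.g + 0.0722 * c.b;
}

struct OctreeNode {
  std::array<OctreeNode*, 8> children{};  // null where a child is absent
  Point3 p{};        // representative (central) point
  RGB irradiance;    // average irradiance
  double area = 0.0; // total area represented by this subtree

  // Recursively fill in internal nodes: areas sum, while the central point
  // and irradiance are averaged with each child's luminance as the weight.
  void initHierarchy() {
    double wSum = 0.0, aSum = 0.0;
    RGB e; Point3 pAvg{};
    bool internal = false;
    for (OctreeNode* c : children) {
      if (!c) continue;
      internal = true;
      c->initHierarchy();
      double w = luminance(c->irradiance);
      wSum += w;
      aSum += c->area;
      e.r += w * c->irradiance.r;
      e.g += w * c->irradiance.g;
      e.b += w * c->irradiance.b;
      pAvg.x += w * c->p.x; pAvg.y += w * c->p.y; pAvg.z += w * c->p.z;
    }
    if (!internal) return;  // leaf: keep the values set at insertion
    area = aSum;
    if (wSum > 0.0) {
      irradiance.r = e.r / wSum; irradiance.g = e.g / wSum; irradiance.b = e.b / wSum;
      p.x = pAvg.x / wSum; p.y = pAvg.y / wSum; p.z = pAvg.z / wSum;
    }
  }
};
```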

High max error
Medium max error
Low max error

Mo

To calculate the outgoing radiance at an intersection point, the ray tracer's direct lighting function calls the BSSRDF, which uses its material properties to calculate outgoing irradiance. This irradiance is the radiant exitance, Mo, weighted by a Fresnel term, assumed to be constant in a homogeneous material. Mo is essentially all the contributing light coming from the area surrounding a point. Approximating what should be an integral over the surface as a sum over the sampled points:

Mo(po) = Σj Rd(po, pj) · Ej · Aj

Rd is the diffuse reflectance from a single illuminated point, pj, to the point we are calculating for, po. Since each point pj really stands in for an area determined by our Poisson spheres, each point represents a small area Aj with an average irradiance Ej. If there are many small areas, calculating each corresponding Rd can be slow. If the material does not vary much over a larger area, we can instead use a larger area (a shallower octree node) to calculate Rd. The min_dist and max_error parameters are closely related, so they should be adjusted in tandem when looking for good results.
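A sketch of this hierarchical evaluation, reusing the OctreeNode, RGB, and Point3 types from the octree sketches above; for brevity Rd returns a single weight per query (the dipole Rd itself is sketched in the diffusion section below):

```cpp
#include <algorithm>

// Squared distance between two points.
static double dist2(const Point3& a, const Point3& b) {
  double dx = a.x - b.x, dy = a.y - b.y, dz = a.z - b.z;
  return dx * dx + dy * dy + dz * dz;
}

// Dipole diffuse reflectance; forward-declared here, defined later.
double Rd(double d2);

// Mo(po) ~= sum over j of Rd(po, pj) * Ej * Aj. We descend into a node's
// children only when its aggregate values are too coarse for the tolerance.
RGB Mo(const OctreeNode* node, const Point3& po, double maxError) {
  double d2 = dist2(node->p, po);
  // Heuristic: the solid angle the node's summed area subtends at po.
  double solidAngle = node->area / std::max(d2, 1e-12);
  bool hasChildren = false;
  for (const OctreeNode* c : node->children)
    if (c) { hasChildren = true; break; }
  if (hasChildren && solidAngle > maxError) {
    RGB sum;
    for (const OctreeNode* c : node->children) {
      if (!c) continue;
      RGB m = Mo(c, po, maxError);
      sum.r += m.r; sum.g += m.g; sum.b += m.b;
    }
    return sum;
  }
  double rd = Rd(d2);
  RGB out;
  out.r = node->irradiance.r * rd * node->area;
  out.g = node->irradiance.g * rd * node->area;
  out.b = node->irradiance.b * rd * node->area;
  return out;
}
```

Note that passing maxError = 0 forces every query down to the leaves, which is how we typically ran it.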

Visualizer

The octree visualizer works similarly to the BVH visualizer. You can enter visualizer mode by pressing ‘O’. Keys 1-8 descend into children one through eight by pushing onto a stack, and the up arrow returns to the parent by popping the stack. The points are colored according to which child they belong to. Here is a gif showing the octree visualizer:

BSSRDF - Coefficients

Each subsurface scattering material has a set of coefficients that describe how much it scatters light, how much it absorbs light, what color it is, and in what direction it scatters light. The scattering coefficient, sigma_s, has three values, one for each of RGB. Similarly, the absorption coefficient, sigma_a, has RGB components. The coefficient g represents the average direction of scattering: if g is 0, the scattering is isotropic; if g > 0, the light scatters forward, away from the light; and if g < 0, light scatters backward. We used the diffuse color to create a BRDF used for indirect lighting. We gathered this data from a couple of sources: the Jensen 2001 paper and Autodesk’s “Optical Properties” article.
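A sketch of how these coefficients might be grouped, reusing the RGB type from the sketches above (field names are illustrative; the marble numbers are the reduced coefficients reported in the Jensen 2001 paper, shown here as reference values rather than our exact settings):

```cpp
// Per-material subsurface scattering coefficients parsed from the .dae file.
struct BSSRDFCoeffs {
  RGB sigma_s;   // (reduced) scattering coefficient per RGB channel, mm^-1
  RGB sigma_a;   // absorption coefficient per RGB channel, mm^-1
  double g;      // mean scattering cosine: 0 isotropic, >0 forward, <0 backward
  double eta;    // index of refraction
  RGB diffuse;   // diffuse color for the indirect-lighting BRDF
};

// Example: marble, using values reported in the Jensen 2001 paper.
const BSSRDFCoeffs kMarble = {
  /*sigma_s=*/{2.19, 2.62, 3.00},
  /*sigma_a=*/{0.0021, 0.0041, 0.0071},
  /*g=*/0.0,  // not measured in the paper; isotropic placeholder
  /*eta=*/1.5,
  /*diffuse=*/{0.83, 0.79, 0.75},
};
```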

BSSRDF - Diffusion Approximation

The diffusion approximation component of our BSSRDF model approximates the output radiance for highly scattering materials. It is a way of simulating the limit of the radiance as the number of scattering events approaches infinity. To calculate the diffusion approximation, we use a dipole model where one simulated point light is directly above the hit point of the incoming ray and one is directly beneath the hit point, within the material (which is modeled as semi-infinite, meaning its z values range from -inf to 0 in object coordinates). This model gives us the isotropic scattering properties we desire. The height and depth of the simulated lights are functions of the material's absorption and scattering coefficients as well as its index of refraction.

The dipole model is used in our Rd calculation described above. An image of this model is shown below. Note the simulated point light above, and the "real" point light below.

The hit point mentioned in the previous paragraph is passed in as the midpoint of some octree node, and Rd uses the dipole model to output a weight representing how much the irradiance at the hit point contributes to the radiance at a query point, as a function of the distance between the hit point and the query point. When the simulated point lights mentioned above are close to the surface, the output weight of Rd is higher, but it also has a faster exponential falloff, meaning that the influence of the hit point's irradiance on the query point's radiance falls off very quickly.

The output of the Rd function gets multiplied by the irradiance at the hit-point, the area covered by the hit-point (see octree section), and a Fresnel-transmittance factor (a function of the outgoing direction) to get the final output radiance for the query point.

In our implementation, we actually calculate the diffusion approximation three times for each query point, once for each R/G/B channel.
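Here is a single-channel sketch of the dipole Rd following the formulas in the Jensen 2001 paper (our actual code organizes the constants differently); the RGB version simply calls this once per channel with that channel's coefficients:

```cpp
#include <cmath>

// Dipole diffuse reflectance Rd for one color channel, following Jensen 2001.
// sigma_s_prime is the reduced scattering coefficient, sigma_a the absorption
// coefficient, eta the index of refraction, and d2 the squared distance
// between the illuminated point and the query point.
double Rd(double d2, double sigma_s_prime, double sigma_a, double eta) {
  double sigma_t_prime = sigma_s_prime + sigma_a;             // reduced extinction
  double alpha_prime = sigma_s_prime / sigma_t_prime;         // reduced albedo
  double sigma_tr = std::sqrt(3.0 * sigma_a * sigma_t_prime); // effective transport

  // Diffuse Fresnel reflectance approximation and the boundary term A.
  double Fdr = -1.440 / (eta * eta) + 0.710 / eta + 0.668 + 0.0636 * eta;
  double A = (1.0 + Fdr) / (1.0 - Fdr);

  double zr = 1.0 / sigma_t_prime;         // depth of the real point light
  double zv = zr * (1.0 + 4.0 * A / 3.0);  // height of the virtual point light

  double dr = std::sqrt(d2 + zr * zr);  // distance to the real light
  double dv = std::sqrt(d2 + zv * zv);  // distance to the virtual light

  return alpha_prime / (4.0 * M_PI) *
         (zr * (sigma_tr * dr + 1.0) * std::exp(-sigma_tr * dr) / (dr * dr * dr) +
          zv * (sigma_tr * dv + 1.0) * std::exp(-sigma_tr * dv) / (dv * dv * dv));
}
```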

Below is our Jade dragon rendered with only the diffusion approximation. Notice how the dragon appears to have light glowing beneath its surface!

Single Scattering

Single scattering occurs when a ray enters a material, bounces exactly once, and then exits the material. A 2D representation of this process is shown below.

We implemented single scattering according to the 2001 Jensen paper. The process is very similar to our implementation for direct lighting, but with a few twists.

When the ray intersects the scene, we find a new point below the surface by tracing the ray past its hit point for some depth determined by the properties of the material. Then, from this point, we simply sample the lights in the scene. We trace a ray directly from our subsurface point to the light, instead of refracting it through the material. This way of sampling circumvents the difficulty of trying to find incoming rays that intersect the ray we're tracing under the surface. Finally, we multiply the typical incoming irradiance from the direct lighting process by the single scattering coefficient shown below.

For the phase function p we used the Henyey-Greenstein phase function, sketched below. We found that single scattering doesn't create as strong an effect as the diffusion approximation (as we would expect) and essentially acts as a replacement for our previous direct lighting, but with a brighter result, as if light were just below the surface. The images after the sketch show single scattering at work.
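A minimal sketch of the phase function, where cosTheta is the cosine of the angle between the incoming and outgoing directions (sign conventions for that angle vary between renderers):

```cpp
#include <cmath>

// Henyey-Greenstein phase function. g in (-1, 1) is the mean scattering
// cosine: g = 0 is isotropic, g > 0 favors forward scattering, and g < 0
// favors backward scattering. Integrates to 1 over the sphere of directions.
double henyeyGreenstein(double cosTheta, double g) {
  double denom = 1.0 + g * g - 2.0 * g * cosTheta;
  return (1.0 - g * g) / (4.0 * M_PI * denom * std::sqrt(denom));
}
```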

Peter with skin BSSRDF and only single scattering
Exaggerated single scattering.

Other Implementation Details

We added functionality to our collada file parser to take in the necessary material coefficients (g, sigma_s, sigma_a, diffuse color, and index of refraction). Additionally, all other parts of the pipeline had to be updated to carry this information: starting from the .dae file, then to the parser, then to material_info, poly_mesh, mesh, primitive, and finally intersection. We reused many of the .dae files from previous projects, created and edited some in Blender, and downloaded a few from online model repositories.

To increase the quality of the final images, we also included combination BSSRDFs. Because we only use the BSSRDF for direct lighting, the BRDF could still be mirror, glass, or diffuse. To get a specular jade dragon, we combined the BSSRDF of jade with a very low-reflectance mirror BRDF for slight specular highlights.

Difficulties Encountered

We spent many hours rendering images that didn't look the way we wanted them to, because there are so many variables that must be adjusted to get an image to look right. In sampling the surface points, we had to adjust the number of points and the size of the Poisson spheres in tandem; doing this incorrectly could result in renders with strange brightness or noise. We could adjust the maximum radiance contribution from the single scattering component (if left uncapped, it was often too bright) as well as the approximate depth at which a scattering event occurred. We could also adjust the octree error tolerance, though we typically left this at 0, forcing each call of the Mo function to go to the leaves of the octree. Finally, we could adjust the material properties, including the absorption and scattering coefficients as well as the scattering direction.

Listing all of the bugs we encountered could take up an entire writeup, but we will describe one particularly nasty one. Each of the points in the octree was stored via pointer, and when we inserted the points into the octree, we did so with a loop like the one shown below. That loop actually inserted the address of the temporary loop variable p rather than the address of the point stored in the vector. This took a long time to find, and made us realize we need to improve our C++ skills.
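The buggy pattern and its one-character fix (Point, vec, and octree stand in for our actual names):

```cpp
// Buggy: p is a per-iteration copy, so &p is the address of a temporary that
// is overwritten (and finally destroyed) on every pass through the loop.
for (Point p : vec) { octree->insert(&p); }

// Fixed: iterate by reference, so &p is the address of the element that
// actually lives inside the vector.
for (Point &p : vec) { octree->insert(&p); }
```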

Lessons Learned

We were successful because we communicated often. Any time there was a bug or a clarification or a question, we immediately asked each other and quickly came to an answer. We met up to plan as well as to do important git merges and particularly difficult debugging.

Writing code is a small part of making a project work; debugging is the bulk of the time and effort.

Understanding the papers first made implementing the project much more straightforward. We were all on the same page about how to interpret the problems at hand.

Results

Jade dragon with forward scattering and a glossy BRDF
Marble Max Planck
Lambertian Max Planck nose. Notice how the part of the nose not hit by light is completely black.
Subsurface scattering Max Planck nose.
Full forward scattering, one channel
Medium forward scattering, one channel
Evenly distributed scattering, one channel

References

Jensen, Henrik Wann, and Juan Buhler. "A rapid hierarchical rendering technique for translucent materials." ACM Transactions on Graphics (TOG) 21.3 (2002): 576-581.

Jensen, Henrik Wann, et al. "A practical model for subsurface light transport." Proceedings of the 28th Annual Conference on Computer Graphics and Interactive Techniques. ACM, 2001.

Pharr, Matt, and Greg Humphreys. "Chapter 16." Physically Based Rendering: From Theory to Implementation. Amsterdam: Elsevier/Morgan Kaufmann, 2004. Print.

McGuire M. “Meshes.” Morgan McGuire's Computer Graphics Archive. 26 Aug 2011. Web. 28 April 2016. http://graphics.cs.williams.edu/data

"Optical Properties." mental ray Shader Reference. Autodesk, 2012. Web. 26 April 2016. http://docs.autodesk.com/MENTALRAY/2012/ENU/mental%20ray%203.9%20Help//files/shaders/node38.html

Contributions

Beth

Implementation for sampling surface points, surface point visualizer, single scattering, Mo() function in the octree, website design, debugging, rendering.

Trevor

Diffuse approximation for 1ch & RGB, Octree creation (point insertion & initializing hierarchy via averaging child values), rendering & much debugging of course.

Becca

Collada file editing and processing, octree visualizer, rendering and debugging.