Fast and easy volume rendering algorithm


I recently wrote a small ShaderToy that does simple volumetric rendering, and then decided to write a post explaining how it works. The interactive ShaderToy itself can be viewed here. If you are reading on a phone or laptop, I recommend watching this quick version instead. The code snippets in this post should help you understand the shader at a high level, but they don't include all the details; if you want to dig deeper, I recommend reading the full ShaderToy code.

My ShaderToy had three main goals:

  1. Real-time execution
  2. Simplicity
  3. Physical correctness (... or something like that)

I will start with this scene as a blank slate. I won't go into the details of its implementation, because it is not very interesting, but here is briefly where we start:

  1. Ray tracing of opaque objects. All the objects are primitives with simple ray intersections (1 plane and 3 spheres).
  2. Lighting is computed with Phong shading, using three point lights with a custom light falloff. No shadow rays are needed, because the only thing we light is the plane. (A rough sketch of this lighting follows below.)
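
For context, here is roughly what that lighting looks like. This is a hedged sketch rather than the exact ShaderToy code: the attenuation exponent and specular power are placeholder values of my own.

// Sketch of the opaque-pass lighting; the constants are illustrative, not the ShaderToy's.
float GetLightAttenuation(float distanceToLight)
{
    // Custom falloff instead of plain 1/d^2 so each light's influence stays local.
    return 1.0 / pow(distanceToLight, 1.65);
}

vec3 PhongContribution(vec3 lightColor, vec3 lightDirection, vec3 normal,
                       vec3 viewDirection, vec3 albedo)
{
    float diffuse = max(dot(normal, lightDirection), 0.0);
    vec3 reflected = reflect(-lightDirection, normal);
    float specular = pow(max(dot(reflected, viewDirection), 0.0), 32.0);
    return albedo * lightColor * diffuse + lightColor * specular;
}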

Here's what it looks like:

ShaderToy screenshot

We will render the volume in a separate pass that is blended on top of the opaque scene; this is similar to how all real-time rendering engines handle opaque and translucent surfaces separately.

Part 1: modeling the volume


But before we can start volumetric rendering, we need a volume to render! To model the volume, I decided to use signed distance functions (SDFs). Why signed distance functions? Because I'm not an artist, and they let you create very organic shapes in just a few lines of code. I won't go over signed distance functions in detail, because Inigo Quilez has already explained them wonderfully. If you are curious, there is a great list of different signed distance functions and modifiers. And here is another article about raymarching SDFs.

Let's start simple and add a single sphere.
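
The signed distance to a sphere is just the distance to the sphere's center minus its radius; here is a minimal definition matching the sdSphere signature used later in this post:

// Signed distance to a sphere: negative inside, zero on the surface, positive outside.
float sdSphere(vec3 pos, vec3 center, float radius)
{
    return length(pos - center) - radius;
}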

ShaderToy screenshot

Now let's add another sphere and use a smooth union to merge the two distance functions. I took this code directly from Inigo's page, but for clarity I will paste it here:

// Taken from https://iquilezles.org/www/articles/distfunctions/distfunctions.htm
// Blends two SDF values; k controls the width of the smooth transition region.
float sdSmoothUnion( float d1, float d2, float k ) 
{
    float h = clamp( 0.5 + 0.5*(d2-d1)/k, 0.0, 1.0 );
    return mix( d2, d1, h ) - k*h*(1.0-h); 
}

Smooth union is an extremely powerful tool, because you can get something quite interesting just by combining a few simple shapes. Here's what my set of smoothly blended spheres looks like:

ShaderToy screenshot

So we have something teardrop-shaped, but we want something that looks more like a cloud than a drop. A great feature of SDFs is how easy it is to distort the surface by simply adding a bit of noise to the SDF value. So let's layer some fractal Brownian motion (fBM) noise on top, using the position to index the noise function. Inigo Quilez also covers this topic in a great article on fBM noise. Here's what the shape looks like with fBM noise applied:

ShaderToy screenshot

Nice! Thanks to the fBM noise, the shape suddenly looks much more interesting!
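
If you haven't seen fBM before, the idea is to sum several octaves of noise, doubling the frequency and halving the amplitude each octave. Here is a minimal sketch of an fbm_4 built on hash-based value noise; the actual noise basis in the ShaderToy may differ:

// Cheap hash-based value noise; any smooth 3D noise works here.
float hash(vec3 p)
{
    p = fract(p * 0.3183099 + 0.1);
    p *= 17.0;
    return fract(p.x * p.y * p.z * (p.x + p.y + p.z));
}

float noise(vec3 x)
{
    vec3 i = floor(x);
    vec3 f = fract(x);
    f = f * f * (3.0 - 2.0 * f); // smoothstep interpolation
    return mix(mix(mix(hash(i + vec3(0,0,0)), hash(i + vec3(1,0,0)), f.x),
                   mix(hash(i + vec3(0,1,0)), hash(i + vec3(1,1,0)), f.x), f.y),
               mix(mix(hash(i + vec3(0,0,1)), hash(i + vec3(1,0,1)), f.x),
                   mix(hash(i + vec3(0,1,1)), hash(i + vec3(1,1,1)), f.x), f.y), f.z);
}

// Four octaves of fractal Brownian motion, remapped to roughly [-1, 1].
float fbm_4(vec3 p)
{
    float amplitude = 0.5;
    float value = 0.0;
    for (int i = 0; i < 4; i++)
    {
        value += amplitude * (2.0 * noise(p) - 1.0);
        p *= 2.0;
        amplitude *= 0.5;
    }
    return value;
}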

Now we need to create the illusion that the volume interacts with the ground plane. To do this, I added a signed distance plane slightly below the ground plane and reused the smooth union with a very aggressive blend value (the k parameter). After that, we get this picture:

ShaderToy screenshot
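
For reference, the plane SDF used in that smooth union is about as simple as SDFs get; for a plane at y = 0, the distance is just the height:

// Signed distance to the plane y = 0: negative below the plane, positive above.
float sdPlane(vec3 pos)
{
    return pos.y;
}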

The final touch is to animate the xz coordinates used to index the fBM noise over time, so that the volume looks like swirling fog. In motion, it looks very good!

ShaderToy screenshot

Great, we have something cloud-like! And the SDF evaluation code is quite compact:

float QueryVolumetricDistanceField( in vec3 pos)
{    
    // Animate the noise lookup over time so the fog appears to swirl.
    vec3 fbmCoord = (pos + 2.0 * vec3(iTime, 0.0, iTime)) / 1.5f;
    // Three bobbing spheres, smoothly blended together...
    float sdfValue = sdSphere(pos, vec3(-8.0, 2.0 + 20.0 * sin(iTime), -1), 5.6);
    sdfValue = sdSmoothUnion(sdfValue, sdSphere(pos, vec3(8.0, 8.0 + 12.0 * cos(iTime), 3), 5.6), 3.0f);
    // ...distorted by fBM noise...
    sdfValue = sdSmoothUnion(sdfValue, sdSphere(pos, vec3(5.0 * sin(iTime), 3.0, 0), 8.0), 3.0) + 7.0 * fbm_4(fbmCoord / 3.2);
    // ...and merged into a plane below the ground with a very aggressive blend value.
    sdfValue = sdSmoothUnion(sdfValue, sdPlane(pos + vec3(0, 0.4, 0)), 22.0);
    return sdfValue;
}

But this is still rendered as an opaque object. We want beautiful, glorious fog!

How do we render this as a volume rather than an opaque surface? First, let's talk about the physics we are simulating. A volume is a huge number of particles in some region of space. And when I say "huge", I mean "HUGE". So many that simulating each of these particles individually is intractable today, even for offline rendering. Good examples are fire, fog, and clouds. Strictly speaking, everything is a volume, but for performance's sake it's easier to close our eyes to that and pretend it isn't. We represent an accumulation of these particles as density values, usually stored in some kind of 3D grid (or something more complex, such as OpenVDB).

When light travels through a volume, a couple of things can happen when it hits a particle: it can scatter and head off in another direction, or part of it can be absorbed by the particle. To meet the real-time requirement, we will perform what is called single scattering. This means we assume light scatters only once: at the moment it hits a particle and bounces toward the camera. So we won't be able to simulate multiple-scattering effects, such as fog in which distant objects look blurrier, but for our purposes this is quite enough. Here's what single scattering looks like when raymarching:

ShaderToy screenshot

The pseudocode for it looks something like this:

for n steps along the camera ray:
   Calculate what % of your ray hit particles (i.e. was absorbed) and needs lighting
   for m lights:
      for k steps towards the light:
         Calculate % of light that was absorbed in this step
      Calculate lighting based on how much light is visible
Blend results on top of the opaque pass based on the % of your ray that made it through the volume

That is, we are dealing with O(n * m * k) complexity, so the GPU will have to work hard.

Calculating absorption


First, let's handle the absorption of light in the volume along the camera ray (i.e., we won't raymarch toward the light sources yet). This takes two steps:

  1. Raymarch through the volume
  2. Calculate the absorption/lighting at each step

To calculate how much light is absorbed at each point, we apply the Beer–Lambert law (also known as the Bouguer–Lambert–Beer law), which describes the attenuation of light passing through a material. The calculation is surprisingly simple:

// Fraction of light transmitted after traveling distanceTraveled through the medium.
float BeerLambert(float absorptionCoefficient, float distanceTraveled)
{
    return exp(-absorptionCoefficient * distanceTraveled);
}

The absorption coefficient is a material parameter. A clear volume, such as water, has a low value; something murkier, such as milk, has a higher one.
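
In the snippets below, this shows up as the ABSORPTION_COEFFICIENT constant. The exact value is a matter of taste; something like the following is a plausible starting point, not necessarily the value in the ShaderToy:

// Material parameter: higher values make the fog soak up more light per unit distance.
const float ABSORPTION_COEFFICIENT = 0.5;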

To raymarch the volume, we simply take fixed-size steps along the ray and accumulate absorption at each step. You might wonder why we take fixed steps instead of something faster, such as sphere tracing; but once you remember that the density within the volume is non-uniform, the reason becomes clear: we need to sample the interior of the volume, not just find its surface. Below is the raymarching and absorption-accumulation code. Some variables are defined outside this snippet, so check the full implementation in the ShaderToy.

float opaqueVisiblity = 1.0f;
const float marchSize = 0.6f;
for(int i = 0; i < MAX_VOLUME_MARCH_STEPS; i++) {
    volumeDepth += marchSize;
    // Stop once we pass the opaque geometry behind the volume.
    if(volumeDepth > opaqueDepth) break;
    
    vec3 position = rayOrigin + volumeDepth*rayDirection;
    bool isInVolume = QueryVolumetricDistanceField(position) < 0.0f;
    if(isInVolume) {
        float previousOpaqueVisiblity = opaqueVisiblity;
        // Attenuate the visibility of the opaque scene by this step's absorption.
        opaqueVisiblity *= BeerLambert(ABSORPTION_COEFFICIENT, marchSize);
        // The light absorbed in this step is what gets scattered toward the camera.
        float absorptionFromMarch = previousOpaqueVisiblity - opaqueVisiblity;
        for(int lightIndex = 0; lightIndex < NUM_LIGHTS; lightIndex++) {
            float lightDistance = length(GetLight(lightIndex).Position - position);
            vec3 lightColor = GetLight(lightIndex).LightColor * GetLightAttenuation(lightDistance);  
            volumetricColor += absorptionFromMarch * volumeAlbedo * lightColor;
        }
        volumetricColor += absorptionFromMarch * volumeAlbedo * GetAmbientLight();
    }
}
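
Once the march finishes, the leftover opaqueVisiblity tells us how much of the opaque scene still shows through, so compositing the two passes is a one-liner. A sketch, where opaqueColor stands in for the result of the opaque pass:

// Composite the volume over the opaque pass: whatever fraction of the ray
// survived the volume lets the opaque color through.
vec3 finalColor = opaqueColor * opaqueVisiblity + volumetricColor;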

And here is what we get with this:

ShaderToy screenshot

Looks like candy floss! For some effects this might already be enough. But one thing is missing: self-shadowing. Right now light reaches every part of the volume equally, which is not physically correct: depending on how much volume sits between the point being shaded and the light source, a different amount of light should arrive.

Self-shadowing


We have already done the hardest part: we just need to do the same thing we did to compute absorption along the camera ray, but along the ray to each light. The code for computing how much light reaches each point is essentially a repeat of the camera-ray code, but duplicating it is easier than trying to work around the lack of recursion in shader code. Here is what it looks like:

float GetLightVisiblity(in vec3 rayOrigin, in vec3 rayDirection, in float maxT, in int maxSteps, in float marchSize) {
    float t = 0.0f;
    float lightVisiblity = 1.0f;
    for(int i = 0; i < maxSteps; i++) {                       
        t += marchSize;
        if(t > maxT) break;

        vec3 position = rayOrigin + t*rayDirection;
        // Every step spent inside the volume attenuates the light further.
        if(QueryVolumetricDistanceField(position) < 0.0) {
            lightVisiblity *= BeerLambert(ABSORPTION_COEFFICIENT, marchSize);
        }
    }
    return lightVisiblity;
}
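
Plugged into the per-light loop of the camera-ray march, the light color simply gets scaled by this visibility. A sketch, with an illustrative step count:

// Inside the per-light loop: shadow the light by marching toward it.
vec3 lightDirection = (GetLight(lightIndex).Position - position) / lightDistance;
float lightVisiblity = GetLightVisiblity(position, lightDirection, lightDistance, 16, marchSize);
vec3 lightColor = lightVisiblity * GetLight(lightIndex).LightColor * GetLightAttenuation(lightDistance);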

Adding self-shadowing gives us the following:

ShaderToy screenshot

Softening the edges


At this point, I already quite like our volume. I showed it to James Sharp, the talented VFX lead at The Coalition. He immediately noticed that the edges of the volume looked too sharp. And that's absolutely right: clouds and similar volumes constantly diffuse into the surrounding space, so their edges blend into the empty space around them, which should produce very soft edges. James suggested a great idea: fall off the density based on how close we are to the edge of the volume. And since we are working with signed distance functions, this is very easy to implement! So let's add a function that returns the fog density at any point in the volume:

float GetFogDensity(vec3 position)
{   
    float sdfDistance = QueryVolumetricDistanceField(position);
    const float maxSDFMultiplier = 1.0;
    bool insideSDF = sdfDistance < 0.0;
    // Ramp the density up from 0 at the surface to full strength deeper inside.
    float sdfMultiplier = insideSDF ? min(abs(sdfDistance), maxSDFMultiplier) : 0.0;
    return sdfMultiplier;
}

And then we simply fold it into the absorption calculation:

opaqueVisiblity *= BeerLambert(ABSORPTION_COEFFICIENT * GetFogDensity(position), marchSize);

And here is what it looks like:

ShaderToy screenshot

Density function


Now that we have a density function, it's easy to add a little noise to the volume to give it extra detail and fluffiness. Here I simply reuse the fBM function we used to shape the volume.

float GetFogDensity(vec3 position)
{   
    float sdfDistance = QueryVolumetricDistanceField(position);
    const float maxSDFMultiplier = 1.0;
    bool insideSDF = sdfDistance < 0.0;
    float sdfMultiplier = insideSDF ? min(abs(sdfDistance), maxSDFMultiplier) : 0.0;
    // Modulate the density with fBM noise for extra detail.
    return sdfMultiplier * abs(fbm_4(position / 6.0) + 0.5);
}

And so we got the following:

ShaderToy screenshot

Shadowing opaque objects


The volume is already looking pretty good! But a little light still leaks through it. Here you can see green light bleeding through where the volume should clearly be absorbing it:

ShaderToy screenshot

This happens because the opaque objects are rendered before the volume, so they don't account for the shading the volume casts on them. Fortunately, the fix is simple: we already have a GetLightVisiblity function that computes exactly this shading, so we just need to call it when lighting opaque objects as well. We get the following:

ShaderToy screenshot
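
Concretely, the change to the opaque lighting amounts to something like this (a sketch; surfacePosition and the step parameters are my assumptions, not the exact ShaderToy code):

// When lighting an opaque surface, attenuate each light by the volume's
// transmittance along the shadow ray toward that light.
float lightVisiblity = GetLightVisiblity(surfacePosition, lightDirection, lightDistance, 16, 0.65);
lightColor *= lightVisiblity;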

Besides giving us beautiful colored shadows, this helps ground the volume in the scene. Also, thanks to the volume's soft edges, we get soft shadows, even though, strictly speaking, we are working with point lights. And that's it! Much more could be done here, but I feel this reaches the visual quality I was after while keeping the example relatively simple.

Optimizations


To wrap up, I will briefly list some possible optimizations:

  1. Before raymarching toward a light source, check whether a meaningful amount of that light can even reach the point in question, given its attenuation. In my implementation, I look at the brightness of the light multiplied by the material albedo and make sure the value is large enough to be worth the raymarch (see the sketch after this list).
  2. Terminate the camera-ray raymarch early once the accumulated visibility is close to zero, since further steps contribute almost nothing to the image.
  3. Tune the raymarching step size and step count. The raymarch toward the lights can usually get away with larger, coarser steps than the camera-ray raymarch, because errors in the self-shadowing are much less noticeable than banding along the primary ray.
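
For the first point, the check can be as simple as the following sketch (the cutoff name and value are illustrative, of my own invention):

// Skip the expensive shadow march when this light can barely contribute anyway.
vec3 potentialLight = volumeAlbedo * GetLight(lightIndex).LightColor * GetLightAttenuation(lightDistance);
const float ABSORPTION_CUTOFF = 0.01; // illustrative threshold
if(max(potentialLight.x, max(potentialLight.y, potentialLight.z)) > ABSORPTION_CUTOFF)
{
    // ... raymarch toward the light as in GetLightVisiblity ...
}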


That's all! Personally, I was surprised by how little code (about 500 lines) it takes to create something reasonably physically based. Thank you for reading, I hope it was interesting.

And one more note: here's a fun tweak, in which I added light emission based on the SDF distance to create an explosion effect. After all, you can never have too many explosions.
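
A hedged sketch of what that tweak might look like: treat points inside the volume as emissive, with the SDF distance driving the heat. The colors and falloff here are made up for illustration:

// Fake emission for the explosion look: deeper inside the volume glows hotter.
vec3 GetEmission(float sdfDistance)
{
    float heat = smoothstep(0.0, 4.0, -sdfDistance); // 0 at the surface, 1 deep inside
    return mix(vec3(1.0, 0.3, 0.05), vec3(1.0, 0.9, 0.5), heat) * heat * 5.0;
}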

ShaderToy screenshot
