Soft particles in WebGL and OpenGL ES

Particle systems are one of the easiest ways to make a 3D scene visually richer. One of our Android applications, 3D Buddha Live Wallpaper, has a fairly simple scene that we wanted to enrich with a bit more detail. When we thought about how to add variety to the image, the most obvious way to fill the empty space around the Buddha statue was to add puffs of smoke or fog. By using soft particles we achieved quite a good result. In this article we describe in detail how to implement soft particles in pure WebGL / OpenGL ES, without third-party libraries or ready-made 3D engines.

The difference between the old and the updated application even exceeded our expectations. Simple smoke particles significantly improved the scene, making it richer and fuller. The puffs of smoke are additional details that "catch the eye", and they also make the transition between the main objects and the background smoother:



Soft particles


So what are soft particles? You may remember that in many older games (Quake 3 and CS 1.6) smoke and explosion effects had very visible, hard flat edges where the particles intersected other geometry. Modern games no longer have such artifacts because they use soft particles, that is, particles with blurred, "soft" edges near adjacent objects.

Rendering


What is required to make particles soft? First, we need information about the depth of the scene in order to find where particles intersect other objects and soften them there. Then we determine the intersection of the scene and particle geometry by comparing the scene depth and the particle depth in the fragment shader: the intersection lies where the depths coincide. Below we go through the rendering process step by step. The Android OpenGL ES and WebGL implementations of the scene are the same; the main difference is only in resource loading. The WebGL implementation is open source and you can get it here - https://github.com/keaukraine/webgl-buddha .

Depth map rendering


To render the scene depth map, we first need to create textures for the depth and color maps and attach them to a dedicated FBO. This is done in the initOffscreen() method in the file BuddhaRenderer.js.
The actual rendering of scene objects into the depth map is performed in the drawDepthObjects() method, which draws the Buddha statue and the floor plane. There is one trick here to improve performance. Since at this stage of rendering we do not need color information, only depth, writing to the color buffer is disabled by calling gl.colorMask(false, false, false, false) and then enabled again by calling gl.colorMask(true, true, true, true). The colorMask() function can toggle writing of the red, green, blue and alpha components separately, so to completely disable writes to the color buffer we set all components to false, and then set them all back to true for rendering to the screen. The result of rendering to the depth texture can be seen by uncommenting the call to drawTestDepth() in the drawScene() method. Since the depth map texture has only one channel, it is displayed in shades of red, while the blue and green channels are zero. A visualization of the depth map of our scene looks like this:
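For reference, an off-screen depth pass of this kind can be set up roughly as follows. This is a minimal sketch assuming a WebGL 2 context; the function names are illustrative, and the actual BuddhaRenderer.js also attaches a color texture, which is omitted here for brevity:

```js
// Minimal sketch of an off-screen depth pass (WebGL 2 constants). Illustrative only.
function createDepthFramebuffer(gl, width, height) {
    // Texture that will receive the scene depth.
    const depthTexture = gl.createTexture();
    gl.bindTexture(gl.TEXTURE_2D, depthTexture);
    gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_MIN_FILTER, gl.NEAREST);
    gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_MAG_FILTER, gl.NEAREST);
    gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_WRAP_S, gl.CLAMP_TO_EDGE);
    gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_WRAP_T, gl.CLAMP_TO_EDGE);
    gl.texImage2D(gl.TEXTURE_2D, 0, gl.DEPTH_COMPONENT16, width, height, 0,
                  gl.DEPTH_COMPONENT, gl.UNSIGNED_SHORT, null);

    // FBO with the depth texture attached.
    const framebuffer = gl.createFramebuffer();
    gl.bindFramebuffer(gl.FRAMEBUFFER, framebuffer);
    gl.framebufferTexture2D(gl.FRAMEBUFFER, gl.DEPTH_ATTACHMENT,
                            gl.TEXTURE_2D, depthTexture, 0);
    gl.bindFramebuffer(gl.FRAMEBUFFER, null);

    return { framebuffer, depthTexture };
}

// Depth-only rendering: color writes are disabled around the draw calls.
function drawDepthPass(gl, framebuffer, drawSceneObjects) {
    gl.bindFramebuffer(gl.FRAMEBUFFER, framebuffer);
    gl.clear(gl.DEPTH_BUFFER_BIT);
    gl.colorMask(false, false, false, false); // we only need depth here
    drawSceneObjects();                       // e.g. the Buddha statue and the floor
    gl.colorMask(true, true, true, true);     // restore color writes for the main pass
    gl.bindFramebuffer(gl.FRAMEBUFFER, null);
}
```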


Particle rendering


The shader code used to render the particles is located in the file SoftDiffuseColoredShader.js. Let's see how it works.

The main idea of finding the intersection of the particle and scene geometry is to compare the depth of the current fragment with the value stored in the depth map.

The first step in comparing the depths is to linearize them, since the original depth values are non-linear. This is done by the calc_depth() function. The technique is well described here - https://community.khronos.org/t/soft-blending-do-it-yourself-solved/58190 . For the linearization we need the uniform variable vec2 uCameraRange, whose x and y components contain the distances to the near and far clipping planes of the camera. The shader then calculates the linear difference between the particle depth and the scene depth; this value is stored in the variable a. However, if we applied this value to the fragment color directly, the particles would be too transparent: the color would fade linearly with the distance to any geometry behind the particle, and fade rather quickly. This is how the visualization of the linear depth difference looks (you can uncomment the corresponding line of code in the shader to see it):


To make the particles more transparent only near the intersection boundary (where a is close to 0), we apply the GLSL smoothstep() function to the variable a, with the transition going from 0 to the value of the uTransitionSize uniform, which determines the width of the transparent transition. If you want to learn more about how smoothstep() works and see a couple of interesting examples of its use, we recommend this article - http://www.fundza.com/rman_shaders/smoothstep/ . The final coefficient is stored in the variable b. For the color blending mode used in our scene, it is enough to multiply the particle color sampled from the texture by this coefficient; other particle implementations may need to change only the alpha channel, for example. If you uncomment the line of code in the shader that visualizes this coefficient, the result looks like this:
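Putting the last two steps together, a minimal sketch of such a soft-particle fragment shader could look like this, kept as a GLSL string the way the project's shader classes store their sources. The uniform names sDepth, uCameraRange and uTransitionSize follow the article; uInvViewportSize is an assumed helper uniform, and the actual SoftDiffuseColoredShader.js may be organized differently:

```js
// Sketch of a soft-particle fragment shader (GLSL ES 1.0) stored as a JS string.
const softParticleFragmentSketch = `
precision highp float;

uniform sampler2D sTexture;      // particle texture
uniform sampler2D sDepth;        // scene depth map from the first pass
uniform vec2 uCameraRange;       // x = near clipping plane, y = far clipping plane
uniform vec2 uInvViewportSize;   // 1.0 / viewport size, to build screen-space UVs
uniform float uTransitionSize;   // width of the soft transparent transition
varying vec2 vTextureCoord;

// Convert a non-linear depth-buffer value into a linear depth between near and far.
float calc_depth(in float z) {
    return uCameraRange.x * uCameraRange.y / mix(uCameraRange.y, uCameraRange.x, z);
}

void main() {
    vec2 screenUV = gl_FragCoord.xy * uInvViewportSize;

    float sceneDepth    = calc_depth(texture2D(sDepth, screenUV).r);
    float particleDepth = calc_depth(gl_FragCoord.z);

    // Linear difference between the scene and the particle depths.
    float a = sceneDepth - particleDepth;
    // gl_FragColor = vec4(a, a, a, 1.0); return; // visualize the raw difference

    // Fade the particle only near the intersection boundary (a close to 0).
    float b = smoothstep(0.0, uTransitionSize, a);
    // gl_FragColor = vec4(b, b, b, 1.0); return; // visualize the softening coefficient

    // For the blending mode used in this scene, multiplying the texture color is enough.
    gl_FragColor = texture2D(sTexture, vTextureCoord) * b;
}
`;
```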


A comparison of different values of the particle "softness" coefficient:

Sprite rendering optimization


In this scene, the small specks of dust are drawn as point sprites (GL_POINTS primitives). This mode is convenient because it automatically creates a complete square particle geometry with texture coordinates. However, point sprites also have drawbacks that make them unsuitable for the large puffs of fog. First, they are clipped against the camera frustum planes by the coordinates of the sprite's center, which makes them pop out of view abruptly at the edges of the screen. Second, the square shape of the sprite is not optimal for the fragment shader, which gets invoked even where the particle texture is empty, causing noticeable overdraw. That is why for the fog we use an optimized particle shape, with the corners cropped where the texture is completely transparent:
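For context, this is roughly how a particle can be drawn as a point sprite in WebGL: the vertex shader sets gl_PointSize and the fragment shader uses the built-in gl_PointCoord as texture coordinates. This is a generic sketch, not the exact shaders from the repository:

```js
// Generic point sprite shaders (GLSL ES 1.0), kept as JS strings. Illustrative only.
const pointSpriteVertexShader = `
attribute vec4 aPosition;
uniform mat4 uMVPMatrix;
uniform float uPointSize;
void main() {
    gl_Position = uMVPMatrix * aPosition;
    gl_PointSize = uPointSize; // size in pixels; a square is generated automatically
}
`;

const pointSpriteFragmentShader = `
precision mediump float;
uniform sampler2D sTexture;
void main() {
    // gl_PointCoord provides ready-made texture coordinates across the generated square.
    gl_FragColor = texture2D(sTexture, gl_PointCoord);
}
`;

// One vertex per particle:
// gl.drawArrays(gl.POINTS, 0, particleCount);
```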


Such particle models are commonly called billboards. Of course, they cannot be rendered as GL_POINTS primitives, so each particle is drawn separately. This does not result in too many drawElements calls, since there are only 18 fog particles in the whole scene. They need to be placed at arbitrary coordinates, scaled, and rotated in such a way that they always face the camera regardless of its position. This is achieved by modifying the matrix as described in this answer on StackOverflow. The BuddhaRenderer.js file contains a calculateMVPMatrixForSprite() method that creates MVP matrices for billboard models. It performs the usual translation and scaling transformations and then uses resetMatrixRotations() to reset the rotation component of the model-view matrix before it is multiplied by the projection matrix. The resulting matrix transforms the model so that it always faces the camera exactly.
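As an illustration of this matrix trick, here is a minimal sketch. The function name mirrors resetMatrixRotations() from the article, the matrix is a column-major Float32Array(16) as used by WebGL, and multiply() stands for whatever 4x4 matrix multiplication helper the renderer uses (for example, mat4.multiply from gl-matrix); this is not the exact code from BuddhaRenderer.js:

```js
// Strip the rotation from a model-view matrix so the sprite always faces the camera,
// while keeping its translation and (uniform) scale.
function resetMatrixRotations(m) {
    // Recover the uniform scale from the length of the first basis column.
    const scale = Math.sqrt(m[0] * m[0] + m[1] * m[1] + m[2] * m[2]);

    // Replace the upper-left 3x3 block (rotation * scale) with scale * identity.
    m[0] = scale; m[1] = 0.0;   m[2] = 0.0;
    m[4] = 0.0;   m[5] = scale; m[6] = 0.0;
    m[8] = 0.0;   m[9] = 0.0;   m[10] = scale;
    // Elements 12..14 (translation) are left untouched, so the sprite keeps its position.
}

// Usage sketch:
// const mv = multiply(viewMatrix, modelMatrix); // translation + scale (+ any rotation)
// resetMatrixRotations(mv);                     // drop the rotation component
// const mvp = multiply(projectionMatrix, mv);   // the quad now always faces the camera
```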

Result


The final result can be seen live here.

You can study the source code on GitHub and reuse it in your projects.
