Implementing a watercolor effect in games


Introduction


When we started discussing our new game tint. in January 2019, we immediately decided that the watercolor effect would be its most important element. Inspired by this Bulgari advertisement, we realized that the watercolor rendering had to match the high quality of the other assets we planned to create. We found an interesting paper by researchers from Adobe (1). The watercolor technique described in it looked wonderful, and thanks to its vector (rather than pixel) nature it could run even on weak mobile devices. Our implementation is based on that study; we changed and/or simplified parts of it because our performance requirements were different. tint. is a game, so in addition to the painting itself we had to render the entire 3D environment and execute the game logic within the same frame. We also wanted the simulation to run in real time, so that the player immediately sees what they have drawn.


Watercolor simulation in tint.

In this article, we will share details of how we implemented this technique in the Unity game engine and how we adapted it to run smoothly on low-end mobile devices. We will walk through the main stages of the algorithm; the short code sketches included below are illustrative rather than our production code. The implementation was created in Unity 2018.4.2 and later updated to version 2018.4.7.

What is tint.?


tint. is a puzzle game in which the player completes levels by mixing watercolor paints to match the colors of origami figures. The game was released in the fall of 2019 on Apple Arcade for iOS, macOS, and tvOS.


Screenshot of tint.

Requirements


The technique described in this article can be divided into three main stages, performed every frame:

  1. Generating new spots based on player input and adding them to the spot list
  2. Simulating the paint for every spot in the list
  3. Rendering the spots

Below we will talk in detail about how we implemented each of the stages.

We aimed for 60 FPS, meaning these stages and all the logic described below run 60 times per second.
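To make the frame structure concrete, here is a minimal sketch of the per-frame driver; the class and method names are our own illustration, not the game’s actual code:

```csharp
using UnityEngine;

// Illustrative per-frame driver for the three stages; all names are hypothetical.
public class WatercolorDriver : MonoBehaviour
{
    void Update()
    {
        GenerateNewSplats(); // stage 1: player input -> new spots in splatList
        SimulateSplats();    // stage 2: biased random walk for every active spot
        // Stage 3 happens later in the frame, inside OnPostRender of a
        // dedicated paint camera (see "Rendering Cycle - Wet Buffer" below).
    }

    void GenerateNewSplats() { /* see "Getting input" */ }
    void SimulateSplats()    { /* see "Simulation cycle" */ }
}
```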

Getting input


In each frame, we transform the player’s input (a touch, mouse position, or virtual cursor, depending on the platform) into a splatData structure that contains the position, motion vector, color, and pressure (2). First, we measure the length of the player’s swipe on the screen and compare it with a given threshold. For short swipes, we generate one spot per frame at the input position. Otherwise, we fill the distance between the start and end points of the swipe with new spots created at a predetermined density (this ensures a constant paint density regardless of the swipe speed). The color reflects the currently selected paint, and the motion bias encodes the direction of the swipe. Newly created spots are added to a collection called splatList, which also contains all previously created spots; it drives the paint simulation and rendering in the following steps. Each individual spot represents a “drop” of paint to be rendered, the basic building block of the watercolor painting; the finished drawing is the result of rendering tens or hundreds of overlapping spots. In addition, every newly created spot is assigned a lifetime (in frames) that determines how long the spot will be simulated.


An example of spot interpolation along a long swipe. Hollow circles indicate spots created at regular intervals.
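A sketch of this input stage is shown below; the splatData fields match the description above, while the threshold and spacing constants are assumed values:

```csharp
using System.Collections.Generic;
using UnityEngine;

// Hypothetical splat record; the real structure also references a spot mesh, etc.
public struct SplatData
{
    public Vector2 position;   // position in canvas space
    public Vector2 motionBias; // normalized swipe direction * 0.3
    public int colorIndex;     // currently selected paint
    public float pressure;     // Apple Pencil pressure; 1.0 elsewhere
    public int lifetime;       // remaining simulation frames
}

public class SplatInput
{
    public List<SplatData> splatList = new List<SplatData>();

    const float swipeThreshold = 0.05f; // canvas-space length, assumed value
    const float spotSpacing = 0.02f;    // spacing of interpolated spots, assumed value

    public void AddSwipe(Vector2 from, Vector2 to, int colorIndex,
                         float pressure, int lifetime)
    {
        float length = Vector2.Distance(from, to);
        if (length < swipeThreshold)
        {
            // Short swipe: one spot per frame at the input position.
            Emit(from, Vector2.zero, colorIndex, pressure, lifetime);
            return;
        }

        // Long swipe: fill the segment with spots at a fixed density, so the
        // paint density stays constant regardless of swipe speed.
        Vector2 bias = (to - from).normalized * 0.3f;
        int count = Mathf.CeilToInt(length / spotSpacing);
        for (int i = 0; i <= count; i++)
            Emit(Vector2.Lerp(from, to, i / (float)count),
                 bias, colorIndex, pressure, lifetime);
    }

    void Emit(Vector2 position, Vector2 bias, int colorIndex,
              float pressure, int lifetime)
    {
        splatList.Add(new SplatData
        {
            position = position, motionBias = bias,
            colorIndex = colorIndex, pressure = pressure, lifetime = lifetime
        });
    }
}
```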

Canvas


Like real paint, we need a canvas. We implemented it as a bounded area in 3D space that looks like a sheet of paper. The player’s input coordinates and all other operations, such as mesh rendering, are expressed in canvas space. Likewise, the pixel size of every buffer used in the paint simulation depends on the size of the canvas. The term “canvas” as used in this article has nothing to do with the Canvas class from Unity UI.


The green rectangle shows the canvas area in the game

Spot


Visually, a spot is represented by a round mesh whose edge consists of 25 vertices. You can think of it as the “drop” a wet brush leaves on a sheet of paper after a very brief touch. We add a small random offset to the position of each vertex, which makes the edges of the paint spots uneven.


Examples of spot meshes.

For each vertex, we also store an outward velocity vector, which is used later in the simulation phase. We generate several such meshes with small variations in shape between them and store their data in a ScriptableObject. Each time the player draws a spot at runtime, we assign it a mesh randomly selected from this set. It is worth mentioning that the canvas has a different pixel size at different screen resolutions, so to keep the spot size factor the same on all devices, we rescale the spots according to the canvas size when the game starts.


An example of the per-vertex velocity vectors stored with new spot data.
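The offline mesh generation could look roughly like this; the 25 edge vertices come from the text above, while the radius and jitter parameters are assumptions:

```csharp
using UnityEngine;

[System.Serializable]
public class SplatMeshData
{
    public Vector2[] vertices;   // edge vertices of one mesh variation
    public Vector2[] velocities; // matching outward velocity per vertex
}

// A ScriptableObject holding several pre-generated spot mesh variations.
[CreateAssetMenu(menuName = "Watercolor/SplatMeshSet")]
public class SplatMeshSet : ScriptableObject
{
    public SplatMeshData[] variations;

    public void Generate(int variationCount, float radius, float jitter)
    {
        const int edgeVertexCount = 25;
        variations = new SplatMeshData[variationCount];
        for (int m = 0; m < variationCount; m++)
        {
            var data = new SplatMeshData
            {
                vertices = new Vector2[edgeVertexCount],
                velocities = new Vector2[edgeVertexCount]
            };
            for (int i = 0; i < edgeVertexCount; i++)
            {
                float angle = i * Mathf.PI * 2f / edgeVertexCount;
                var outward = new Vector2(Mathf.Cos(angle), Mathf.Sin(angle));
                // A small random offset makes the spot edge uneven.
                data.vertices[i] = outward * (radius + Random.Range(-jitter, jitter));
                // The per-vertex velocity points outward from the spot center.
                data.velocities[i] = outward;
            }
            variations[m] = data;
        }
    }
}
```

At runtime a new spot picks a random entry from these variations, scaled by the canvas-size factor computed at startup.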

When a spot mesh is generated, we also save its “wetting area”, which defines the set of pixels inside the original spot borders. The wetting area is used to simulate advection. At runtime, whenever a new spot is created, we mark the canvas underneath it as wet. When simulating the movement of paint, we allow it to “spread” only over areas of the canvas that are already wet. We store the canvas moisture in a global wetmap buffer, which is updated as each new spot is added. Besides enabling the mixing of two colors, advection plays an important role in the final appearance of the paint stroke itself.


Filling the wetmap: pixels inside the spot shape (green circle) mark the wetmap buffer (grid) as wet (green). The actual wetmap buffer has a much higher resolution.
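A CPU-side sketch of this bookkeeping, assuming a simple float grid in buffer-space coordinates and approximating the wetting area with a disc for brevity:

```csharp
using UnityEngine;

public class Wetmap
{
    readonly float[,] wetness;
    readonly int width, height;

    public Wetmap(int width, int height)
    {
        this.width = width;
        this.height = height;
        wetness = new float[width, height];
    }

    // Mark every cell inside the new spot's wetting area as wet.
    public void AddSpot(Vector2 center, float radius)
    {
        int minX = Mathf.Max(0, Mathf.FloorToInt(center.x - radius));
        int maxX = Mathf.Min(width - 1, Mathf.CeilToInt(center.x + radius));
        int minY = Mathf.Max(0, Mathf.FloorToInt(center.y - radius));
        int maxY = Mathf.Min(height - 1, Mathf.CeilToInt(center.y + radius));
        for (int x = minX; x <= maxX; x++)
            for (int y = minY; y <= maxY; y++)
                if (Vector2.Distance(new Vector2(x, y), center) <= radius)
                    wetness[x, y] = 1f;
        // The real wetting area follows the spot's uneven outline rather
        // than a perfect circle; a disc keeps this sketch short.
    }

    // Sample the wetness at a buffer-space position.
    public float Sample(Vector2 position)
    {
        int x = Mathf.Clamp(Mathf.RoundToInt(position.x), 0, width - 1);
        int y = Mathf.Clamp(Mathf.RoundToInt(position.y), 0, height - 1);
        return wetness[x, y];
    }
}
```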

In addition, each spot contains an opacity value, which is a function of its area; it models pigment conservation (a constant amount of pigment per spot). When a spot grows during simulation, its opacity decreases, and vice versa.
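One way to express this pigment conservation, assuming the spot stores its initial area and opacity (the exact function used in the game may differ):

```csharp
using UnityEngine;

public static class SplatOpacity
{
    // A fixed amount of pigment spread over a larger area becomes more
    // transparent, and vice versa; the clamp and epsilon are assumptions.
    public static float Current(float initialOpacity, float initialArea,
                                float currentArea)
    {
        return Mathf.Clamp01(initialOpacity * initialArea
                             / Mathf.Max(currentArea, 1e-5f));
    }
}
```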


An example of paint without advection (left) and with it (right).


Examples of paint advection.

Simulation cycle


After the player’s input in the current frame has been received and converted into new spots, the next step is to simulate the spots so as to reproduce the spreading of watercolor paint. At the start of this simulation we have a list of spots that need updating and an up-to-date wetmap.

In each frame, we iterate over the list of spots and update the positions of all their vertices using the following equation:


m = a · cr · vm · v + b + br

x(t + 1) = x(t) + m if w(x(t) + m) > 0, otherwise x(t + 1) = x(t)

where:

  * m is the new motion vector;
  * a is a constant correction parameter (0.33);
  * b is the motion bias vector: the normalized direction of the player’s swipe multiplied by 0.3;
  * cr is the scalar canvas roughness value: Random.Range(1, 1 + r);
  * r is the global roughness parameter; for standard paint we set it to 0.4;
  * v is the outward velocity vector generated in advance with the spot mesh;
  * vm is the velocity factor, a scalar that we use locally in some situations to accelerate advection;
  * x(t + 1) is the candidate new vertex position;
  * x(t) is the current vertex position;
  * br is the branch roughness vector: (Random.Range(-r, r), Random.Range(-r, r));
  * w(x) is the wetness value in the wetmap buffer.

The result of this equation is called a biased random walk; it imitates the behavior of particles in real watercolor paint. We try to move each vertex of the spot outward from its center (v), adding randomness. The direction of movement is then nudged slightly toward the direction of the stroke (b) and randomized again by another roughness component (br). The candidate vertex position is then checked against the wetmap. If the canvas at the new position is already wet (the value in the wetmap buffer is greater than 0), we move the vertex to the new position x(t + 1); otherwise we leave it where it was. As a result, paint spreads only over those areas of the canvas that are already wet. As the last step, we recalculate the spot area, which the rendering cycle uses to adjust the spot’s opacity.


Microscale example of advection simulation between two active spots of paint.
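Put together, one simulation step for a single spot could look like the following sketch, reusing the hypothetical Wetmap type from above:

```csharp
using UnityEngine;

public static class SplatSimulation
{
    const float a = 0.33f; // constant correction parameter
    const float r = 0.4f;  // global roughness for standard paint

    // One biased-random-walk step for every vertex of one spot.
    public static void Step(Vector2[] positions, Vector2[] velocities,
                            Vector2 motionBias, float velocityFactor,
                            Wetmap wetmap)
    {
        for (int i = 0; i < positions.Length; i++)
        {
            float cr = Random.Range(1f, 1f + r); // canvas roughness
            var br = new Vector2(Random.Range(-r, r), Random.Range(-r, r));

            // m = a * cr * vm * v + b + br
            Vector2 m = a * cr * velocityFactor * velocities[i]
                        + motionBias + br;
            Vector2 candidate = positions[i] + m;

            // Accept the move only if the canvas there is already wet.
            if (wetmap.Sample(candidate) > 0f)
                positions[i] = candidate;
        }
        // After the walk, the caller recomputes the polygon's area (for
        // example with the shoelace formula) to update the spot's opacity.
    }
}
```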

Rendering Cycle - Wet Buffer


Once the spots have been updated, we can render them. After the simulation stage, the spot meshes are often deformed (for example, self-intersections occur), so to render them correctly without the extra cost of re-triangulation we use a two-pass stencil buffer solution. Spots are rendered through the Unity Graphics interface, and the rendering cycle runs inside Unity’s OnPostRender method. Spot meshes are rendered into a render texture (wetBuffer) using a separate camera. At the beginning of the cycle, wetBuffer is cleared and set as the render target with Graphics.SetRenderTarget(wetBuffer). Then, for each active spot in splatList, we execute the sequence shown in the following diagram:


Rendering cycle diagram.

We start by clearing the stencil buffer before each spot, so that the stencil state left by the previous spot cannot affect the new one. Then we select the material used to draw the spot. This material is responsible for the spot’s color, and we choose it based on the color index stored in splatData when the player drew the spot. We then adjust the color’s opacity (alpha channel) based on the spot mesh area calculated in the previous step. The rendering itself is performed with a two-pass stencil buffer shader. In the first pass (Material.SetPass(0)) we submit the original spot mesh to record which pixels the mesh covers. In this pass, ColorMask is set to 0, so the mesh itself is not rendered. In the second pass (Material.SetPass(1)) we use a quadrilateral circumscribed around the spot mesh. For each pixel of the quadrilateral we check the value in the stencil buffer; if the value is one, the pixel is rendered, otherwise it is skipped. As a result, we render the same shape as the spot mesh, but guaranteed free of unwanted artifacts such as self-intersections.
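A sketch of this cycle is below; the Splat fields, the material property name, and the dryBuffer blit (explained in the optimization section) are illustrative:

```csharp
using System.Collections.Generic;
using UnityEngine;

// Hypothetical runtime spot record used by the rendering sketches.
public class Splat
{
    public Mesh mesh;            // the (possibly deformed) spot mesh
    public Mesh quadMesh;        // quadrilateral circumscribed around it
    public Matrix4x4 transform;  // canvas-space placement
    public int colorIndex;
    public float opacity;        // derived from the spot's current area
    public int lifetime;         // remaining simulation frames
    public float velocityFactor = 1f;
}

public class SplatRenderer : MonoBehaviour
{
    public RenderTexture wetBuffer;
    public RenderTexture dryBuffer;
    public Material[] splatMaterials; // one two-pass stencil material per color
    public List<Splat> splatList;

    void OnPostRender()
    {
        Graphics.SetRenderTarget(wetBuffer);
        GL.Clear(true, true, Color.clear);

        // Composite previously dried paint first (see the dryBuffer section),
        // so wet spots blend over it correctly.
        Graphics.Blit(dryBuffer, wetBuffer);

        foreach (Splat splat in splatList)
        {
            // Reset the stencil so the previous spot cannot affect this one
            // (the depth clear also resets the stencil on our targets).
            GL.Clear(true, false, Color.clear);

            Material mat = splatMaterials[splat.colorIndex];
            mat.SetFloat("_Alpha", splat.opacity); // property name is ours

            // Pass 0: ColorMask 0; writes 1 into the stencil wherever the
            // spot mesh covers a pixel, even with self-intersections.
            mat.SetPass(0);
            Graphics.DrawMeshNow(splat.mesh, splat.transform);

            // Pass 1: draws the circumscribed quad, keeping only the pixels
            // whose stencil value equals 1.
            mat.SetPass(1);
            Graphics.DrawMeshNow(splat.quadMesh, splat.transform);
        }
    }
}
```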


The steps of the two-pass stencil buffer technique (from left to right). Note that the actual stencil buffer has a much higher resolution than shown, so it preserves the original shape with high accuracy.


An example of three intersecting spots rendered the traditional way, producing artifacts (left), and with the two-pass stencil buffer technique, eliminating all artifacts (right).

After all the spots have been rendered into wetBuffer, it is displayed in the game scene. Our canvas uses a custom shader that combines wetBuffer, a diffuse paper map, and a paper normal map.


Canvas shader: only wetBuffer (left), added paper texture (center), normal map added (right).

The game supports a color-blind mode, in which distinct patterns are overlaid on top of the paint. To achieve this, we modified the spot materials by adding a tiled pattern texture. The patterns follow the game’s color-mixing rules; for example, blue (bars) + yellow (circles) gives green (circles within bars) at the intersection. For the patterns to blend seamlessly, they must be rendered in the same UV space. We adjust the UV coordinates of the quadrilateral used in the second stencil pass by dividing its x and y positions (which are specified in canvas space) by the width and height of the canvas. This yields correct u, v values in the 0 to 1 range.
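The UV remapping itself is a one-liner per vertex; a sketch, assuming canvas-space quad positions:

```csharp
using UnityEngine;

public static class CanvasUv
{
    // Maps a canvas-space quad vertex into the shared [0, 1] UV space of
    // the canvas, so pattern tiling lines up across overlapping spots.
    public static Vector2 ToCanvasUv(Vector2 canvasPosition, Vector2 canvasSize)
    {
        return new Vector2(canvasPosition.x / canvasSize.x,
                           canvasPosition.y / canvasSize.y);
    }
}
```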


An example of the color-blind mode patterns.

Optimization - dried spots buffer


As mentioned above, one of our goals was to support low-power mobile devices. Spot rendering turned out to be the bottleneck of our game. Each spot requires three draw calls (two material passes plus a stencil buffer clear), and since a paint stroke contains tens or hundreds of spots, the number of draw calls grows quickly and drags the frame rate down. To cope with this, we applied two optimizations: first, drawing all “dried” spots at once into dryBuffer; second, locally accelerating the drying of spots once a certain number of active spots is reached.

dryBuffer is an additional render texture added to the rendering cycle. As mentioned earlier, each spot has a lifetime (in frames) that decreases every frame. Once the lifetime reaches 0, the spot is considered “dried”. Dried spots are no longer simulated, their shape does not change, and therefore they do not need to be re-rendered every frame.


dryBuffer in action; the gray spots have been copied into dryBuffer.

Each spot whose lifetime reaches 0 is removed from splatList and “copied” into dryBuffer. During the copy, the rendering cycle is reused, this time with dryBuffer set as the target render texture.
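A sketch of this per-frame drying pass, reusing the hypothetical Splat record from the rendering sketch:

```csharp
using System.Collections.Generic;
using UnityEngine;

public class SplatDrying
{
    public List<Splat> splatList;
    public RenderTexture dryBuffer;

    public void Update()
    {
        for (int i = splatList.Count - 1; i >= 0; i--)
        {
            Splat splat = splatList[i];
            splat.lifetime--;
            if (splat.lifetime > 0)
                continue;

            // Re-render the spot one last time, into dryBuffer instead of
            // wetBuffer, using the same two-pass stencil draw as above.
            Graphics.SetRenderTarget(dryBuffer);
            RenderDriedSplat(splat);

            // Dried spots are neither simulated nor rendered per frame any
            // more, so their memory can be released.
            splatList.RemoveAt(i);
        }
    }

    void RenderDriedSplat(Splat splat)
    {
        /* same sequence as in the rendering cycle sketch */
    }
}
```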

Proper blending between wetBuffer and dryBuffer cannot be achieved by simply layering the buffers in the canvas shader, because the wetBuffer render texture contains spots already rendered with their alpha applied (which is equivalent to premultiplied alpha). We worked around this by adding one step at the start of the rendering cycle, before iterating over the spots: we render a quadrilateral the size of the camera frustum that displays dryBuffer. Thanks to this, any spot rendered into wetBuffer is already blended with the dry, previously painted spots.


A mixture of wet and dried spots.

The dryBuffer accumulates all “dried” spots and is not cleared between frames. Therefore, all memory associated with expired spots can be released once they have been “copied” into the buffer.


Thanks to the dryBuffer optimization, there is no longer a limit on the amount of paint a player can apply to the canvas.

Using the dryBuffer technique alone lets the player draw with an almost unlimited amount of paint, but it does not guarantee consistent performance. As mentioned above, a paint stroke has a constant density, achieved by interpolating many spots between the start and end points of a swipe. With many fast, long swipes, the player can generate a large number of active spots. These spots will be simulated and rendered for the number of frames specified by their lifetimes, which ultimately lowers the frame rate.

To ensure a stable frame rate, we changed the algorithm so that the number of active spots is capped at a constant value, maxActiveSplats. Any spots beyond this value “dry out” instantly. We implement this by cutting the lifetime of the oldest active spots down to 0, so they are copied into the dried spots buffer earlier. Since cutting the lifetime short would leave a spot in an incomplete state of simulation (which would look rather odd), we simultaneously increase its paint spreading speed. Thanks to the increased speed, the spot reaches almost the same size it would have reached at normal speed with a standard lifetime.
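A sketch of this cap, assuming splatList is ordered oldest-first; the fast-dry frame count and speed boost are assumed values:

```csharp
using System.Collections.Generic;
using UnityEngine;

public class ActiveSplatLimiter
{
    public int maxActiveSplats = 80; // tuned per platform at startup
    const int fastDryFrames = 4;     // assumed remaining lifetime
    const float dryingBoost = 2.0f;  // assumed advection speed-up

    public void Enforce(List<Splat> splatList)
    {
        int excess = splatList.Count - maxActiveSplats;
        for (int i = 0; i < excess; i++)
        {
            Splat splat = splatList[i]; // oldest active spots first
            // Shorten the remaining lifetime so the spot is copied into
            // the dried spots buffer early...
            splat.lifetime = Mathf.Min(splat.lifetime, fastDryFrames);
            // ...and speed up its spreading so it still reaches roughly
            // the size it would have had with a standard lifetime.
            splat.velocityFactor *= dryingBoost;
        }
    }
}
```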


Demonstration with a maximum of 40 (top) and 80 (bottom) active spots. Dried spots copied into dryBuffer are shown in gray. This value determines the “amount” of paint that can be simulated at once.

The maxActiveSplats value is our most important performance parameter; it lets us precisely control the number of draw calls allocated to watercolor rendering. We set it at startup based on the platform and device power. The value can also be changed at runtime if a drop in frame rate is detected.

Conclusion


Implementing this algorithm was an interesting and challenging task, and we hope you enjoyed the article. You can ask questions in the comments to the original. If you want to see our watercolor in action, try playing tint. on Apple Arcade.


Screenshot of a game running on Apple TV

(1) S. DiVerdi, A. Krishnaswamy, R. Měch and D. Ito, “Painting with Polygons: A Procedural Watercolor Engine,” IEEE Transactions on Visualization and Computer Graphics, vol. 19, no. 5, pp. 723–735, May 2013. doi: 10.1109/TVCG.2012.295

(2) Pressure is only taken into account when drawing with an Apple Pencil on an iPad.
