Unity Background Image Blur



One of the tasks a Unity application developer may face is blurring the current image in the background in order to switch the user's attention to something new, such as a menu or a message that appears. This article describes the experience of solving this problem by a developer with basic Unity knowledge, without external assets or an additional Unity license. I hope this material will be useful to those who have faced a similar problem and did not find effective solutions at its various stages.

Highlighting important interface elements (and removing unnecessary ones) is one of the main tools for improving user experience, especially in mobile applications. This switching of attention can be achieved in different ways: smoothly sliding buttons beyond the edges of the screen, darkening the background, dynamic behavior, and so on. Blurring the image is one of them. On the one hand, the effect reduces the contrast of background elements, so it becomes harder for the eye to catch on them. On the other hand, blurring works subconsciously: when the image goes out of focus, we perceive it differently. This approach is often used in games; among well-known Unity titles, Hearthstone uses it in the quest selection menu and on the duel completion screen.

Now imagine that we want to do the same or something similar in our own application, with no budget and little experience with Unity.

Part One - Shader


The first thought was that all of this must already be implemented, surely as a shader, and surely shipped with Unity out of the box. A quick search and an installation attempt showed that such a shader does exist in Unity, but it turns out to be part of Standard Assets, which is not available with the free license.

But this is blur! A basic effect that is mathematically simple, so it should also be easy to implement and integrate into the project, even if we know nothing about shaders. So, off to the Internet, where a search quickly showed that source code for ready-made blur shaders is available, in several variants.

For the first implementation, one of these ready-made shaders was taken. To test it, it is enough to add the shader source file to the project, associate it with a material, and bind the material to an Image.
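As a minimal sketch of that binding step, a component like the following could build a material from the imported shader and attach it to the Image (the shader name "Custom/Blur" and the "_Radius" property are assumptions; use the names from the shader source you actually imported):

```csharp
using UnityEngine;
using UnityEngine.UI;

// Hypothetical helper: builds a material from the imported blur shader
// and binds it to the Image on the same GameObject.
public class BlurBinder : MonoBehaviour
{
    void Start()
    {
        // "Custom/Blur" and "_Radius" are placeholder names.
        var shader = Shader.Find("Custom/Blur");
        var material = new Material(shader);
        material.SetFloat("_Radius", 25f);
        GetComponent<Image>().material = material;
    }
}
```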

Part Two - User Interface


If we care about the users of the application, the user interface deserves due attention. Simply adding a blur effect to the background and forgetting about it would not be consistent with the internal principles of building a product.

As a result, for the next steps you need to try what you have, then think and adjust it to produce the desired perception in the user. This point is highly context sensitive: depending on the contrast of the image, the variety of colors, and other factors, different values and additional tools may be appropriate. In our case, at this stage a good result was obtained with a blur radius of 25 (a shader parameter) plus 70% transparency (outside the shader, via a separate Image). However, this only covers the final background image; the transition itself, judging by how it felt on the phone, was too sharp.

The installed shader recomputes the image on every frame, so to make the switch smooth it is enough to dynamically change the blur radius and the transparency. This can be organized in different ways, but in essence it is an interpolation in Update handlers depending on elapsed time. The transition in the final version of the application takes 0.2 seconds, and as it turns out, that matters for the user's perception.
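A sketch of that 0.2-second transition, interpolating the blur radius and the overlay's alpha every frame (the field and property names here are assumptions, not the article's actual code):

```csharp
using System.Collections;
using UnityEngine;
using UnityEngine.UI;

// Sketch: ramp the blur radius and the darkening overlay's alpha
// over 0.2 seconds. "_Radius" is a placeholder shader property name.
public class BlurTransition : MonoBehaviour
{
    [SerializeField] Material _blurMaterial; // material using the blur shader
    [SerializeField] Image _overlay;         // separate Image for transparency

    const float Duration = 0.2f;
    const float TargetRadius = 25f;
    const float TargetAlpha = 0.7f;

    public IEnumerator FadeIn()
    {
        for (float t = 0f; t < Duration; t += Time.deltaTime)
        {
            float k = t / Duration;
            _blurMaterial.SetFloat("_Radius", Mathf.Lerp(0f, TargetRadius, k));
            var c = _overlay.color;
            c.a = Mathf.Lerp(0f, TargetAlpha, k);
            _overlay.color = c;
            yield return null; // wait for the next frame
        }
        _blurMaterial.SetFloat("_Radius", TargetRadius);
    }
}
```

Started with StartCoroutine(FadeIn()), this runs once per frame, matching the "update depending on time" approach described above.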

Part Three - Productivity


In recent months, I have heard complaints from several different users (not even programmers) that games often consume all available computer resources with no restraint, even when idle. For example, this can show up as maximum GPU usage while the program is minimized to the tray, or as one CPU thread constantly loaded regardless of what is happening. The reasons are intuitive: the income of the developer or publisher does not depend on optimization (people do not pay attention to such things when buying), and rank-and-file developers are most likely more concerned with local KPIs and deadlines. Nevertheless, in practice I would not want to fall into this category.

After completing the previous stage, on the next launch with the background blur enabled, the phone became noticeably warm after being held in the hand for a while. Spend more time in the menu, and everything gets much worse. Even if the user presumably does not spend much time in the menu, I would not want to drain the battery at all.

Analysis showed that the shader chosen above has asymptotic complexity O(r²) in the selected radius. In other words, the number of calculations per pixel grows 625 times when the blur radius is increased from 1 to 25. And these calculations happen on every frame.

The first step towards a solution was the realization that not all shaders are equally useful, and the blur effect can be implemented in different ways. One option is a separable blur, the essence of which is to first blur the lines only horizontally, and then only vertically. This gives O(r), and other things being equal the complexity drops by an order of magnitude. An additional trick is to sample a smaller mipmap, which already does part of the blurring work. Another shader was taken as the basis. However, its blurring turned out to be insufficient, and with high-contrast graphics the image became "chipped". To get a better effect, the weight distribution was changed (the GRABPIXEL elements): in the original shader the weights go from 0.18 to 0.05 with radius 4; in our version, from 0.14 to 0.03 with radius 6 (the weights are smaller, but their sum must still equal 1).
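To illustrate the separable idea outside the shader, here is a CPU-side sketch on a plain float array (the real work happens on the GPU, and the shader uses Gaussian weights rather than the uniform box weights used here for brevity): blur rows first, then columns of the intermediate result, costing 2·(2r+1) samples per pixel instead of (2r+1)².

```csharp
// CPU-side illustration of a separable (two-pass) blur with uniform
// weights. Per pixel: 2*(2r+1) samples instead of (2r+1)^2 — O(r), not O(r^2).
static float[,] SeparableBoxBlur(float[,] src, int r)
{
    int w = src.GetLength(0), h = src.GetLength(1);
    var tmp = new float[w, h];
    var dst = new float[w, h];

    // Pass 1: horizontal blur into tmp.
    for (int y = 0; y < h; y++)
        for (int x = 0; x < w; x++)
        {
            float sum = 0f; int n = 0;
            for (int dx = -r; dx <= r; dx++)
            {
                int xi = x + dx;
                if (xi < 0 || xi >= w) continue; // clamp at the edges
                sum += src[xi, y]; n++;
            }
            tmp[x, y] = sum / n;
        }

    // Pass 2: vertical blur of the horizontal result into dst.
    for (int x = 0; x < w; x++)
        for (int y = 0; y < h; y++)
        {
            float sum = 0f; int n = 0;
            for (int dy = -r; dy <= r; dy++)
            {
                int yi = y + dy;
                if (yi < 0 || yi >= h) continue;
                sum += tmp[x, yi]; n++;
            }
            dst[x, y] = sum / n;
        }
    return dst;
}
```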

Thus, the processing complexity was reduced by one to two orders of magnitude. But there is no reason to stop there.

Part Four - Static Background


Still, if we leave the shader on the existing image, it keeps working constantly, doing the same calculations on every frame. If something changes in the background, this is necessary; but it is completely unnecessary when the background is static. So, after the dynamic blur finishes, we can remember the result, put it in the background, and turn the shader off.

Next, let's look at code snippets. Preparing the destination texture for the entire screen may look like this:

_width = Screen.width / 2;
_height = Screen.height / 2;
_texture = new Texture2D(_width, _height);

var fillColor = Color.white;
var fillColorArray = _texture.GetPixels();
for (var i = 0; i < fillColorArray.Length; ++i)
{
	fillColorArray[i] = fillColor;
}
_texture.SetPixels(fillColorArray);
_texture.Apply();

_from.GetComponent<Image>().sprite = Sprite.Create(_texture, new Rect(0, 0, _texture.width, _texture.height), new Vector2(0.5f, 0.5f));

That is, a texture is prepared here that is half the screen size horizontally and vertically. This is almost always possible, since device screen dimensions are even, and it reduces the texture size and makes the blurring cheaper.

RenderTexture temp = RenderTexture.GetTemporary(_width, _height);
Graphics.Blit(_from_image.mainTexture, temp, _material);
RenderTexture.active = temp;

_texture.ReadPixels(new Rect(0, 0, temp.width, temp.height), 0, 0, false);
_texture.Apply();

_to_image.sprite = Sprite.Create(_texture, new Rect(0, 0, _texture.width, _texture.height), new Vector2(0.5f, 0.5f));
RenderTexture.ReleaseTemporary(temp);

Graphics.Blit is used to capture the image from mainTexture through the shader (via _material). Then the result is stored in the prepared texture and placed into the target Image.

Everything would be fine, but in practice I ran into an effect where, depending on the device, the background image ends up upside down. After some study and debugging the reason becomes clear: the difference between the Direct3D and OpenGL coordinate systems, which Unity cannot fully hide. Our shader handles the UVs correctly; the inversion happens already on the Unity side (in ReadPixels). There is plenty of advice on the net along the lines of "if it flipped, flip it back yourself" or "change the sign of the UV". Using the UNITY_UV_STARTS_AT_TOP macro did not yield a good generalization across all the devices under test. In addition, we ran into a case where, when preparing a build for iPhone emulation in Xcode, Unity used not Metal but only OpenGLES, so such cases of emulating a device running different software also have to be handled.

After trying a number of options, I settled on two remedies. The first is forcing camera rendering (setting forceIntoRenderTexture at the moment of image capture). The second is determining the type of graphics system on the fly via SystemInfo.graphicsDeviceType, which makes it possible to distinguish OpenGL-like from Direct3D-like (the first group being OpenGLES2, OpenGLES3, OpenGLCore and Vulkan). In our implementation, for Direct3D-like devices the image then needs to be flipped, which is done procedurally.
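A sketch of the second remedy, following the grouping described above (the class and method names here are assumptions; only SystemInfo.graphicsDeviceType and the GraphicsDeviceType values come from the Unity API):

```csharp
using UnityEngine;
using UnityEngine.Rendering;

// Sketch: decide at runtime whether the captured texture must be flipped,
// and flip it procedurally if so. Grouping follows the article's list.
static class CaptureOrientation
{
    static bool IsOpenGLLike(GraphicsDeviceType t) =>
        t == GraphicsDeviceType.OpenGLES2 ||
        t == GraphicsDeviceType.OpenGLES3 ||
        t == GraphicsDeviceType.OpenGLCore ||
        t == GraphicsDeviceType.Vulkan;

    // Direct3D-like devices need the flip in this implementation.
    public static bool NeedsFlip() =>
        !IsOpenGLLike(SystemInfo.graphicsDeviceType);

    // Procedural vertical flip of the captured pixels.
    public static void FlipVertically(Texture2D tex)
    {
        var pixels = tex.GetPixels();
        var flipped = new Color[pixels.Length];
        int w = tex.width, h = tex.height;
        for (int y = 0; y < h; y++)
            for (int x = 0; x < w; x++)
                flipped[x + (h - 1 - y) * w] = pixels[x + y * w];
        tex.SetPixels(flipped);
        tex.Apply();
    }
}
```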

Conclusion


I hope this article helps someone avoid stepping on unfamiliar rakes, improve users' lives, and better understand Unity. The effect in real use was recorded as an animation in Unity; the feel in the application is somewhat different, but the animation gives a sufficient idea.
