CGI at home with Unreal Engine and iPhone

Hello everyone! My name is Vasily Mazalov, and I work as a senior video editor at Pixonic. Our department creates video creatives for marketing and the community: videos for store pages, overview videos of new game features, and other content.

When I'm not producing creatives, I scan the Internet for new formats and ways of presenting material, looking to make our own content more diverse, interesting, and attractive to new players.

A year ago, I came across the following video:

Watch the video


What do we see here? A guy puts on a motion capture suit (nothing unusual so far), mounts an iPhone in front of his face (now this is interesting), and streams the character's facial and body animation straight into Unreal Engine in real time. For such a simple setup, the result looks remarkably good.

Cool idea, I thought. Then I closed the video and went back to work.

Six months later, a tutorial on capturing facial animation in Unreal Engine with an iPhone app appeared in the public domain. Around the same time, I found out that our art department had bought a motion capture suit. I checked its compatibility with UE: everything looked good. All that remained was to find an iPhone, and that is hardly a problem these days.

Watch the video


There were a lot of questions. In front of me lay a vast, unexplored field: animation, Unreal Engine, modeling the human face and body, and other things far removed from video editing. But I also had a great desire to pull off what I had in mind.

A long process of digging through documentation began.

Read on to see what came of it and how we got there.


Face animation


Sculpting a character from scratch made no sense for our purposes: it would take a lot of time and require complex, mostly unjustified rework. So we decided to use DAZ Studio: its models come with facial bones already in place, which lets you quickly produce the facial movements and emotions a sculptor would spend far more time on. Yes, models created in DAZ are far from photorealistic, but they suited our goals perfectly.

To record facial animation, all we needed was an iPhone with a TrueDepth front-facing camera, i.e. an iPhone X or later. It is this technology that reads the face topology and sends the resulting values to our model in Unreal.
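
To give a sense of what actually arrives from the phone: ARKit exposes the tracked face as a set of named blend-shape coefficients in the 0-1 range, which Unreal's Face AR support surfaces as a map on the tracked face geometry. A minimal sketch of reading those values (the function and logging are illustrative; exact accessor names can differ between engine versions):

```cpp
// Illustrative sketch: reading ARKit blend-shape weights in Unreal.
// Assumes the Apple ARKit Face Support plugin is enabled and a face
// tracking AR session is already running.
#include "ARBlueprintLibrary.h"
#include "ARTrackable.h"

void ReadFaceBlendShapes()
{
    // Iterate over everything the AR session is currently tracking.
    for (UARTrackedGeometry* Geometry : UARBlueprintLibrary::GetAllGeometries())
    {
        if (const UARFaceGeometry* Face = Cast<UARFaceGeometry>(Geometry))
        {
            // ARKit delivers ~50 named coefficients, each in [0, 1].
            const TMap<EARFaceBlendShape, float>& Shapes = Face->GetBlendShapes();
            const float JawOpen = Shapes.FindRef(EARFaceBlendShape::JawOpen);
            UE_LOG(LogTemp, Log, TEXT("JawOpen weight: %f"), JawOpen);
        }
    }
}
```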



Different facial expressions are driven by blend shapes: 3D models with identical topology, i.e. the same number of vertices, but differing in shape. Face AR uses 51 blends, and thanks to Apple's detailed documentation describing each of them, we were able to recreate them in DAZ quickly enough.

The set of emotions and blends in a 3D model looks something like this:


Blend shapes from the Internet


Our blends
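
On the Unreal side, each DAZ blend shape is imported as a morph target on the skeletal mesh, so driving the face largely comes down to copying weights across by name. A rough sketch, where the name mapping is a hypothetical fragment of the full 51-entry table:

```cpp
// Illustrative sketch: applying ARKit blend-shape weights to a DAZ head
// imported as a skeletal mesh with matching morph targets.
#include "Components/SkeletalMeshComponent.h"
#include "ARTrackable.h"

void ApplyBlendShapes(USkeletalMeshComponent* FaceMesh,
                      const TMap<EARFaceBlendShape, float>& Shapes)
{
    // Hypothetical mapping from ARKit coefficients to the morph target
    // names exported from DAZ; the real table covers all 51 shapes.
    static const TMap<EARFaceBlendShape, FName> NameMap = {
        { EARFaceBlendShape::JawOpen,       TEXT("JawOpen") },
        { EARFaceBlendShape::EyeBlinkLeft,  TEXT("EyeBlinkLeft") },
        { EARFaceBlendShape::EyeBlinkRight, TEXT("EyeBlinkRight") },
    };

    for (const auto& Pair : NameMap)
    {
        if (const float* Weight = Shapes.Find(Pair.Key))
        {
            // Morph target weights use the same 0..1 range as ARKit.
            FaceMesh->SetMorphTarget(Pair.Value, *Weight);
        }
    }
}
```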

So first we bring the face into Unreal Engine for testing, then build the iPhone app, and go back to Unreal to capture the result.




Body animation


When creating the body, we had to account for the specifics of the suit's software. We worked with Noitom's Perception Neuron 2.0 motion capture suit, which costs about $2,500. It is the cheapest suit on the market and not the strongest among its peers: it is very sensitive to electromagnetic interference, which makes the sensor coordinates drift whenever the suit is within range of an active source, making the animation even harder to clean up. Fortunately, we had just moved to another floor, and the new place was fairly deserted, so electromagnetic interference was minimal, which was ideal for us.



Why a suit at all? Ready-made animations from libraries didn't work for us: our character needed a unique personality and mannerisms, and the face and body had to reflect them accurately. Animating from scratch would have taken a month, maybe two. Motion capture equipment saved us that time.

Unlike the face, the body model was created by our artists from scratch. It then had to be rigged and skinned in Maya. Once the body was assembled, we brought it into Unreal, set everything up for mocap, recorded the animation, and all that remained was to polish the result.
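
As for how the stream reaches the engine: Noitom's software broadcasts the skeleton into Unreal, and in recent plugin versions this goes through Live Link. In practice the Live Link Pose node in an Animation Blueprint consumes the stream, but a rough C++ sketch of polling such a subject might look like this (the subject name "PerceptionNeuron" is an assumption; use whatever name the suit's plugin registers):

```cpp
// Illustrative sketch: polling a Live Link subject (e.g. a mocap suit
// stream) from C++. In production the Live Link Pose node in an
// Animation Blueprint normally does this for you.
#include "ILiveLinkClient.h"
#include "Features/IModularFeatures.h"
#include "Roles/LiveLinkAnimationRole.h"
#include "Roles/LiveLinkAnimationTypes.h"

void PollMocapSubject()
{
    IModularFeatures& Features = IModularFeatures::Get();
    if (!Features.IsModularFeatureAvailable(ILiveLinkClient::ModularFeatureName))
    {
        return; // Live Link plugin not enabled.
    }

    ILiveLinkClient& Client =
        Features.GetModularFeature<ILiveLinkClient>(ILiveLinkClient::ModularFeatureName);

    // Hypothetical subject name registered by the suit's plugin.
    const FLiveLinkSubjectName Subject = FName(TEXT("PerceptionNeuron"));

    FLiveLinkSubjectFrameData FrameData;
    if (Client.EvaluateFrame_AnyThread(Subject,
                                       ULiveLinkAnimationRole::StaticClass(),
                                       FrameData))
    {
        if (const FLiveLinkAnimationFrameData* Anim =
                FrameData.FrameData.Cast<FLiveLinkAnimationFrameData>())
        {
            UE_LOG(LogTemp, Log, TEXT("Received %d bone transforms"),
                   Anim->Transforms.Num());
        }
    }
}
```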



For the animation to come across accurately, with minimal or no cleanup, and to stream it from the suit straight into Unreal Engine, we had to set up the bones correctly and strip unnecessary transform values from our model. Noitom provides a rough reference model for Unreal Engine, and we had to bring our own model in line with it: put it into a T-pose, place the palms and fingers in positions unusual for modeling, and zero out all transform values. It was critical that no bone carried leftover rotations, otherwise the software multiplies them through the chain and badly distorts the motion.
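
One way to sanity-check this before streaming is to walk the skeletal mesh's reference pose and flag any bone whose local rotation is not identity, i.e. any "unnecessary turn" left in the rig. A rough sketch (accessor names vary slightly between engine versions; older UE4 exposes RefSkeleton as a public member):

```cpp
// Illustrative sketch: flag reference-pose bones that carry leftover
// local rotations before attempting mocap retargeting.
#include "Engine/SkeletalMesh.h"
#include "ReferenceSkeleton.h"

void CheckReferencePose(const USkeletalMesh* Mesh)
{
    const FReferenceSkeleton& RefSkel = Mesh->GetRefSkeleton();
    const TArray<FTransform>& RefPose = RefSkel.GetRefBonePose();

    for (int32 BoneIndex = 0; BoneIndex < RefSkel.GetNum(); ++BoneIndex)
    {
        const FQuat LocalRot = RefPose[BoneIndex].GetRotation();

        // Identity rotation means the bone has no baked-in turn.
        if (!LocalRot.Equals(FQuat::Identity, 1.e-3f))
        {
            UE_LOG(LogTemp, Warning, TEXT("Bone %s has a non-zero rotation: %s"),
                   *RefSkel.GetBoneName(BoneIndex).ToString(),
                   *LocalRot.Rotator().ToString());
        }
    }
}
```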

Calibrating the suit and recording the first takes took about two hours in total. We dialed in the settings in Unreal Engine, recorded the body animation with all the pauses the script required, then recorded the facial animation to match the body movements and the same script, and got the result you can see in the illustration below.


After recording, the animation still needed work, so we tasked our animator with cleaning it up. Two minutes of animation took him three days.


Reaching the final version took us about three more weeks, and excluding the fixes to the face and body models themselves, that period could be shortened by another week.


Why do we need this?


Let's take a break from the CGI process and talk about the project's objectives.

Right when I was digging into this topic and gathering the information I needed, pilots appeared in our game.

Usually, when new content comes out, it is presented either by a voice-over, or by the developers themselves, or the information simply reaches players through gameplay. Now we had the opportunity to create a character, prepare it properly, assemble the locations it would inhabit from high-quality assets, and communicate with players through this hero: from story and overview videos to live broadcasts.

For the first time, a game about robots had living characters, and with them stories they could tell about themselves and the world. And I thought it would be cool if we could put together in-engine cinematics with these characters, immersing players in the game world, as quickly as we produce ordinary videos on the engine.



Together with the community department, we started shaping the character: how he might look and what his story would be. Our senior community manager wrote the script, which we then reworked to save time and simplify production.


This video shows almost all the tests we did with facial and body animation. Since the two have different specifics, they had to be tested separately and combined only at the end. For the body animation tests, we took a suit model from the CGI trailer for the new release:


And now, here is what we ended up with:



Summary


With a motion capture suit, an iPhone, a 3D model, and the Unreal Marketplace's huge selection of free, high-quality assets, we can put together engaging stories for our players in just a couple of weeks. We also gained the experience and understanding needed to create a new character quickly and to account for all the production specifics at the creation stage, achieving the best result in a short time.

Why didn't we aim for the quality of Blizzard-grade cinematics? For community and marketing content, the current quality is enough to give our players a fresh perspective on the game world. And although there is no pressing need to raise the quality of the videos yet, we are always on the lookout for new solutions.
