Making Minecraft in One Week in C++ and Vulkan

I set myself the task of recreating Minecraft from scratch in one week, using my own engine written in C++ and Vulkan. I was inspired by Hopson, who did the same with C++ and OpenGL. He, in turn, was inspired by Shane Beck, who was inspired by Minecraft, whose source of inspiration was Infiniminer, whose creation was presumably inspired by real mining.


The GitHub repository for this project is here. Each day has its own git tag.

Of course, I did not plan to literally recreate Minecraft. This project was meant to be educational: I wanted to learn how to use Vulkan in something more complicated than vulkan-tutorial.com or Sascha Willems' demos. Therefore, the main emphasis is on the design of the Vulkan engine, not on the design of the game.

Tasks


Development on Vulkan is much slower than on OpenGL, so I couldn't fit many of real Minecraft's features into the game. There are no mobs, no crafting, no redstone, no block physics, and so on. From the very beginning, the objectives of the project were as follows:

  • Creating a terrain rendering system
    • Meshing
    • Lighting
  • Creating a terrain generation system
    • Terrain shape
    • Trees
    • Biomes
  • Adding the ability to modify the terrain and move blocks

I needed a way to implement all this without adding a GUI to the game, because I could not find any GUI library that worked with Vulkan and was easy to integrate.

Libraries


Of course, I was not going to write a Vulkan application completely from scratch. To speed up development, I used ready-made libraries wherever possible, namely:

  • GLFW for windowing and input
  • GLM for vector and matrix math
  • EnTT as the entity component system
  • Vulkan Memory Allocator for GPU memory management


Day 1


On the first day, I set up the Vulkan boilerplate and the skeleton of the engine. Most of the code was boilerplate that I could simply copy from vulkan-tutorial.com. That included the trick of hardcoding the vertex data inside the vertex shader, which meant I didn't even have to set up memory allocation yet: just a simple pipeline that can do only one thing, draw a triangle.

The engine is just capable enough to support the triangle renderer. It has one window and a game loop that systems can hook into. The only GUI is the frame rate displayed in the window title.

The project is divided into two parts: VoxelEngine and VoxelGame.


Day 2


I integrated the Vulkan Memory Allocator library. It takes care of most of the Vulkan memory allocation boilerplate: memory types, device memory heaps, and sub-allocation.

Now that I had memory allocation, I created classes for meshes and vertex buffers. I changed the triangle renderer to use the mesh class instead of the arrays hardcoded into the shader. For now, mesh data is transferred to the GPU manually before the triangles are drawn.


Little has changed

Day 3


I added a render graph system. I based this class on this post, but simplified it heavily. My render graph contains only the essentials for handling Vulkan synchronization.

The render graph lets me define nodes and edges. Nodes are work performed by the GPU. Edges are data dependencies between nodes. Each node receives its own command buffer to record into. The graph double-buffers the command buffers and synchronizes them with previous frames. Edges are used to automatically insert pipeline barriers before and after each node records into its command buffer. These pipeline barriers synchronize the use of all resources and transfer ownership between queues. In addition, edges insert semaphores between nodes.

Nodes and edges form a directed acyclic graph. The render graph then performs a topological sort of the nodes, producing a flat list in which each node comes after all the nodes it depends on.
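The sorting step can be sketched like this; this is a simplified stand-in for the engine's actual code (nodes are reduced to plain indices), using Kahn's algorithm:

```cpp
#include <cstddef>
#include <queue>
#include <vector>

// edges[i] lists the nodes that depend on node i.
// Produces a flat list in which every node appears after all of its
// dependencies; the list comes out shorter than edges.size() if the
// graph had a cycle.
std::vector<size_t> topologicalSort(const std::vector<std::vector<size_t>>& edges) {
    std::vector<size_t> inDegree(edges.size(), 0);
    for (const auto& targets : edges)
        for (size_t t : targets) inDegree[t]++;

    // Start with the nodes that depend on nothing.
    std::queue<size_t> ready;
    for (size_t i = 0; i < edges.size(); i++)
        if (inDegree[i] == 0) ready.push(i);

    std::vector<size_t> order;
    while (!ready.empty()) {
        size_t n = ready.front();
        ready.pop();
        order.push_back(n);
        // A node becomes ready once all of its dependencies are emitted.
        for (size_t t : edges[n])
            if (--inDegree[t] == 0) ready.push(t);
    }
    return order;
}
```

For a chain like AcquireNode → TriangleRenderer → PresentNode (indices 0 → 1 → 2), this yields the order {0, 1, 2}.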

The engine has three types of nodes. AcquireNode acquires an image from the swapchain, TransferNode transfers data from the CPU to the GPU, and PresentNode presents a swapchain image for display.

Each node can implement preRender, render, and postRender, which are executed every frame. AcquireNode acquires the swapchain image during preRender. PresentNode presents that image during postRender.

I refactored the triangle renderer to use the render graph system instead of handling everything itself. There is an edge between AcquireNode and TriangleRenderer, as well as between TriangleRenderer and PresentNode. This ensures that the swapchain image is correctly synchronized while it is in use during the frame.


I swear, the engine internals have changed

Day 4


I created a camera and a 3D rendering system. For now, the camera gets its own uniform buffer and descriptor pool.

That day I was slowed down trying to find the right configuration for 3D rendering with Vulkan. Most material online focuses on rendering with OpenGL, whose coordinate conventions differ slightly from Vulkan's. In OpenGL, clip-space Z spans [-1, 1] and the top edge of the screen is at Y = 1. In Vulkan, Z spans [0, 1] and the top edge of the screen is at Y = -1. Because of these small differences, the standard GLM projection matrices do not work correctly: they are designed for OpenGL.

GLM has an option, GLM_FORCE_DEPTH_ZERO_TO_ONE, that fixes the Z-axis problem. After that, the Y-axis problem can be fixed by simply flipping the sign of element (1, 1) of the projection matrix (GLM uses zero-based indexing).

If we flip the Y axis, then we need to flip the vertex data, because before that, the negative direction of the Y axis pointed up.
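To make the two fixes concrete, here is a sketch of the matrix the corrected setup ends up with. It is written without GLM so it stands alone; perspectiveVulkan is a hypothetical helper, not the project's code, but it computes the same thing as glm::perspective with GLM_FORCE_DEPTH_ZERO_TO_ONE plus the Y flip:

```cpp
#include <array>
#include <cmath>

// Column-major 4x4 matrix, m[col][row], matching GLM's layout.
using Mat4 = std::array<std::array<float, 4>, 4>;

// Right-handed perspective projection with Vulkan conventions:
// clip-space Z in [0, 1], Y pointing down.
Mat4 perspectiveVulkan(float fovy, float aspect, float zNear, float zFar) {
    float f = 1.0f / std::tan(fovy / 2.0f);
    Mat4 m = {};
    m[0][0] = f / aspect;
    m[1][1] = -f;                            // the sign flip at element (1, 1)
    m[2][2] = zFar / (zNear - zFar);         // depth mapped to [0, 1], not [-1, 1]
    m[2][3] = -1.0f;
    m[3][2] = -(zFar * zNear) / (zFar - zNear);
    return m;
}
```

With this matrix, a view-space point at z = -zNear projects to depth 0 and a point at z = -zFar to depth 1, which is what Vulkan expects.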


Now in 3D!

Day 5


I added user input and the ability to move the camera with the mouse. The input system is more complicated than it needs to be, but it smooths over the quirks of GLFW input. In particular, I had problems with the reported mouse position while the cursor is locked.

Keyboard and mouse input is essentially a thin wrapper on top of GLFW, exposed through entt signal handlers.

For comparison: this is roughly what Hopson had done by day 1 of his project.


Day 6


I started adding code to generate and render voxel chunks. Writing the meshing code was easy because I had done it before and knew which abstractions would let me make fewer mistakes.

One of these abstractions is a template class, ChunkData<T, chunkSize>, that defines a cube of values of type T with sides of length chunkSize. The class stores its data in a 1D array and handles indexing into that data with a 3D coordinate. Each chunk is 16 x 16 x 16, so the internal data is a simple array of length 4096.

Another abstraction is an iterator over positions that generates the coordinates from (0, 0, 0) to (15, 15, 15). Together, these two classes ensure that iteration over chunk data happens in linear order for cache locality, while the 3D coordinate remains available for operations that need it. For instance:

for (glm::ivec3 pos : Chunk::Positions()) {
    auto& data = chunkData[pos];
    glm::ivec3 offset = ...;
    auto& neighborData = chunkData[pos + offset];
}
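A minimal sketch of the ChunkData idea might look like the following. It is an illustration, not the project's actual code: glm::ivec3 is replaced by a plain struct so the sketch stands alone, and the choice of which axis varies fastest in memory is my assumption:

```cpp
#include <array>
#include <cstddef>

struct IVec3 { int x, y, z; }; // stand-in for glm::ivec3

// Stores a ChunkSize^3 cube of T in a flat 1D array and maps a 3D
// coordinate to a 1D index. With this layout, iterating x fastest
// walks memory linearly (assumed layout; the real class may differ).
template <typename T, size_t ChunkSize = 16>
class ChunkData {
public:
    T& operator[](IVec3 pos) {
        return m_data[pos.x + pos.y * ChunkSize + pos.z * ChunkSize * ChunkSize];
    }

    size_t size() const { return m_data.size(); }

private:
    std::array<T, ChunkSize * ChunkSize * ChunkSize> m_data = {};
};
```

For a 16 x 16 x 16 chunk this gives exactly the flat array of length 4096 mentioned above.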

I have several static arrays that define commonly used offsets. For example, Neighbors6 defines the 6 neighbors a cube shares a face with.

static constexpr std::array<glm::ivec3, 6> Neighbors6 = {
    glm::ivec3(1, 0, 0),    //right
    glm::ivec3(-1, 0, 0),   //left
    glm::ivec3(0, 1, 0),    //top
    glm::ivec3(0, -1, 0),   //bottom
    glm::ivec3(0, 0, 1),    //front
    glm::ivec3(0, 0, -1)    //back
};

Neighbors26 contains all the neighbors a cube shares a face, edge, or vertex with; that is, a 3x3x3 grid without the central cube. There are similar arrays for other neighbor sets and for 2D neighbor sets.

There is also an array defining the data needed to create each face of a cube. The face directions in this array match the directions in Neighbors6.

static constexpr std::array<FaceArray, 6> NeighborFaces = {
    //right face
    FaceArray {
        glm::ivec3(1, 1, 1),
        glm::ivec3(1, 1, 0),
        glm::ivec3(1, 0, 1),
        glm::ivec3(1, 0, 0),
    },
    ...
};

Thanks to this, the meshing code is very simple. It walks the chunk's block data and adds a face whenever a block is solid but its neighbor is not, simply checking every face of every cube in the chunk. This is the "naive" method described here.

for (glm::ivec3 pos : Chunk::Positions()) {
    Block block = chunk.blocks()[pos];
    if (block.type == 0) continue;

    for (size_t i = 0; i < Chunk::Neighbors6.size(); i++) {
        glm::ivec3 offset = Chunk::Neighbors6[i];
        glm::ivec3 neighborPos = pos + offset;

        //NOTE: bounds checking omitted

        if (chunk.blocks()[neighborPos].type == 0) {
            const Chunk::FaceArray& faceArray = Chunk::NeighborFaces[i];
            for (size_t j = 0; j < faceArray.size(); j++) {
                m_vertexData.push_back(pos + faceArray[j]);
                m_colorData.push_back(glm::i8vec4(pos.x * 16, pos.y * 16, pos.z * 16, 0));
            }
        }
    }
}

I replaced TriangleRenderer with ChunkRenderer. I also added a depth buffer so the chunk mesh renders correctly. This required adding one more edge to the render graph, between TransferNode and ChunkRenderer. This edge transfers queue family ownership of resources from the transfer queue to the graphics queue.

Then I changed the engine so that it correctly handles window resize events. This is simple in OpenGL, but rather involved in Vulkan. Since the swapchain must be created explicitly and with a fixed size, it has to be recreated whenever the window is resized, along with every resource that depends on it.

All commands that depend on the swapchain (which by now is all of the drawing commands) must finish executing before the old swapchain is destroyed, which means the entire GPU sits idle.

The graphics pipeline has to be set up with a dynamic viewport so that it can handle resizing.

A swapchain cannot be created while the window's size is 0 along the X or Y axis, which includes when the window is minimized. So when that happens, the whole game pauses and resumes only when the window is restored.

For now, the mesh is a simple 3D checkerboard, with each cube's RGB color set from its XYZ position multiplied by 16.



Day 7


I made the game handle several chunks at a time instead of just one. The chunks and their meshes are managed by the ECS library entt. Then I refactored the chunk renderer to render every chunk in the ECS. I still have only one chunk, but now I can add more when needed.

I refactored the mesh class so that its data can be updated after creation. This will let me update a chunk's mesh later, when I add the ability to place and remove cubes.

When a cube is added or removed, the number of vertices in the mesh can grow or shrink. The previously allocated vertex buffer can be reused only if the new mesh is the same size or smaller; if the mesh is larger, new vertex buffers must be created.

The previous vertex buffer cannot be deleted immediately. Command buffers from previous frames may still be executing that reference that specific VkBuffer object, and the engine must keep the buffer alive until they have finished. In other words, if a mesh is drawn in frame i, the GPU may still be using its buffer until frame i + 2 begins, and the CPU cannot free it before then. So I changed the render graph to track resource lifetimes.

If a render graph node wants to use a resource (a buffer or an image), it must call the sync method inside preRender. This method takes a shared_ptr to the resource, and that shared_ptr guarantees the resource is not deleted while command buffers that use it are still executing. (Performance-wise this is not a great solution; more on that later.)
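The mechanism can be sketched like this; FrameResources is a hypothetical stand-in for the engine's tracking code, with the GPU fence reduced to a plain method call:

```cpp
#include <memory>
#include <vector>

// Simplified stand-in for per-frame resource lifetime tracking: each
// in-flight frame holds shared_ptrs to every resource its command buffers
// reference, and drops them once the frame's fence has signaled.
class FrameResources {
public:
    // Called from a node's preRender (the sync call in the text):
    // keeps the resource alive for the duration of this frame.
    void sync(std::shared_ptr<const void> resource) {
        m_resources.push_back(std::move(resource));
    }

    // Called once the GPU has finished the frame (fence signaled).
    // Releases the references; resources no one else holds get freed here.
    void onFrameComplete() {
        m_resources.clear();
    }

private:
    std::vector<std::shared_ptr<const void>> m_resources;
};
```

So when a mesh grows and a new vertex buffer replaces the old one, the old buffer stays alive inside FrameResources until onFrameComplete runs for every frame that referenced it.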

For now, the chunk mesh is regenerated every frame.


Conclusion


That's all I got done in one week: the basics of rendering a world of multiple voxel chunks. I will continue the work in week two.

Source: https://habr.com/ru/post/undefined/
