Dockerizing an application built with React, Express, and MongoDB

The author of the article we are translating today wants to show how to package web applications built with React, Express, and MongoDB into Docker containers. We will look at how to structure the files and folders of such projects, how to write Dockerfiles, and how to use Docker Compose.



Getting started


For the sake of simplicity, I will assume that you already have a working application with a client part and a server part, connected to a database.

It is best if the client and server code live in the same folder. The code may sit in a single repository, but it can also be stored in separate repositories. In that case, the projects should be combined in one folder using the git submodule command. That is what I did.


Parent Repository File Tree

React Application


Here I used a project created with Create React App and configured for TypeScript support. It is a simple blog with a few visual elements.

First, create a Dockerfile in the client root directory. You can do that with the following command:

$ touch Dockerfile

Open the file and enter the commands shown below. As already mentioned, my application uses TypeScript, so it has to be built first. What the build produces then needs to be served as static resources. To achieve this, I use a two-stage Docker image build.

The first stage uses Node.js to build the application. As the base image, I use an Alpine image. It is very compact, which has a beneficial effect on the size of the container.

FROM node:12-alpine as builder
WORKDIR /app
COPY package.json /app/package.json
RUN npm install --only=prod
COPY . /app
RUN npm run build

So begins our Dockerfile. It starts with the FROM node:12-alpine as builder instruction. Then we set the working directory, in our case /app; a new folder with this name is created in the container. We copy package.json into that folder and install the dependencies. Then we copy everything from the services/client folder into /app. The work is completed by building the project.
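Because COPY . /app copies the entire build context into the image, it is worth placing a .dockerignore file next to the Dockerfile so that local node_modules, previous build output, and the .git folder do not leak into the image. This is a suggested minimal file, not shown in the original article:

```
node_modules
build
.git
npm-debug.log
```

Without it, the local node_modules folder (built for the host OS) would overwrite the dependencies installed inside the Alpine container.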

Now we need to host the newly created build. For this we use NGINX, and again it will be the Alpine variant of the image. As before, we do this to save space.

FROM nginx:1.16.0-alpine
COPY --from=builder /app/build /usr/share/nginx/html
EXPOSE 80
CMD ["nginx", "-g", "daemon off;"]

Here, the build results from the previous stage are copied into /usr/share/nginx/html, the folder NGINX serves content from. Then we expose port 80; the container will listen for connections on this port. The last line of the file starts NGINX.
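One caveat not covered in the article: the stock nginx image serves index.html for the root path but returns 404 for client-side routes such as /posts/42. If the React app uses client-side routing, a custom config with try_files helps. This is a hypothetical nginx.conf; you would also need to add a line like COPY nginx.conf /etc/nginx/conf.d/default.conf to the second stage of the Dockerfile:

```nginx
server {
  listen 80;
  root /usr/share/nginx/html;
  index index.html;

  location / {
    # hand unknown paths to the SPA so client-side routing works
    try_files $uri $uri/ /index.html;
  }
}
```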

This is all it takes to dockerize the client part of the application. The resulting Dockerfile looks like this:

FROM node:12-alpine as builder
WORKDIR /app
COPY package.json /app/package.json
RUN npm install --only=prod
COPY . /app
RUN npm run build
FROM nginx:1.16.0-alpine
COPY --from=builder /app/build /usr/share/nginx/html
EXPOSE 80
CMD ["nginx", "-g", "daemon off;"]

Express API


Our Express API is also quite simple. It exposes RESTful endpoints for creating posts, handling authorization, and solving other tasks. Let's start by creating a Dockerfile in the api root directory. We will proceed as before.

During development of the server side of the application, I used ES6 features, so the code has to be compiled before it can run. I decided to process the code with Babel. As you may have guessed, a multi-stage build will again be used.
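The article does not show the Babel setup itself. Assuming @babel/preset-env and a build script that compiles src into dist (which matches the COPY --from=builder /app/dist line below), the configuration might look like this hypothetical .babelrc:

```json
{
  "presets": [["@babel/preset-env", { "targets": { "node": "12" } }]]
}
```

The matching package.json script would be something like "build": "babel src -d dist", so that npm run build in the Dockerfile produces the dist folder.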

FROM node:12-alpine as builder
WORKDIR /app
COPY package.json /app/package.json
RUN apk --no-cache add --virtual builds-deps build-base python
RUN npm install
COPY . /app
RUN npm run build

Everything here is very similar to the Dockerfile we used for the client part of the project, so we will not go into details. However, there is one special line:

RUN apk --no-cache add --virtual builds-deps build-base python

Before storing passwords in the database, I hash them using bcrypt. It is a very popular package, but there are some problems using it in Alpine-based images. You may encounter error messages like these:

node-pre-gyp WARN Pre-built binaries not found for bcrypt@3.0.8 and node@12.16.1 (node-v72 ABI, musl) (falling back to source compile with node-gyp)
npm ERR! Failed at the bcrypt@3.0.8 install script.

This is a well-known issue. The solution is to install the additional build packages and Python before installing the npm packages.

The next stage of the image build, as with the client, takes what was produced in the previous stage and runs it with Node.js.

FROM node:12-alpine
WORKDIR /app
COPY --from=builder /app/dist /app
COPY package.json /app/package.json
RUN apk --no-cache add --virtual builds-deps build-base python
RUN npm install --only=prod
EXPOSE 8080 
USER node
CMD ["node", "index.js"]

One more detail here: we install only the packages the project needs in production. We no longer need Babel, since everything was already compiled in the first stage. Next, we expose port 8080, on which the server part of the application will listen for requests, and run Node.js.

Here is the complete Dockerfile:

FROM node:12-alpine as builder
WORKDIR /app
COPY package.json /app/package.json
RUN apk --no-cache add --virtual builds-deps build-base python
RUN npm install
COPY . /app
RUN npm run build
FROM node:12-alpine
WORKDIR /app
COPY --from=builder /app/dist /app
COPY package.json /app/package.json
RUN apk --no-cache add --virtual builds-deps build-base python
RUN npm install --only=prod
EXPOSE 8080 
USER node
CMD ["node", "index.js"]

Docker Compose


The last phase of our work is to bring together the api and client containers with a container running MongoDB. To do this, we use a docker-compose.yml file located in the root directory of the parent repository, since from there we have access to the Dockerfiles of both the client and the server parts of the project.

Create the docker-compose.yml file:

$ touch docker-compose.yml

The project file structure should now look like the one below.


The final structure of the project files

Now add the following to docker-compose.yml:

version: "3"
services:
  api:
    build: ./services/api
    ports:
      - "8080:8080"
    depends_on:
      - db
    container_name: blog-api
  client:
    build: ./services/client
    ports:
      - "80:80"
    container_name: blog-client
  db:
    image: mongo
    ports:
      - "27017:27017"
    container_name: blog-db

Everything here is arranged very simply. We have three services: client, api, and db. There is no Dockerfile for MongoDB: Docker will download the appropriate image from Docker Hub and create a container from it. This means our database will be empty, but for a start that suits us.

The api and client sections each have a build key whose value is the path to the Dockerfile of the corresponding service (the api and client root directories). The container ports declared in the Dockerfiles will be published on the network created by Docker Compose, which lets the applications interact. The api service additionally uses the depends_on key. It tells Docker to start the db container before this service. Note that depends_on only controls start order; it does not wait for the database inside the container to be ready.
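Since depends_on guarantees start order rather than readiness, the API can still hit a connection error if MongoDB is slow to come up. A small retry wrapper can make the initial connection resilient; this is a sketch, and withRetry is a hypothetical helper, not part of the article's code:

```javascript
// Retry an async connect function a few times before giving up.
// Useful because `depends_on` guarantees start order, not readiness.
async function withRetry(connect, attempts = 5, delayMs = 1000) {
  for (let attempt = 1; attempt <= attempts; attempt += 1) {
    try {
      return await connect(); // success: return whatever connect resolves to
    } catch (err) {
      if (attempt === attempts) throw err; // out of attempts: re-throw
      await new Promise((resolve) => setTimeout(resolve, delayMs));
    }
  }
}

module.exports = { withRetry };
```

In the API's entry point you would wrap the mongoose.connect (or MongoClient.connect) call in withRetry instead of calling it directly.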

And here is one more small thing related to MongoDB. In the backend code base, we need to update the database connection string. It usually points to localhost:

mongodb://localhost:27017/blog

But with Docker Compose, we have to make it point to the container name:

mongodb://blog-db:27017/blog
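Rather than hard-coding either variant, the connection string can be read from an environment variable. MONGO_URL here is a hypothetical variable name, not something the article defines:

```javascript
// Default to the Compose container name; allow an override via MONGO_URL
// (e.g. mongodb://localhost:27017/blog during local development).
const DEFAULT_MONGO_URL = 'mongodb://blog-db:27017/blog';

function mongoUrl(env = process.env) {
  return env.MONGO_URL || DEFAULT_MONGO_URL;
}

module.exports = { mongoUrl };
```

In docker-compose.yml this would pair with an environment: entry under the api service, so the same backend code runs both inside and outside of Docker.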

The final step is to start it all by executing the following command in the project root folder (where the docker-compose.yml file is located):

$ docker-compose up

Summary


We looked at a simple technique for containerizing applications based on React, Node.js, and MongoDB. We hope that, if you need it, you can easily adapt it to your own projects.


Dear readers! Do you use Docker Compose?

