CI/CD pipelines and Docker automation

I wrote my first sites in the late 90s. Getting them into working condition back then was very simple. There was an Apache server on some shared hosting; you logged into that server over FTP by typing something like ftp://ftp.example.com into the browser's address bar, entered a username and password, and uploaded your files. Those were different times, and everything was simpler than it is now. Over the past two decades a lot has changed. Sites have become more complex and have to be built before being released to production. A single server has turned into many servers running behind load balancers, and the use of version control systems has become commonplace.





My personal project had its own particular setup, and I knew that I wanted to be able to deploy the site to production with a single action: pushing code to the master branch on GitHub. I also knew that to run my small web application I did not want to manage a huge Kubernetes cluster, use Docker Swarm, or maintain a fleet of servers with pods, agents, and all sorts of other complexity. To keep things as simple as possible, I needed to get acquainted with CI/CD.

If you have a small project (in our case, a Node.js project) and would like to automate its deployment so that what is stored in the repository exactly matches what runs in production, I think you might find this article interesting.

Prerequisites


This article assumes basic familiarity with the command line and with writing Bash scripts. You will also need Travis CI and Docker Hub accounts.

Goals


I would not call this article a full-blown tutorial. It is more a write-up of what I learned, describing a process that works for me for testing code and deploying it to production in a single automated pass.

Here is what my workflow ended up looking like.

For code pushed to any branch of the repository except master, the following actions are performed:

  • The project is built on Travis CI.
  • All unit, integration, and end-to-end tests are run.

Only for code that lands in master, the following also happens:

  • Everything listed above, plus...
  • The Docker image is built from the current code, settings, and environment.
  • The image is pushed to the Docker Hub.
  • A connection is made to the production server.
  • The image is pulled from the Docker Hub onto the server.
  • The current container is stopped and a new one is started from the new image.

If you know absolutely nothing about Docker, images, and containers, don't worry: I'll cover all of that below.

What is CI/CD?


The abbreviation CI/CD stands for continuous integration / continuous deployment.

▍ Continuous integration


Continuous integration is a process in which developers frequently commit to the project's main source code repository (usually to the master branch), while code quality is ensured by automated testing.

▍ Continuous deployment


Continuous deployment is the frequent, automated deployment of code to production. The second half of the CI/CD abbreviation is sometimes expanded as "continuous delivery". That is essentially the same as continuous deployment, except that continuous delivery implies a manual approval of changes before the deployment process starts.

Getting started


The application I learned all of this on is called TakeNote. It is a web project I am working on for taking notes. At first I tried to build it as a JAMstack project, a pure frontend application without a server, so I could take advantage of the standard hosting and deployment features offered by Netlify. As the application grew more complex, I needed to create a server side for it, which meant I would have to come up with my own strategy for automated integration and deployment.

In my case, the application is an Express server running in the Node.js environment, serving a single-page React application and supporting a secure server API. This architecture follows the strategy found in this full-stack authentication guide.

I consulted a friend who is an automation expert and asked what I would need to do to make it all work the way I wanted. He gave me an idea of what an automated workflow should look like, as outlined in the Goals section of this article. Setting those goals for myself meant I needed to figure out how to use Docker.

Docker


Docker is a tool that uses containerization to make it easy to distribute applications and to deploy and run them in the same environment everywhere, even though the Docker platform itself runs on different environments. To start with, I needed the Docker command line interface (CLI). The Docker installation instructions are not especially clear, but they do tell you that the first step is to download Docker Desktop (for Mac or Windows).

The Docker Hub is roughly what GitHub is for git repositories, or the npm registry for JavaScript packages: an online repository for Docker images. Docker Desktop connects to it.

So, to get started with Docker, you need to do two things:

  • Create a Docker Hub account.
  • Install Docker Desktop.

After that, you can verify that the Docker CLI works by checking the Docker version:

docker -v

Next, log in to the Docker Hub, entering your username and password when prompted:

docker login

In order to use Docker, you must understand the concepts of images and containers.

▍ Images


An image is something like a blueprint containing the instructions for building a container: an immutable snapshot of the file system and application settings. Developers can easily share images.

# List images
docker images

This command will display a table with the following heading:

REPOSITORY     TAG     IMAGE ID     CREATED     SIZE
---

Below, other commands are shown in the same format: first the command with a comment, then an example of what it can output.

▍Containers


A container is an executable package that contains everything needed to run an application. With this approach an application always behaves the same regardless of the infrastructure: it runs in an isolated environment that is identical everywhere, because in every environment you launch instances of the same image.

# List all containers
docker ps -a
CONTAINER ID     IMAGE     COMMAND     CREATED     STATUS     PORTS     NAMES
---

▍Tags


A tag is an indication of a specific version of an image.
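
As a rough illustration (the image and tag names here are hypothetical), an existing image can be given an additional, more specific tag like this:

# Give the local image "my-app" an explicit version tag in addition to "latest"
docker tag my-app:latest my-app:v1.0.0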

▍ Docker Command Summary


Here is an overview of some commonly used Docker commands.
Command                Context              Action
docker build           Image                Build an image from a Dockerfile
docker tag             Image                Tag an image
docker images          Image                List images
docker run             Container            Run a container based on an image
docker push            Image                Push an image to a registry
docker pull            Image                Pull an image from a registry
docker ps              Container            List containers
docker system prune    Image / Container    Remove unused containers and images

▍ Dockerfile


I already know how to run the production application locally. I have a Webpack configuration that builds the production React application, and a command that starts the Node.js server on port 5000. It looks like this:

npm i         # install dependencies
npm run build # build the React application
npm run start # start the Node server

Note that I do not provide a sample application for this article; any simple Node application will do for experimenting.

To containerize the application, you need to give Docker instructions. This is done through a file called Dockerfile located in the root directory of the project. At first this file looks rather cryptic.

But all it contains is a set of special instructions describing something very similar to setting up a working environment. Here are some of them:

  • FROM - starts the file and specifies the base image the container is built from.
  • COPY - copies files from a local source into the container.
  • WORKDIR - sets the working directory for the commands that follow.
  • RUN - runs a command.
  • EXPOSE - declares a port.
  • ENTRYPOINT - specifies the command to execute when the container starts.

A Dockerfile might look something like this:

# Base image
FROM node:12-alpine

# Copy the project files into app/
COPY . app/

# Make app/ the working directory
WORKDIR app/

# Install dependencies (npm ci is like npm i, but intended for automated builds)
RUN npm ci --only-production

# Build the React application for production
RUN npm run build

# Expose the application port
EXPOSE 5000

# Start the Node server
ENTRYPOINT npm run start

Depending on the base image you choose, you may need to install additional dependencies. Some base images (such as Node Alpine Linux) are designed to be as compact as possible, so they may lack programs you are counting on.
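
For example, if your dependencies include npm packages with native add-ons, a sketch of the kind of line you might add to the Dockerfile above could look like this (the exact packages are an assumption and depend on your project):

# Install the build tools some npm packages need to compile native add-ons (Alpine example)
RUN apk add --no-cache python3 make g++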

▍ Building, tagging and running the container


Once we have a Dockerfile, building and running the container locally is quite simple. Before pushing an image to the Docker Hub, you should test it locally.

▍ Build


First, build the image, giving it a name and, optionally, a tag (if no tag is specified, the image is automatically tagged latest).

# Build the image
docker build -t <image>:<tag> .

After executing this command, you can observe how Docker builds the image.

Sending build context to Docker daemon   2.88MB
Step 1/9 : FROM node:12-alpine
 ---> ...  ...
Successfully built 123456789123
Successfully tagged <image>:<tag>

The build can take a couple of minutes, depending on how many dependencies you have. Once it completes, you can run docker images and take a look at the description of your new image.

REPOSITORY          TAG               IMAGE ID            CREATED              SIZE
<image>             latest            123456789123        About a minute ago   x.xxGB

▍ Run


The image is built, which means we can launch a container based on it. Since I want to access the application running inside the container at localhost:5000, I put 5000 on the left side of the 5000:5000 pair in the command below. The right side is the container port.

# Run the container, mapping local port 5000 to container port 5000
docker run -p 5000:5000 <image>:<tag>

Now that the container is created and running, you can use docker ps to see information about it (or docker ps -a, which shows all containers, not just the running ones).

CONTAINER ID        IMAGE               COMMAND                  CREATED              STATUS                      PORTS                    NAMES
987654321234        <image>             "/bin/sh -c 'npm run…"   6 seconds ago        Up 6 seconds                0.0.0.0:5000->5000/tcp   stoic_darwin

If you now go to localhost:5000, you will see the page of the running application, looking exactly the same as it does in the production environment.
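
If you prefer the command line, a quick sanity check might look like this (assuming the application responds on its root path):

# Request only the response headers to confirm the server answers on port 5000
curl -I http://localhost:5000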

▍ Tag Assignment and Publication


In order to use one of these images on the production server, we need to be able to pull it from the Docker Hub. That means we first have to create a repository for the project on the Docker Hub, which gives us a place to push the image to. The image must be renamed so that its name starts with our Docker Hub username, followed by the repository name, and ends with an optional tag. The commands below show images named according to this scheme.

Now you can build the image under its new name and run docker push to send it to the Docker Hub repository.

docker build -t <username>/<repository>:<tag> .
docker tag <username>/<repository>:<tag> <username>/<repository>:latest
docker push <username>/<repository>:<tag>

# With real values this might look, for example, like this:
docker build -t user/app:v1.0.0 .
docker tag user/app:v1.0.0 user/app:latest
docker push user/app:v1.0.0

If everything goes well, the image will be available on the Docker Hub, where it can easily be pulled onto a server or shared with other developers.

Next steps


So far, we have verified that the application, packaged as a Docker container, runs locally, and we have pushed the image to the Docker Hub. That is already good progress towards the goal. Now we need to solve two more problems:

  • Configuring a CI tool for testing and deploying code.
  • Setting up the production server so that it can pull and run our code.

In our case, Travis CI is used as the CI/CD solution, and DigitalOcean as the server.

Note that you could use a different combination of services here. For example, instead of Travis CI you could use CircleCI or GitHub Actions, and instead of DigitalOcean, AWS or Linode.

I decided to go with Travis CI, partly because I already have some things configured there. Below I will briefly describe how to get it ready for work.

Travis CI


Travis CI is a tool for testing and deploying code. I do not want to dive into the intricacies of configuring Travis CI, since every project is unique and that would not be of much use. But I will cover the basics you need to get started if you decide to use it. Whether you choose Travis CI, CircleCI, Jenkins, or something else, the configuration approach will be similar everywhere.

To start working with Travis CI, go to the project website and create an account, then integrate Travis CI with your GitHub account. While setting things up, you will need to specify the repository whose workflow you want to automate and grant access to it. (I use GitHub, but I am sure Travis CI can also integrate with Bitbucket, GitLab, and other similar services.)

Each time Travis CI picks up a job, it starts a server that executes the commands specified in the configuration file, including deployment of the relevant repository branches.

▍ Job life cycle


The Travis CI configuration file, called .travis.yml and stored in the root directory of the project, supports the concept of job lifecycle phases. Here they are, listed in the order in which they occur:

  • apt addons
  • cache components
  • before_install
  • install
  • before_script
  • script
  • before_cache
  • after_success or after_failure
  • before_deploy
  • deploy
  • after_deploy
  • after_script

▍Testing


In the configuration file I set up the Travis CI build environment: I chose Node 12 as the language and told the system to install what is needed to use Docker.

Unless specified otherwise, everything listed in .travis.yml runs for every pull request against every branch of the repository. This is useful, because it means we can test all code coming into the repository and know whether it is ready to be merged into master and whether it will break the project build. In this global configuration I install everything, start the Webpack development server in the background (a peculiarity of my workflow), and run the tests.

If you want to display code coverage badges in your repository, here are brief instructions on using Jest, Travis CI, and Coveralls to collect and display that information.

So, here are the contents of the .travis.yml file:

# Language
language: node_js

# Node.js version
node_js:
  - '12'

services:
  # Use the Docker service
  - docker

install:
  # Install dependencies
  - npm ci

before_script:
  # Start the development server in the background
  - npm run dev &

script:
  # Run the tests
  - npm run test

That is where the actions performed for all branches of the repository and for pull requests end.

▍ Deployment


Assuming all automated tests pass, we can optionally deploy the code to the production server. Since we only want to do this for code from the master branch, we tell the system so in the deployment settings. Before you try to use the code we will look at next in your own project, keep in mind that you need an actual script to be called for the deployment.

deploy:
  # Build the Docker image and deploy it to the server via the Docker Hub
  provider: script
  script: bash deploy.sh
  on:
    branch: master

The deployment script solves two problems:

  • Building, tagging, and pushing the image to the Docker Hub using the CI tool (Travis CI in our case).
  • Pulling the image onto the server, stopping the old container, and starting a new one (in our case, the server runs on DigitalOcean).

First, we need to automate building, tagging, and pushing the image to the Docker Hub. This is all very similar to what we already did manually, except that we need a strategy for assigning unique tags to images and for automating the login. I struggled with some details of the deployment script, such as the tagging strategy, logging in, encoding SSH keys, and establishing an SSH connection. Fortunately, my partner is very good with bash, among many other things, and helped me write this script.

So, the first part of the script pushes the image to the Docker Hub. This is pretty simple. The tagging scheme I use combines the git hash and the git tag, if one exists; this produces a unique tag and makes it easy to identify the build it is based on. DOCKER_USERNAME and DOCKER_PASSWORD are user environment variables that can be set through the Travis CI interface. Travis CI handles sensitive data automatically so it does not fall into the wrong hands.

Here is the first part of the deploy.sh script.

#!/bin/sh
set -e # Exit on error

IMAGE="<username>/<repository>"                             # Docker image name
GIT_VERSION=$(git describe --always --abbrev --tags --long) # Git hash and tags

# Build and tag the image
docker build -t ${IMAGE}:${GIT_VERSION} .
docker tag ${IMAGE}:${GIT_VERSION} ${IMAGE}:latest

# Log in to the Docker Hub and push the image
echo "${DOCKER_PASSWORD}" | docker login -u "${DOCKER_USERNAME}" --password-stdin
docker push ${IMAGE}:${GIT_VERSION}

What the second part of the script looks like depends entirely on which host you are using and how you connect to it. In my case, since I use DigitalOcean, the doctl commands are used to connect to the server. On AWS it would be the aws utility, and so on.

Setting up the server was not particularly difficult: I created a droplet from a base image. Note that the setup I chose requires a one-time manual installation of Docker and a one-time manual start of Docker. I installed Docker on Ubuntu 18.04, so if you use Ubuntu as well, you can simply follow this straightforward guide.
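
For reference, the one-time setup on an Ubuntu 18.04 droplet might look roughly like this (a sketch based on the standard Ubuntu packages, not the exact commands from that guide):

# Install Docker from the Ubuntu repositories and make sure it starts on boot
sudo apt-get update
sudo apt-get install -y docker.io
sudo systemctl enable --now docker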

I will not go into the specific commands for my hosting service here, since they can vary a lot from case to case. I will only outline the general plan of action performed after connecting via SSH to the server on which the project is deployed:

  • Find the container that is currently running and stop it.
  • Start a new container in the background.
  • Map the server's local port 80 to the container, so the site can be reached at an address like example.com, without specifying the port, rather than at example.com:5000.
  • Remove all old containers and images.

Here is the continuation of the script.

# Find the ID of the running container
CONTAINER_ID=$(docker ps | grep takenote | cut -d" " -f1)

# Stop the old container, start a new one, and clean up
docker stop ${CONTAINER_ID}
docker run --restart unless-stopped -d -p 80:5000 ${IMAGE}:${GIT_VERSION}
docker system prune -a -f

Some Things to Consider


When you connect to the server via SSH from Travis CI, you may see a warning that stops the job from continuing, because the system waits for a response from the user.

The authenticity of host '<hostname> (<IP address>)' can't be established.
RSA key fingerprint is <key fingerprint>.
Are you sure you want to continue connecting (yes/no)?

I learned that a key string can be base64-encoded to store it in a form that is convenient and reliable to work with. At the install stage, you can decode the public key and write it to the known_hosts file to get rid of the error above.

echo <public key> | base64 # outputs <the key encoded in base64>

In practice, this command may look like this:

echo "123.45.67.89 ssh-rsa AAAAB3NzaC1yc2EAAAABIwAAAQEAklOUpkDHrfHY17SbrmTIpNLTGK9Tjom/BWDSU
GPl+nafzlHDTYW7hdI4yZ5ew18JH4JW9jbhUFrviQzM7xlELEVf4h9lFX5QVkbPppSwg0cda3
Pbv7kOdJ/MTyBlWXFCR+HAo3FXRitBqxiX1nKhXpHAZsMciLq8V6RjsNAQwdsdMFvSlVK/7XA
t3FaoJoAsncM1Q9x5+3V0Ww68/eIFmb1zuUFljQJKprrX88XypNDvjYNby6vw/Pb0rwert/En
mZ+AW4OZPnTPI89ZPmVMLuayrD2cE86Z/il8b+gw3r3+1nKatmIkjn2so1d01QraTlMqVSsbx
NrRFi9wrf+M7Q== you@example.com" | base64

And here is what it outputs, a base64-encoded string:

MTIzLjQ1LjY3Ljg5IHNzaC1yc2EgQUFBQUIzTnphQzF5YzJFQUFBQUJJd0FBQVFFQWtsT1Vwa0RIcmZIWTE3U2JybVRJcE5MVEdLOVRqb20vQldEU1UKR1BsK25hZnpsSERUWVc3aGRJNHlaNWV3MThKSDRKVzlqYmhVRnJ2aVF6TTd4bEVMRVZmNGg5bEZYNVFWa2JQcHBTd2cwY2RhMwpQYnY3a09kSi9NVHlCbFdYRkNSK0hBbzNGWFJpdEJxeGlYMW5LaFhwSEFac01jaUxxOFY2UmpzTkFRd2RzZE1GdlNsVksvN1hBCnQzRmFvSm9Bc25jTTFROXg1KzNWMFd3NjgvZUlGbWIxenVVRmxqUUpLcHJyWDg4WHlwTkR2allOYnk2dncvUGIwcndlcnQvRW4KbVorQVc0T1pQblRQSTg5WlBtVk1MdWF5ckQyY0U4NlovaWw4YitndzNyMysxbkthdG1Ja2puMnNvMWQwMVFyYVRsTXFWU3NieApOclJGaTl3cmYrTTdRPT0geW91QGV4YW1wbGUuY29tCg==

And here is the install step mentioned above:

install:
  - echo <the base64-encoded public key> | base64 -d >> $HOME/.ssh/known_hosts

The same approach can be used for the private key when establishing the connection, since you may well need a private key to access the server. You just need to make sure the key is stored safely in a Travis CI environment variable and is never printed anywhere.
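
A hypothetical sketch of that install step might look like the following (SSH_PRIVATE_KEY is an assumed name for the Travis CI environment variable, not something taken from my actual configuration):

install:
  # Restore the private key from a base64-encoded environment variable and lock down its permissions
  - mkdir -p $HOME/.ssh
  - echo "${SSH_PRIVATE_KEY}" | base64 -d > $HOME/.ssh/id_rsa
  - chmod 600 $HOME/.ssh/id_rsa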

Another thing to keep in mind is that you may need to run the whole deployment part as a single line, for example with doctl. This can take some extra effort.

doctl compute ssh <droplet> --ssh-command "<command 1> && <command 2>"

TLS/SSL and load balancing


After doing everything described above, the last remaining problem was that the server had no SSL. Since I run a Node.js server, getting an Nginx reverse proxy with Let's Encrypt to work would require a fair amount of tinkering.

I really did not feel like configuring all that SSL by hand, so I simply created a load balancer and pointed DNS at it. On DigitalOcean, for example, creating an automatically renewing certificate on a load balancer is a simple, free, and fast procedure. The approach has an added advantage: if necessary, it makes it very easy to set up SSL for many servers sitting behind the load balancer. The servers themselves do not have to "think" about SSL at all and simply use port 80 as usual. Setting up SSL on a load balancer is much simpler and more convenient than the alternative ways of configuring it.

Now you can close all inbound ports on the server except port 80, used for communication with the load balancer, and port 22 for SSH. Any attempt to reach the server directly on any other port will then fail.
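
On an Ubuntu droplet, one way to do this (a sketch using ufw; DigitalOcean's cloud firewall would work just as well) might be:

# Deny all incoming traffic except SSH and HTTP
sudo ufw default deny incoming
sudo ufw allow 22/tcp
sudo ufw allow 80/tcp
sudo ufw enable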

Summary


After doing everything described in this article, I am no longer intimidated by the Docker platform or by the concept of automated CI/CD pipelines. I managed to set up a continuous integration pipeline in which the code is tested before it reaches production and then automatically deployed to the server. All of this is still relatively new to me, and I am sure there are ways to make my automated workflow better and more efficient, so if you have ideas on the subject, let me know. I hope this article has been useful to you, and that reading it taught you as much as I learned while figuring all of this out.


Dear readers! Do you use CI/CD technologies in your projects?

