Preparing a server for publishing a Python web app

Good day, Habrovsk!

I had some free time and a desire to build a small web application. There was an idea (receive data from a weather sensor, store it in a database and then do something interesting with it), and a spare CentOS server as well. Tutorials on this kind of setup also seem plentiful... but at the time of writing, not a single fully working one was found. If you also want to deploy an application on a CentOS 7.4 server using the python3.*, uwsgi and nginx stack, welcome under the cut.

So, here is what you should already have (and what will not be covered in this article):

  1. A physical or virtual server running CentOS 7.4 (the steps are not guaranteed to work on other operating systems and versions).
  2. Access to the server with superuser privileges (if the server is virtual, it is assumed that you are able to connect to it via SSH).

After connecting as the root user, the first step is to create a new user with administrator rights (this is not strictly necessary, but it is good form):

adduser developer
passwd developer

Here you will need to enter and confirm the new user's password.

usermod -aG wheel developer

Add the user to the wheel (administrators) group.

su - developer

Switch to the freshly created user.

Now prepare the environment:

sudo yum install -y epel-release
sudo yum install -y python3 python3-pip python3-devel nginx gcc
sudo yum update -y python36 python2
sudo yum groupinstall -y "Development Tools" "Development Libraries"
sudo yum install -y python3-devel python36-virtualenv

I would like to draw attention to the penultimate line: "Development Tools" and "Development Libraries" are needed for the virtual environment to start correctly. The -y flag is used so that the commands run without asking for confirmation.

Now you can start creating the working directory:

sudo mkdir /opt/myproject && cd /opt/myproject
sudo chown -R developer /opt/
python3 -m venv myprojectenv
source myprojectenv/bin/activate

pip3 install --upgrade pip
pip3 install uwsgi flask
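
Before moving on, it can be worth a quick sanity check that both packages actually landed inside the virtual environment. A minimal sketch (the file name check_env.py is my own, not part of the project):

# check_env.py - run with the virtual environment activated
import shutil

import flask  # raises ImportError if Flask did not install into the environment

print("Flask imported from:", flask.__file__)   # should point inside myprojectenv
print("uwsgi binary:", shutil.which("uwsgi"))   # should point to myprojectenv/bin/uwsgi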

Now we'll prepare the entry point for the web application (from here on I will use the vi editor, but that is a matter of taste):

vi /opt/myproject/wsgi.py

def application(environ, start_response):
    # a minimal WSGI callable; note that the response body must be bytes under Python 3
    start_response('200 OK', [('Content-Type', 'text/html')])
    return [b"<h1 style='color:blue'>Testing Success!</h1>"]

Once launched, this code will let us make sure that all the previous steps were successful. To test it, start uwsgi manually (from /opt/myproject, with the virtual environment still active):

uwsgi --socket 0.0.0.0:8080 --protocol=http -w wsgi

With physical access or an RDP connection, the result can be checked in a browser at 0.0.0.0:8080. With an SSH connection, open a new session and test with curl:

curl -v 0.0.0.0:8080

The -v flag allows you to see the full output.
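
If curl is not at hand, the same check can be done with a few lines of Python from the second session. A minimal sketch using only the standard library (it assumes uwsgi is still listening on port 8080, as started above):

# quick_check.py - fetch the test page served by uwsgi
from urllib.request import urlopen

with urlopen("http://127.0.0.1:8080") as response:
    print(response.status)            # expect 200
    print(response.read().decode())   # expect the blue "Testing Success!" heading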

After testing is complete, terminate the service by pressing CTRL+C (otherwise you will have to find the process and kill it with kill; that also works, but is definitely less convenient).

Now you can make the main web application:

vi /opt/myproject/myproject.py

from flask import Flask
application = Flask(__name__)

@application.route("/")
def hello():
    return "<h1 style='color:blue'>Hello There!</h1>"

if __name__ == "__main__":
    application.run(host='0.0.0.0')

The testing process is quite similar to the previous one: launch the application with python3 /opt/myproject/myproject.py and check it the same way (Flask's development server listens on port 5000 by default). The result should be a similar page, this time saying "Hello There!".
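
Alternatively, the route can be exercised without binding a port at all, using Flask's built-in test client. A minimal sketch (it assumes it is run from /opt/myproject with the virtual environment activated):

# test_route.py - call the "/" route without starting a server
from myproject import application

client = application.test_client()
response = client.get("/")
print(response.status_code)              # expect 200
print(response.get_data(as_text=True))   # expect the "Hello There!" heading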

Now edit the entry point so that when it is called, our application starts:

vi /opt/myproject/wsgi.py

from myproject import application

if __name__ == "__main__":
    application.run()
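
A quick way to make sure the entry point is wired up correctly is to import it the same way uwsgi will. A minimal sketch (run from /opt/myproject with the virtual environment active):

# entry_check.py - mimic what uwsgi does: import the application callable from wsgi.py
from wsgi import application

print(application)   # should print the Flask object defined in myproject.py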

Now for the most difficult and interesting part: it is time to turn this application into a service that will run automatically and in the background.

First, prepare the ini-file for uwsgi:

vi /opt/myproject/myproject.ini

[uwsgi]
wsgi-file = wsgi.py

master = true
processes = 2

uid = developer
socket = /tmp/uwsgi/myproject.sock
chown-socket = nginx:nginx
chmod-socket = 666
vacuum = true

die-on-term = true

Here I want to pay attention to the following lines:

  1. socket = /tmp/uwsgi/myproject.sock - a unix socket is used to speed up the service, and so that nginx can connect to it, it is created in the temporary folder /tmp/uwsgi (a quick way to check the socket from Python is shown after this list).
  2. chown-socket = nginx:nginx transfers ownership of the socket to the nginx user.
  3. chmod-socket = 666 - serves the same purpose as the previous line. Different manuals and tips suggest different values (664, 665, 777), but 666 was found experimentally to be the minimal value that works.
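
Once the service described in the next step is up, a quick way to confirm that the socket really exists and accepts connections is to poke it from Python. A minimal sketch using only the standard library:

# socket_check.py - verify that the uwsgi unix socket is reachable
import socket

path = "/tmp/uwsgi/myproject.sock"
with socket.socket(socket.AF_UNIX, socket.SOCK_STREAM) as sock:
    sock.connect(path)   # raises FileNotFoundError or ConnectionRefusedError on failure
    print("connected to", path)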

Now you can proceed directly to creating the service:

sudo vi /etc/systemd/system/myproject.service

[Unit]
Description=uWSGI instance to serve myproject app
After=network.target

[Service]
ExecStartPre=-/usr/bin/bash -c 'mkdir -p /tmp/uwsgi && chown nginx:nginx /tmp/uwsgi'
ExecStart=/usr/bin/bash -c 'cd /opt/myproject; source myprojectenv/bin/activate; uwsgi --ini myproject.ini'
ExecStartPost=/usr/bin/bash -c 'setenforce 0'
PrivateTmp=false
[Install]
WantedBy=multi-user.target

All the interesting lines are in the [Service] block:

  1. ExecStartPre is a command that the service manager executes before starting the service. The leading "-" tells systemd to ignore a non-zero exit code from this step; /usr/bin/bash -c introduces the command string, and 'mkdir -p /tmp/uwsgi && chown nginx:nginx /tmp/uwsgi' creates the temporary folder for the socket and hands it over to the nginx user (a small Python illustration of this step follows the list).
  2. ExecStart is the command that actually launches the service: in it we successively go to the working directory, activate the virtual environment and start the uwsgi server from the ini file.
  3. ExecStartPost is a command that runs after the service starts. In our case it runs setenforce 0, switching SELinux to permissive mode; this is required so that nginx can pass requests on to the uwsgi socket.
  4. PrivateTmp=false makes the created temporary folder visible to other processes.
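
For clarity, what ExecStartPre does can be expressed in a few lines of Python. This is only an illustration of the same mkdir and chown steps, not something that needs to be run; it assumes root privileges and an existing nginx user:

# illustration of the ExecStartPre step
import os
import shutil

os.makedirs("/tmp/uwsgi", exist_ok=True)                  # mkdir -p /tmp/uwsgi
shutil.chown("/tmp/uwsgi", user="nginx", group="nginx")   # chown nginx:nginx /tmp/uwsgi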

Now run the service:

sudo systemctl start myproject
sudo systemctl status myproject
sudo systemctl enable myproject

The last command makes the service start automatically after a server reboot.

The service log can always be viewed with:

journalctl -u myproject

And now the final push: we will configure the nginx server and make our web application available to the external network.

sudo vi /usr/lib/systemd/system/nginx.service

Find the [Service] block and add PrivateTmp=false at the end.

After that, reload the systemd configuration:

sudo systemctl daemon-reload

Now we proceed directly to the server configuration:

vi /etc/nginx/nginx.conf

http {
    . . .

    include /etc/nginx/conf.d/*.conf;

    server {
        listen 80 default_server;

        . . .

Find the http block and add a new server:

server {
    listen 80;
    server_name server_domain_or_IP;

    location / {
        include uwsgi_params;
        uwsgi_pass unix:/tmp/uwsgi/myproject.sock;
    }
}

Now it remains to start the nginx server and adjust the firewall rules:

sudo systemctl start nginx
sudo systemctl enable nginx

sudo firewall-cmd --zone=public --permanent --add-service=http
sudo firewall-cmd --zone=public --permanent --add-service=https

sudo firewall-cmd --reload

Now, when accessing the server by its domain name or IP address, we will receive a response from our Flask application.
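
As a final end-to-end check, the same request can be made from any machine with Python 3. A minimal sketch (replace server_domain_or_IP with the real address, just like in the nginx config):

# end_to_end_check.py - request the app through nginx on port 80
from urllib.request import urlopen

with urlopen("http://server_domain_or_IP/") as response:
    print(response.status)            # expect 200
    print(response.read().decode())   # expect the "Hello There!" heading from the Flask app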

I hope this material will be useful.
