A portal for test environments, or: save our DevOps



A couple of years ago, we found ourselves in something like a surreal dream. Everyone around us was moving testing to the cloud (it is convenient to spin test environments up and tear them down), and we tried to figure out which tools we should deliver out of the box. To do this, together with our customers, we examined how their DevOps processes actually work. It turned out that only a few companies in Russia use automation competently.

I should say right away that we mostly talked either with companies whose development staff is up to 150-200 people, or with industries where IT is traditionally hard. Larger companies usually have both an established process and a cloud of their own, and they come to us for backup deployments.

Production is usually well established: there is a cadence, a release plan, a target, and the code moves toward that target together with the developers.

Testing and QA are also usually well tuned.

And between them lies an abyss, which DevOps tries to bridge. This superman is supposed to take the release (and ideally build it in Jenkins or something similar), create a virtual machine, deploy everything onto it, check that it works, maybe run a couple of pre-tests, and hand it over to QA.

What is the problem?


When the product is something small like a web application, there is no problem: the developer or tester simply takes the database from a backup, connects it to the latest release, and carries on.

But as soon as the product grows a little, everything collapses.

The DevOps engineer gets the release, and it just refuses to install. Then he starts looking for the culprit and chasing down the developers. A release may take several hours to build, at night, while the developers usually sit in the office during the day.

Almost everything is done by hand. The ops engineer sits and watches the build's progress bar, because anything can go wrong at any point. The more energetic ones glue all this together with their own scripts, and sometimes the result is very neat and efficient. But far more often we see that the delivery plan is carried out in steps, a separate person is responsible for each step, and it is easier for him to repeat the procedure by hand twenty times, because otherwise it simply does not work.

At the same time, someone has to be responsible for the process as a whole, and that is the main blocker. Attempts to find such a person often end in failure. After all, that person would be accountable for automation end to end, and he alone would have to care about it. Being responsible for one's own section and then shifting the blame to someone in development is, of course, always easier.

Virtualization adds more chaos. Virtual environments are always a mess. The person responsible for the hardware (servers, racks) has a poor idea of who owns what and whether it is still needed. It is only logical that a system administrator does not worry much about what the developers are doing. DevOps should, but his role usually does not imply such knowledge. So he is afraid to switch off anything, because he cannot tell whether it will still be needed.

Then internal accounts have to be issued for cross-charging between departments. Someone counts uptime, someone counts CPU usage, and then costs such as electricity and the administrators' work are split equally or in some proportion.
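As an illustration (department names, usage numbers, and the flat rate below are all hypothetical), a minimal sketch of splitting a shared bill between departments in proportion to consumed CPU-hours could look like this:

```python
# Hypothetical sketch: split shared costs (electricity, admin work)
# between departments in proportion to their consumed CPU-hours.

def split_shared_costs(cpu_hours_by_dept, shared_cost):
    """Return each department's share of a common bill."""
    total = sum(cpu_hours_by_dept.values())
    if total == 0:
        # Nobody used anything this period: split equally.
        equal = shared_cost / len(cpu_hours_by_dept)
        return {dept: equal for dept in cpu_hours_by_dept}
    return {
        dept: shared_cost * hours / total
        for dept, hours in cpu_hours_by_dept.items()
    }

usage = {"dev": 600.0, "qa": 300.0, "analytics": 100.0}
shares = split_shared_costs(usage, shared_cost=10_000.0)
# dev carries 60% of the bill, qa 30%, analytics 10%
```

The same skeleton works for uptime-based accounting: only the metric fed into the dictionary changes.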

Surprisingly, there is no off-the-shelf product


It would seem that this whole pipeline should be covered by some kind of product. There are many good point solutions. Ansible, for example, deploys perfectly but cannot manage virtual machines, while hypervisors can. All the tools for DevOps scripting are there, you can wire the modules together... You just need to take this software and assemble it into a chain, right?

Not quite. Banks and state-owned companies came to us wanting to test in the cloud. Every good security officer is prone to paranoia, often with good reason. For a bank's information security team, every new installation is a very annoying factor. I know of one case where an ops engineer dragged Jenkins and Terraform into the infrastructure, backed them with bash scripts, and then added a DBMS to hold it all together. The result was a decent semi-automatic pipeline that could have been polished into a fully autonomous deployment. For information security, though, it was a disaster.

They wanted everything in one place, including management of virtual machines from different vendors (OpenStack among them). A single customer may run VMware, Hyper-V, and something old and terrible kept around to support FreeBSD or OS/2.

We needed a bicycle of our own


So we wrote our own platform. Under the hood is Ansible; integration with Jenkins comes out of the box. You can write your own Ansible playbooks. It covers everything from carving out subnets to release management.



The portal lives in the paradigm of the test environment; that is its core concept. One test environment = one subnet. And if things are really bad, there is RPA integration for cases where there is no API and a robot has to emulate user actions and click buttons in the UI. There is billing: uptime and utilization are counted from the creation request to the teardown request, so until a destruction request is filed, the money keeps dripping.
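The "one environment = one subnet, billed until a destruction request arrives" model can be sketched in a few lines. This is not the portal's actual code; the class, the flat hourly rate, and the example dates are all illustrative assumptions:

```python
# Hypothetical sketch: one test environment owns one subnet, and billing
# keeps accruing from creation until a teardown request is filed.
import ipaddress
from datetime import datetime, timedelta

class TestEnvironment:
    RATE_PER_HOUR = 2.0  # assumed flat rate, in abstract currency units

    def __init__(self, name, subnet, created_at):
        self.name = name
        self.subnet = ipaddress.ip_network(subnet)  # one env = one subnet
        self.created_at = created_at
        self.destroyed_at = None  # set when the teardown request arrives

    def request_destruction(self, when):
        self.destroyed_at = when

    def billed_amount(self, now):
        """The money 'keeps dripping' until the destruction request."""
        end = self.destroyed_at or now
        hours = (end - self.created_at) / timedelta(hours=1)
        return hours * self.RATE_PER_HOUR

env = TestEnvironment("crm-release-42", "10.20.30.0/24",
                      created_at=datetime(2019, 1, 1, 8, 0))
env.request_destruction(datetime(2019, 1, 1, 20, 0))  # 12 hours later
# 12 h at 2.0/h -> 24.0 billed, regardless of how late the bill is computed
```

The point of the model is that a forgotten environment shows up on the bill, which is exactly what motivates people to file the teardown request.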

This is how it looks. Creating an environment from a template:



Adding a virtual machine:



Scripting:



As we found out a little later, the complaint "I do not want to juggle 50 systems" is heard from all sides. We had hit a real pain point. Every large customer with a testing practice wanted something similar, but for some reason had not solved it, or had solved it organizationally with people writing scripts. The problems vary widely, from nameless virtual machines (deleted, until someone remembers it was needed) to the fact that no one wants to be responsible for the deployment scripts. Deployment scripts are hard to write, and the regulations suffer too. Somewhere in the cycle there should be data anonymization, and in practice it looks like "sometime at the beginning of the year we anonymized the database", while the software has changed six times since then and the data has been updated.
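To make the anonymization point concrete, here is a minimal sketch (the salt, field names, and token format are illustrative assumptions, not part of any product described above). Deterministic pseudonymization lets a test database stay consistent: the same real value always maps to the same token, so joins between tables keep working, while the real values are never exposed:

```python
# Hypothetical sketch: deterministic pseudonymization of customer data
# before it lands in a test environment. HMAC with a secret salt keeps
# referential integrity (equal inputs -> equal tokens) without exposing
# the real values.
import hashlib
import hmac

SECRET_SALT = b"rotate-me-on-every-refresh"  # assumed to live in a vault

def pseudonymize(value: str) -> str:
    """Map a sensitive string to a stable, opaque token."""
    digest = hmac.new(SECRET_SALT, value.encode(), hashlib.sha256)
    return "anon_" + digest.hexdigest()[:12]

row = {"email": "ivan@example.com", "balance": 1042}
anon_row = {**row, "email": pseudonymize(row["email"])}
# The same email anywhere in the dump maps to the same token,
# so cross-table references survive the anonymization pass.
```

Run as a step of the environment-refresh pipeline rather than "once at the beginning of the year", this keeps the anonymized data in step with the schema.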

In general, if something similar hurts you too, just come and have a look. Demo access is available in the Technoserv Cloud.
