Why we switched to Selenide, writing more than 200 new autotests along the way

Hi, I’m a test automation engineer on one of the projects of a large company. In this article, I will explain why we decided to switch from Serenity to Selenide. It was a large-scale task, and although changing the technology stack took some time, it more than paid for itself later by speeding up test writing and regression runs.


Before we get to the point, a little about the task as a whole.

Unfortunately, I cannot disclose the details of the project under the terms of an NDA. In essence, it is a set of tools for the call center of a large company: call routing, distribution of operators by call topic, and so on, all wrapped in a nice user interface. The interface, by the way, is provided not only to operators but also to controllers who listen to and evaluate selected calls. On top of all this there is an access control system and an admin panel that lets you configure all of the built-in functions, from telephony to the ratings available to the controllers.

The entire scope of functionality is divided into several projects: telephony, the user panel, and the admin panel. All three are under active development, and my task is to automate testing on these projects in the broadest sense of the word.

Automated tests already existed for some of the developed functionality, but the specialist who wrote them had left the company, so a certain amount of test automation debt had accumulated. In total, about 50 autotests had been written, but the vast majority of them were completely out of date, because the functionality had moved far ahead. So initially the task was to bring all of this up to date.

Stack update


The existing autotests were written with the Serenity library. In practice they had to be rewritten from scratch, so there was no point in clinging to the existing code. The library itself was familiar to me, but personally I do not consider it an optimal tool. So, understanding the amount of work ahead, I decided to switch to Selenide from the very start.

Selenide is a fairly popular tool with many ready-made solutions available. Some of Selenide’s features are also offered by its alternatives: Atlas, Selenium, HtmlElements, and others. But each of them failed to suit us in its own way.

Selenium is the foundation Selenide is built on, but for our purposes it is too low-level. There is no point in reinventing the wheel when a ready-made solution exists.
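To illustrate the difference in level, here is a rough sketch (the URL, locator, and class name are made up): the same check written first with raw Selenium and then with Selenide.

```java
import com.codeborne.selenide.Condition;
import org.openqa.selenium.By;
import org.openqa.selenium.WebDriver;
import org.openqa.selenium.WebElement;
import org.openqa.selenium.chrome.ChromeDriver;
import org.openqa.selenium.support.ui.ExpectedConditions;
import org.openqa.selenium.support.ui.WebDriverWait;

import java.time.Duration;

import static com.codeborne.selenide.Selenide.$;
import static com.codeborne.selenide.Selenide.open;

class LevelOfAbstraction {

    // Raw Selenium: the driver lifecycle and waiting are entirely on you.
    void withSelenium() {
        WebDriver driver = new ChromeDriver();
        try {
            driver.get("https://example.com/login");
            WebElement submit = new WebDriverWait(driver, Duration.ofSeconds(10))
                    .until(ExpectedConditions.elementToBeClickable(By.id("submit")));
            submit.click();
        } finally {
            driver.quit();
        }
    }

    // Selenide: the browser is started lazily and closed automatically,
    // and waiting is built into the check itself.
    void withSelenide() {
        open("https://example.com/login");
        $("#submit").shouldBe(Condition.enabled).click();
    }
}
```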

Atlas appeared only recently; it is still rather raw and has no Groovy support.
HtmlElements is fine in every other respect, but it is deprecated and no longer maintained. There is also JDI, but it has multithreading issues.

Serenity, previously used on the project, was too cumbersome. Everything in it is so interconnected that adding a new event handler or interceptor meant rewriting a dozen classes (and even that did not always succeed). On top of that, Serenity could not be connected to Allure, the de facto industry and corporate standard for generating test reports.

In Selenide, from the automation point of view, everything is much simpler. For example, there is no need to describe steps separately or attach them to methods: since Allure is supported out of the box, every action on a web element automatically ends up in the test report.
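A minimal sketch of how this looks in practice (the base-class name is arbitrary, and JUnit 5 is assumed here): the AllureSelenide listener is registered once, and from then on every Selenide call on an element becomes a report step without any manual @Step annotations.

```java
import com.codeborne.selenide.logevents.SelenideLogger;
import io.qameta.allure.selenide.AllureSelenide;
import org.junit.jupiter.api.BeforeAll;

public abstract class BaseUiTest {

    @BeforeAll
    static void setUpAllureListener() {
        // After this, $("#save").click(), shouldBe(visible) and other element
        // actions show up as steps in the Allure report automatically.
        SelenideLogger.addListener("AllureSelenide", new AllureSelenide());
    }
}
```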

Naturally, Selenide supports PageFactory, Page Object, and PageElement, which makes the code more readable. It also has built-in waits for the moment an element appears: there is no need to explicitly pause the test for a few seconds before moving on (the timeout limit is set in the configs). Explicit waits exist separately, so you can override the timeout for specific elements at any test step.
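For instance, a built-in wait versus a per-check override might look like this (the locators and timeout values are purely illustrative):

```java
import com.codeborne.selenide.Configuration;

import java.time.Duration;

import static com.codeborne.selenide.Condition.visible;
import static com.codeborne.selenide.Selenide.$;

class WaitExamples {

    void waits() {
        // Global timeout for all should*/click waits, normally set once in the configs.
        Configuration.timeout = 4_000; // milliseconds

        // Built-in wait: Selenide polls until the element appears or the timeout expires.
        $("#search-results").shouldBe(visible);

        // Explicit override for a single check that is known to be slower.
        $("#report-table").shouldBe(visible, Duration.ofSeconds(30));
    }
}
```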

Conveniently, Selenide also comes with a whole set of ready-made methods.
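A few of those ready-made methods in action (the locators and values below are made up for illustration):

```java
import static com.codeborne.selenide.CollectionCondition.sizeGreaterThan;
import static com.codeborne.selenide.Condition.text;
import static com.codeborne.selenide.Selenide.$;
import static com.codeborne.selenide.Selenide.$$;

class ReadyMadeMethods {

    void fillAndCheck() {
        $("#operator-name").setValue("Ivanov");                  // clears the field and types the text
        $("#call-topic").selectOptionContainingText("Billing");  // picks an option of a <select>
        $("#save").click();
        $(".status").shouldHave(text("Saved"));                  // waits for the expected text
        $$(".call-row").shouldHave(sizeGreaterThan(0));          // collection assertion with waiting
    }
}
```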

Since I was alone on the team at the start of the transition from Serenity to Selenide, an additional advantage was that I already had solid experience with Selenide and Groovy, so I could quickly apply ready-made solutions and later spend less effort supporting them.

The transition was almost painless. We implemented Selenide together with Allure. Allure produces readable and even somewhat beautiful reports that clearly show which steps failed and with what error. Out of the box, you can attach screenshots of web pages to the reports; we also add a video of the run, the page source, and the browser and webdriver logs.
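A sketch of how the attachments can be wired up, assuming the allure-selenide integration (video recording requires separate tooling and is not shown): the listener is configured to save screenshots and page source, and browser logs can be attached manually.

```java
import com.codeborne.selenide.Selenide;
import com.codeborne.selenide.logevents.SelenideLogger;
import io.qameta.allure.Allure;
import io.qameta.allure.selenide.AllureSelenide;
import org.openqa.selenium.logging.LogType;

class Reporting {

    void configureListener() {
        SelenideLogger.addListener("AllureSelenide",
                new AllureSelenide()
                        .screenshots(true)       // screenshot on failed steps
                        .savePageSource(true));  // page source on failed steps
    }

    void attachBrowserLogs() {
        // Can be called, for example, from an afterEach hook for failed tests.
        Allure.addAttachment("browser console",
                String.join("\n", Selenide.getWebDriverLogs(LogType.BROWSER)));
    }
}
```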

Migrating the existing tests required minimal effort. Both Serenity and Selenide use PageObjects with @FindBy annotations, and Serenity and Allure use the same Step annotations. I had to update the model, the nesting of elements, and the order in which test steps are called. Some tests were deleted entirely, some were rewritten from scratch, and in some cases simply updating the locators was enough. In effect, the migration turned into the routine task that most UI automation engineers face when covering a web application with tests.
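Roughly what a migrated page object looks like on the Selenide side (the class name, locators, and step text are hypothetical): the @FindBy fields stay, and the @Step annotation now comes from Allure.

```java
import com.codeborne.selenide.SelenideElement;
import io.qameta.allure.Step;
import org.openqa.selenium.support.FindBy;

public class LoginPage {

    @FindBy(id = "login")
    private SelenideElement loginField;

    @FindBy(id = "password")
    private SelenideElement passwordField;

    @FindBy(css = "button[type='submit']")
    private SelenideElement submitButton;

    @Step("Log in as {user}")
    public void loginAs(String user, String password) {
        loginField.setValue(user);
        passwordField.setValue(password);
        submitButton.click();
    }
}

// In a test, the page is created via Selenide's page factory:
//     LoginPage loginPage = Selenide.open("/login", LoginPage.class);
```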

Test Update


After updating the technology stack, we took up the tests themselves. Part of this work, by the way, had already been completed as part of the transition to the new stack.

Given how far the tests had fallen behind the projects' functionality, they would have had to be rewritten anyway: that costs less time than hunting for inconsistencies in such a volume of code.

In total, about 220 autotests have been written so far, covering both the front end and the back end. Further development will focus on the back end, since most of the functional elements on the front end are already covered by autotests. At the same time, the front end changes constantly, so the more tests appear there, the more effort has to be spent on maintaining them.

Since we do not have infinite time, we always try to direct our efforts at what reduces the maintenance cost while covering as much functionality as possible. At the moment, autotest coverage slightly exceeds 50%.


The tests rewritten in Selenide have become more compact and precise thanks to extensive code reuse, all owing to the capabilities of the library itself.

While updating the tests, we kept in mind that they need to run concurrently, in several threads. Tests used to be run on separate virtual machines, but the move to Selenide allowed us to parallelize even further by running them in multiple threads.

Roughly speaking, the switch to the new framework by itself increased the number of simultaneously running threads from 3 to 8, and the subsequent optimization of the tests brought it up to 50. As a result, 100 UI tests run in just 10 minutes.


Multithreading, by the way, is another advantage for which we chose Selenide. There is no big layer of boilerplate; you write a minimum of code. None of the extra scaffolding that Selenium needs to get a project ready to run: everything required to run the tests comes out of the box.
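The article does not name the test runner, so here is a minimal sketch assuming JUnit 5 (class names and locators are invented): with parallel execution switched on via junit.jupiter.execution.parallel.enabled=true in junit-platform.properties, Selenide binds a separate browser to each thread, so the test classes need no synchronization code of their own.

```java
import org.junit.jupiter.api.Test;
import org.junit.jupiter.api.parallel.Execution;
import org.junit.jupiter.api.parallel.ExecutionMode;

import static com.codeborne.selenide.Condition.visible;
import static com.codeborne.selenide.Selenide.$;
import static com.codeborne.selenide.Selenide.open;

// Test methods of this class run concurrently; Selenide keeps its WebDriver
// per thread, so each method gets its own browser session.
@Execution(ExecutionMode.CONCURRENT)
class OperatorPanelSmokeTest {

    @Test
    void callQueueIsShown() {
        open("/operator");               // hypothetical relative URL
        $("#calls-queue").shouldBe(visible);
    }

    @Test
    void settingsAreShownInAdminPanel() {
        open("/admin");                  // hypothetical relative URL
        $("#settings").shouldBe(visible);
    }
}
```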

In parallel with updating and writing new tests, I ran training sessions for the folks from the testing department, helping them move from manual to full-stack testing, so the project's automation ranks grew. The stack we use has a low entry threshold, and the Internet is full of documentation and videos on how to solve typical problems. A month later, a couple of testers who could take on automation tasks joined me.

Working as a full team, we were able to optimize regression testing. Where regression used to take the team 7 days, the same tasks are now completed in 4 days, i.e. we cut the testing time almost in half.

There are many scenarios ahead that have yet to be automated. We have automated the smoke tests almost completely and have now moved on to the most labor-intensive testing scenarios. This will help reduce regression time even further.

Naturally, we will keep developing the testing infrastructure. We plan to introduce a mock server. Right now we test everything on a real environment, generating test data, which takes time and resources. A mock server will let us run autotests without that preliminary preparation (we have written about the mock server for another project).

We also plan to implement test recording, so that an autotest for a TestRail scenario can be written from a draft obtained by simply clicking through the interface in the browser. The output is a kind of autotest prototype.

Article author: Yuri Kudryavtsev, Specialist in automated software testing.

PS We publish our articles on several Runet sites. Subscribe to our pages on VK, FB, Instagram, or the Telegram channel to learn about all our publications and other Maxilect news.
