Do Autotests Dream of Electric Bugs?

Lately, test automation has been touted as a "silver bullet" for all of a project's problems. Many teams start automating spontaneously and lightly, without weighing the pros and cons, the maintenance costs, and the payback.

In general, test automation is an expensive and highly specific tool, so it must be approached with the proper level of maturity of both the code and the project itself. Otherwise you can spend millions of hours and money while the effect is microscopic or nonexistent.

In this article I tried:

  • to highlight the growing pains of test management that strives to automate everything that isn't nailed down,
  • to explain how test automation launched without a detailed analysis of its scope and proper preparation can hurt the project budget,
  • to compile a roadmap for preparing a project for automation.


The trend now is that test automation is used to plug holes on every project, essentially hammering nails with a microscope. It has reached the point where a tester without automation skills finds it hard to get a job: most vacancies don't mention test analysis or test design skills, but do require experience automating something.

I want to believe that in time this obsessive desire to automate everything at once will pass. In the meantime, a checklist would help a lot in determining whether a project needs autotests and whether the project is ready for them.

With these thoughts in mind, I put together a talk for Strike-2019 and then wrote this post based on it.

We will talk about large, serious end-to-end autotests that automate regression testing of the UI and web services, and everything connected with them. We will not touch on unit tests, which developers write (or should write): that is a separate testing layer, and many articles have already been written about it.

 
I will not name the project; I will only say that it is an information system designed for several tens of millions of users. It is integrated with several government portals, as well as with dozens, if not hundreds, of regional and commercial ones. Hence the increased requirements for reliability, fault tolerance, and security: in short, so that the whole thing "keeps spinning and doesn't fall over."

Our credo on all LANIT testing projects is continuous improvement: everything that lets us test "faster, better, higher, stronger" saves testers' time and effort and, as a result, the budget. On this project we had implemented probably every best practice that helped us meet our deadlines and commitments. Of the big unimplemented ideas, only regression automation remained. The topic had been in the air for quite a while, and for a long time we resisted it with all four limbs, because the profit was far from obvious. But in the end we decided to automate. And then came a long jump into cold water.
 

A small digression: automation approaches


There are two main ways to automate UI testing. 


Automation by manual testers


You don't have to look far for examples: it's all on Habré. I will not name the company. Anyone interested in the topic has probably seen those hour-and-a-half webinars about how great everything is: the whole team of manual testers learned Java, went off to automate, everything is covered, everything works, the bright future has already arrived. Such an approach exists.

Project approach


And there is a second approach: a team of automation engineers is hired, with experience and with knowledge of OOP, and the automation is done by this team, while the manual testers act as customers and domain experts. Yes, manual testers can move into the automation team after training and selection, but they never combine the two roles at once.

Below I describe the features of these approaches, deliberately not marking them as "pros" or "cons"; everyone can assign the sign for themselves.

Features of automating "on your own"



1) When we automate with manual testers, we get a quick effect: "Look, this test used to take me a day; now a robot can replace it. It takes two days, but I'm not involved." Naturally, this raises qualifications and broadens specialists' horizons: they begin to understand code. But there is no clear, tangible result for the business. How many hours were spent on development, and how many would an experienced engineer have spent? How many hours are saved? How often will the test case run? Where? Who will maintain it? How much does that cost? For me this is a problem, because, unfortunately, I have not yet met customers willing to pay endlessly just for a process. A clear, tangible result is always important to me.

2) No deadlines. On the one hand, that's good: no one is pushing the team with "let's automate everything quickly, let's learn quickly," so no pressure builds up. A tester keeps testing by hand and calmly dives into automation. On the other hand, with no deadlines we can't ask about results, and we don't know when anything will be ready.

3) No code maintenance or continuity. On the one hand, different approaches and experiments give rise to better ways of writing tests; sometimes rewriting from scratch can speed an autotest up several times. But that is hugely labor-intensive. Besides, who will maintain all this if the specialists leave the project? And they do leave, for another business area or another team. Also unclear.

Features of the project approach



1) In this case we are already talking about a project. And what is a project? Resources, time, money. Accordingly, the budget is calculated with all of the project's nuances taken into account; all risks and all additional work are estimated. Only after the budget is agreed is the decision made to launch.

2) Consequently, the preparation phase will likely not be quick, since the budget calculation has to be built on something.

3) Naturally, higher demands are placed on the specialists who will take part in the project.

4) I will also list complex infrastructure solutions here, but more on that later.

5) Modernization of the existing testing and release processes. Autotests are a new element for the team; if they weren't provided for earlier, they need to be integrated into the process. Automation engineers must not hang off to the side of the project.

6) The project approach brings regularity and consistency, although, on the other hand, it can be slower to get going than the first approach.

7) And reporting. Lots of reporting. Because you will be asked to account for any budget. Accordingly, you must understand how the autotests are doing (badly, well), what the trends are, what needs to be scaled up, and what needs to be improved. All of this is tracked through reporting.

Long road to a brighter future


Disclaimer: we were this smart from the very start (spoiler: we weren't).


Here is a classification of the problems we ran into. I will examine each one individually, on purpose, so that you take these "rakes" into account. Left unsolved at the start of a project, these problems will directly affect, at minimum, its duration; at maximum, they will inflate its budget or even kill the project.


  • Different teams - different approaches.
  • Automation engineers poorly immersed in the functionality.
  • Non-optimal test case structure.
  • Framework documentation.
  • Communication problems.
  • Timely purchase of licenses.

Different approaches to automation



Attempt number one


Our first attempt followed the first model (see approach No. 1). A small (but proud) initiative group of testers decided to try automation. By and large, we just wanted to see what would come of it. Now, of course, we wouldn't do it this way, but back then we had no experience, so we decided to try.


We had one team lead with automation experience, three testers burning with the desire to automate, and plenty of enthusiasm for mastering this path. True, the team lead was a newcomer and couldn't devote much time to the project, but the positive outcome of his work was that we wrote our own framework. We looked at the existing frameworks: they were either expensive and polished, or free with documentation that amounted to "finish with a file to taste after assembly." For a number of reasons we couldn't use them, so we decided to write our own. The selection and writing process is a topic for a separate article, or even several.

It's not that this attempt failed; it simply outlived itself and ended when we realized we needed a budget, additional people, skills, and a different team organization. We automated about 100 cases, saw that it worked, and wrapped it up.

Attempt number two



Nothing tempts like a sandwich you've already bitten into.

After a while, we again returned to the topic of automation.

But, remembering the first experiment, we switched to approach No. 2. This time we had a rather skilled team that had automated more than one project. And here we ran into another disaster: this team simply followed the lead of the UI testing team. How did that happen?

- We want to automate this!
- Maybe we should think it over first.
- No, we don't want to think anything over; we want these autotests.

With titanic effort they automated it, and it even worked. But over time the stability of the runs began to decline, and when we assembled our own team of automation engineers and started handing the project over to them, it turned out that half the tests were held together with crutches, and half checked what the machine decided to check rather than what the manual tester intended.

And all because the autotests ran against raw code that was being patched daily. In other words, the initial set of cases was wrong. We should have immersed ourselves in the topics we wanted to automate from the standpoint of their automation (tautology intended): approached the cases we hand over for automation very critically, discarding some of them at some point, splitting up others, and so on. But we didn't.

What was the result? We automated another chunk, about 300 cases, after which the project ended, because the runs weren't stable and no one understood how to maintain them either. And we learned how not to do it... for the second time.

Attempt number three



We approached the third attempt like a shy doe approaching a watering hole.

We saw risks (and blown deadlines) all around. We took a critical approach, first of all, to the test cases and their authors, the UI testers, as the customers of the process. We found an equally critical-minded team of automation engineers and launched a properly budgeted (or so we thought), fully prepared (ha ha) project.

The rake was already lying in wait for us.

Automation engineers poorly immersed in the functionality



At the first stage (from here on I am describing the third attempt), while communication was still poorly established, the automation engineers worked behind a kind of screen: they received tasks and automated something over there. We ran the autotests, saw from the statistics that everything was bad, and dug into the logs. And there, for example, a test failed on a spelling error in the name of an uploaded file. Obviously, a tester checking this functionality by hand would file a minor bug and move on. The autotest, however, fails and takes down the whole chain built on top of it.

But when we began immersing the automation engineers in the functionality and explaining what exactly each case checks, they began to understand these "childhood" mistakes: how to avoid them and how to work around them. Yes, there are typos and minor inconsistencies, but the autotest no longer fails; it simply logs them.
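This "log it, don't fail" behavior can be sketched as a soft-check helper. A minimal, hypothetical example, not our actual framework: only blocking checks abort the chain, while minor discrepancies are collected like a manual tester filing a minor bug and moving on.

```python
class SoftCheck:
    """Collects minor discrepancies instead of failing the test chain."""

    def __init__(self):
        self.issues = []

    def expect(self, condition, message):
        # Minor check: record the problem, keep the chain running.
        if not condition:
            self.issues.append(message)

    def require(self, condition, message):
        # Blocking check: without this, the rest of the chain is meaningless.
        if not condition:
            raise AssertionError(message)

# Usage: a typo in the uploaded file name is logged, not fatal.
check = SoftCheck()
uploaded_name = "reprot.pdf"  # typo produced by the system under test
check.require(uploaded_name.endswith(".pdf"), "upload produced no PDF")
check.expect(uploaded_name == "report.pdf",
             f"file name mismatch: {uploaded_name!r}")
print(check.issues)  # → ["file name mismatch: 'reprot.pdf'"]
```

At the end of the run, the accumulated issues go into the report as minor findings instead of failed tests.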

Non-optimal test case structure



This was probably our biggest headache. It caused the most problems, cost the most time, and lost us the most hours and money. If we ever automate anything else, this is the problem we will now solve first.

Our project is quite large: several dozen information systems run within it, grouped into working groups. The case-writing standards are supposedly the same for everyone, yet in one group a certain element is called a "function" and in another an "authority"; the automation engineer reads both "function" and "authority" and falls into a stupor. That's just one example; in reality there were hundreds of such situations. We had to standardize and tidy all of it up.

What else did we run into besides such ambiguities? Non-atomic test cases: cases in which some steps can be performed in several different ways. For example, the precondition says "step 2 can be performed under this authority or under that authority," and step 3 says, depending on the authority used, "press either button A or button C." From a manual tester's point of view everything is fine; they understand how to execute it. The automation engineer does not: in the bad scenario they pick a path themselves, and in the good one they come back asking for clarification. We spent a fair amount of time converting non-atomic test cases into atomic ones.
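One common way to make such a case atomic is to expand the variable step into explicit parameters, so each combination becomes its own fully determined scenario. A sketch under assumed names (the "operator"/"supervisor" roles and the buttons are illustrative, not from our real system):

```python
# The non-atomic case reads: "perform step 2 under authority X or Y;
# in step 3 press button A or C depending on the authority used."
# Making it atomic means enumerating every valid combination,
# so each generated case has exactly one execution path.
AUTHORITIES = ["operator", "supervisor"]            # hypothetical roles
BUTTON_FOR = {"operator": "A", "supervisor": "C"}   # button each role presses

def atomic_cases():
    """One unambiguous scenario per valid (authority, button) pair."""
    return [{"authority": role, "press": BUTTON_FOR[role]}
            for role in AUTHORITIES]

for case in atomic_cases():
    print(case)  # each dict is a scenario an autotest can follow verbatim
```

The price is more cases to maintain, but each one now runs deterministically without the autotest having to "choose a path."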

Framework Documentation



You always need to think about those who come after you: how they will read and analyze your code. It's good if they are competent engineers and programmers; bad if they are not. Then you may also face having to untangle the legacy of past "victories," document it anew, bring in extra people, and spend extra time.

Communication problems



1. Lack of interaction regulations.

An automation team arrives; they don't know how to communicate with the manual functional testing team, and nobody really knows who these people are. Yes, the leads talk to each other, but everyone else are just project neighbors.

2. Presence of interaction regulations.

Eventually the regulations were written, but the teams had already worked apart for some time, and when the regulations appeared, they treated them as the only tool of interaction. Everything outside them was ignored.

That is, at first people simply didn't know how to communicate: they seemed to share the same chat rooms but didn't know whether they were allowed to ask questions there. And having worked in such conditions for a while, they formed their own informal, closed communities: "we are the manual testers," "we are the automators." How do we communicate? We have regulations, so strictly by the regulations.

Timely purchase of specialized software licenses


At some point it turned out that developing some of the cases required paid software for which we had no license. We had to buy it urgently (again, extra costs in money and downtime).

Roadmap



So now we have a roadmap for launching such projects. It consists of stages, and each stage is broken into specific points.

Preliminary stage



We need a team lead


A team lead or architect: in short, someone who will be with us all the way, who understands automation and is technically savvy and competent. Ideally a developer with about five years of experience in the programming languages used on our project, because our framework will work with our project one way or another, and it is best if both use the same technology stack.

There must be a focus group


Moreover, this should not be a focus group of automation engineers. It should be the people who will make the decisions later. It is better to win them over at the very beginning, so that they understand what decisions are being made, why, and what for.

Assessment of the status of the test case base


I already talked about assessing the state of the test case base above; accordingly, this is also done at the preliminary stage.

We find out what is not subject to automation


There is often a desire to automate everything that moves (and everything that doesn't: move it, then automate it). In practice, about 40% of test cases are usually so expensive to implement that they will never pay off. So you need to be very clear about what you want: automate everything and burn through the budget, or automate the specific slice of functional testing that will actually help you.

Evaluation of a pilot project


At the preliminary stage we estimate the pilot project (how much it will cost) and run it on the most difficult cases (yes, the most difficult; that's deliberate).

Pilot



Normalization of test cases


The collected pool of cases is normalized: ambiguities and unnecessary preconditions are eliminated.

Framework preparation


We write our own framework, extend an existing one, or use a purchased one.

Infrastructure


We are preparing infrastructure solutions.

Here it is very important not to slip up: at first you will have an irresistible urge to run the autotests on some home-grown computer under a desk. Don't (the tests will slow down and die the moment someone bumps the computer or spills coffee on it). Use ready-made infrastructure solutions and virtual machines, and follow established practice. Accordingly, size the capacity right away, both for this project and for the next, bigger one. For that you need an automation specialist.

Interim results and adjustments


We write the first cases and measure how fast all this happiness goes. We estimate additional staffing needs, since we now understand how long these cases take to automate: we automated five, now we need to figure out how many people it takes to automate, say, another 5,000. Likewise any additional licenses, and hardware both for the stand that will run the autotests and for the stand of the application itself. And, finally, how much the test cases need reworking, i.e., just how bad things are.

Summing up the pilot


We sum up, write a report, and decide whether we are going into automation or not.

I wrote earlier that it may turn out that we don't go. If, for example, the payback period is 18 years and your project will last 5, you should ask yourself why you need this.
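The break-even check behind that decision is simple arithmetic. A sketch with illustrative numbers, not our project's actual figures:

```python
def payback_years(build_cost, yearly_maintenance, yearly_savings):
    """Years until automation pays for itself.

    build_cost         - one-off cost of writing the autotests
    yearly_maintenance - cost of keeping them green each year
    yearly_savings     - manual regression effort they replace each year
    Returns None if the suite never pays off
    (maintenance eats the savings).
    """
    net_yearly_gain = yearly_savings - yearly_maintenance
    if net_yearly_gain <= 0:
        return None
    return build_cost / net_yearly_gain

# Illustrative: the suite costs 900 person-days to build, 150 per year
# to maintain, and saves 200 person-days of manual regression per year.
print(payback_years(900, 150, 200))  # → 18.0
```

With those numbers the payback is 18 years, so on a 5-year project this scope should not be automated; shrinking the scope or cutting maintenance cost changes the answer.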

Launch stage



The items are listed sequentially, but in reality they should all be done in parallel.

  • We start team selection.
  • We appoint the leads.
  • We prioritize the pool of cases.
  • We normalize the test cases.
  • We resolve the "infrastructure difficulties."
  • We write regulations and instructions, establish communications, and eliminate bottlenecks.
  • We improve the framework so that several autotests can run simultaneously and groups of tests can run in parallel.
  • We build a reporting and statistics module (per-run and cumulative).
  • We begin writing autotests.
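The "per-run and cumulative" statistics can be illustrated with a minimal sketch (a hypothetical data model, not our actual module): each run stores pass/fail counts, and the module derives both the single-run pass rate and the aggregate across all runs.

```python
from dataclasses import dataclass

@dataclass
class Run:
    """Result of one regression run."""
    passed: int
    failed: int

    @property
    def pass_rate(self):
        # Per-run ("one-time") statistic.
        total = self.passed + self.failed
        return 100.0 * self.passed / total if total else 0.0

def cumulative_pass_rate(runs):
    """Cumulative statistic across all runs so far."""
    passed = sum(r.passed for r in runs)
    total = sum(r.passed + r.failed for r in runs)
    return 100.0 * passed / total if total else 0.0

# Illustrative history of three nightly runs over a 1,500-case suite.
history = [Run(1380, 120), Run(1410, 90), Run(1425, 75)]
print(f"last run: {history[-1].pass_rate:.1f}%")   # → last run: 95.0%
print(f"overall:  {cumulative_pass_rate(history):.1f}%")
```

In a real module the same data would also feed trend charts, so management can see whether stability is growing release over release.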

Main stage



At the main stage everything is simple (ha ha): autotests get written, support is provided, run results are evaluated, management decisions are made, capacities are scaled up, streams are added, and, above all, there is communication and more communication with the UI team. Everyone must sail in the same boat and row in the same direction.

Maintenance stage



The maintenance stage differs only slightly from the main stage. The significant difference is its duration; it also has a much smaller share of newly developed autotests, by our estimates 6-10% per release. Otherwise it is very similar to the main stage.

What is the result?


We automated about 1,500 end-to-end cases. The share of successful runs has held at 92-95% for several releases now.

The cost of regression has dropped almost 2.5-fold. The runs themselves take 3-4 hours and happen at night, so that ready results are available in the morning.
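Whether a suite fits into the night window is, again, simple arithmetic. A sketch with assumed numbers (the per-case time and stream count are illustrative, not measured on our project):

```python
import math

def wall_clock_hours(cases, minutes_per_case, parallel_streams):
    """Estimated run duration when cases are split across parallel streams."""
    total_minutes = cases * minutes_per_case
    return total_minutes / parallel_streams / 60

def streams_needed(cases, minutes_per_case, window_hours):
    """Minimum number of parallel streams to finish within the night window."""
    total_minutes = cases * minutes_per_case
    return math.ceil(total_minutes / (window_hours * 60))

# Illustrative: 1,500 end-to-end cases at ~2 minutes each.
print(wall_clock_hours(1500, 2, 12))  # hours on 12 parallel streams
print(streams_needed(1500, 2, 8))     # streams needed to fit an 8-hour night
```

Estimates like this also feed the infrastructure sizing done at the preliminary stage: the stream count dictates how many virtual machines the run stand needs.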

The details of the technical implementation are covered in a series of articles by my colleagues.


If we were starting now, with everything I've written about in mind, I think we would save a great deal of nerves, time, and budget.

Thank you for your attention. Ask questions and we'll discuss them.


And we are also waiting for young testers to join our team!

