What we learned from testing the state information system

Hello everyone! 

I head the testing sector in the system analysis and testing department of LANIT's corporate systems division. I have been in this field for 14 years. In 2009, I was faced with testing a state information system (GIS) for the first time. Both for LANIT and for the customer it was a huge and significant project, and it has now been in production operation for more than nine years.


This text will introduce you to the approach to testing GIS that is used in our company.


Before LANIT, I worked in a testing group of five people, where tasks were assigned by the team lead. When I joined LANIT, I was asked to manage a geographically distributed team of four testers who were brought in at the start of a project to test a state information system. As the project developed, the number of testers in the group grew in proportion to the growth of functionality.

Chapter 1. Start of the first GIS testing project


When we started working with GIS, the first thing we had to deal with was the large volume of functionality (several dozen subsystems, each with up to hundreds of functions) that needed to be tested in a short time. The team's task was not to get lost in this scope of functions and to minimize the risk of missed defects.

The normative documentation on which the technical specifications were based was constantly changing, and the whole team had to adapt to the innovations: we revised the technical specifications for development and the priorities of the project (and, as a result, the testing plan changed).

It was difficult for the analysts to adapt to the frequent changes in regulatory documentation, which led to a lack of understanding of how to keep technical specifications up to date, how to always have a set of specifications relevant to each version of the system, and how and where to record the degree of influence of a change in one specification on the many other related documents.

Since the analysts' output was used by developers and testers, questions about the relevance of technical specifications, the traceability of their history, and whether the set of specifications matched the upcoming release version were acute for all members of the project team. The consequences of confusion in the documentation and in the versioning of technical specifications affected the cost of the project.

Sometimes the situation turned 180 degrees. You will agree: when a train is rushing at high speed, it cannot change direction sharply without consequences.

I regularly attended meetings and understood the reasons for the changes on the project. The hardest part was explaining to remote testers why we had tested the new registry for a month, and now we needed to forget all of it and prepare to test the completely redesigned functionality of that registry. People began to feel like cogs in a giant machine and to see their work as completely useless.

At first, such changes annoyed the team and strongly demotivated it. But the team accepted the fact that it could not influence the changes in the technical specifications; it could only learn to work with them. In that difficult period, the testing team got a new task that is usually absent on smaller projects: testing the requirements.

As a result, the disadvantages of changing requirements turned into advantages for the testers: a new testing task and the ability to influence the final result of the project.

In addition to interacting with the analysts, the testing team turned out to depend on communication with the external systems we had to integrate with. Not all of them were ready to provide a test environment, inform us of their release dates, or exchange information about changes in their services. Such desynchronization in communication, or a careless attitude toward notifying others about changes in their web services, led to production errors and difficulties in integration testing. Testers had to establish communication with the external systems, develop integration testing skills, and involve developers in implementing stubs.

The entire testing team on the project also had to immerse itself in the development team's production process. At the start of the project, the development team began looking for new approaches to organizing its work and introduced the Feature Branch Workflow and Gitflow branching models. On smaller projects, development had previously managed without a multitude of branches, and everyone was comfortable with that. But after failing for a couple of months to stabilize a version for the next demonstration of an intermediate project stage to the customer, and after analyzing the reasons, the development manager and the architect concluded that the software delivery process had to change. From then on, Feature Branch Workflow and Gitflow were actively used on the project. This created another new task for testing: studying how these branching models work in order to adapt to the software delivery process.

A GIS project is divided into functional blocks, each of which includes a set of components closely related to each other by business logic and/or performing an independent technical function. At the start of the project all testers checked the newly arriving functional blocks; everyone on the team was interchangeable and knew all the blocks equally well. As the project expanded, the number of testers had to grow, and the team had to be divided into groups. That growth led to splitting into test groups and assigning project roles within each group.

As the project developed, features of state information systems began to appear. 

Chapter 2. Features of GIS. How testers live and work with them


First of all, a large GIS is subject to increased requirements for reliability and load: the system must operate 24/7, malfunctions must not lead to data loss, recovery time must not exceed 1 hour, response time must not exceed 2 seconds, and much more.

Since the system was a web portal, the testers had to dive into testing multi-user web portals and build their test design and testing process around the peculiarities of the web interface.

A web application can be used by a large number of users at the same time. We had to predict the load model and conduct load testing of the open part of the GIS used by all users.

Users can have different access levels. We had to test the matrix of user roles in the application administration subsystem using test design techniques.

Several users can access the same entity, which leads to concurrent access. So that the input of one user does not overwrite the data of another, we had to analyze the situations in which data in users' personal dashboards can be changed simultaneously and include in the tests checks of correct processing with diagnostic messages.
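To make this kind of check concrete, here is a minimal sketch (in Python) of how two simultaneous edits of the same record can be simulated. The endpoint, the payload, and the expected 409 conflict response are assumptions for illustration, not the project's actual API.

```python
import threading
import requests

# Hypothetical endpoint of a record in a user's personal dashboard on a test stand
URL = "https://test-stand.example/api/v1/dashboard/records/42"
results = []

def update(author):
    # Both "users" send an update based on the same original version of the record
    resp = requests.put(URL, json={"comment": f"edited by {author}", "version": 7}, timeout=10)
    results.append(resp.status_code)

threads = [threading.Thread(target=update, args=(name,)) for name in ("user A", "user B")]
for t in threads:
    t.start()
for t in threads:
    t.join()

# Exactly one update should succeed; the other should receive a conflict diagnostic
assert sorted(results) == [200, 409], results
```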

One of the features of the system was the use of the SphinxSearch search engine, which the testing team did not know how to work with. To understand the intricacies of testing Sphinx, we had to consult with the developers and understand how data indexing works.

Testers mastered the specifics of testing search by word forms, word fragments, synonyms, and search within attached documents, and began to understand why newly created data did not appear in search results and whether that was an error.
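As an illustration of what these checks can look like, below is a rough sketch of querying Sphinx directly over SphinxQL (Sphinx exposes a MySQL-protocol interface, by default on port 9306). The index name and the search phrase are made up for the example.

```python
import pymysql  # SphinxSearch speaks the MySQL protocol (SphinxQL), usually on port 9306

# Connection details point at the Sphinx instance of a test stand (placeholder values)
conn = pymysql.connect(host="127.0.0.1", port=9306, user="")
try:
    with conn.cursor() as cur:
        # "documents_idx" and the word forms are assumptions for illustration
        cur.execute("SELECT id FROM documents_idx WHERE MATCH('contract | contracts') LIMIT 10")
        found_ids = [row[0] for row in cur.fetchall()]
    # A freshly created record shows up here only after the (delta) index is rebuilt,
    # so a "missing" search result was often an indexing delay rather than a defect.
    print(found_ids)
finally:
    conn.close()
```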

The project had an applied administration subsystem that included not only user registration but also a matrix of user roles, which was configured in the personal accounts of organization administrators. The number of role-matrix combinations to check was huge, the number of organization types was also considerable, and the number of combinations grew multiplicatively. We had to change the familiar approach to designing tests that we had used earlier on small projects.
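A toy sketch below shows why the number of checks explodes: even three roles, four actions, and three organization types already give dozens of combinations (all the names here are invented for the example). Test design techniques such as pairwise combination and equivalence classes were what let us cut this space down.

```python
from itertools import product

roles = ["Specialist", "Organization administrator", "Observer"]  # assumed role names
actions = ["create", "edit", "publish", "annul"]
org_types = ["federal", "regional", "municipal"]                   # assumed organization types

# The full combination space grows multiplicatively: 3 * 4 * 3 = 36 checks
checks = [
    {"role": role, "action": action, "org_type": org_type}
    for role, action, org_type in product(roles, actions, org_types)
]
print(len(checks))  # 36

# In practice only the combinations selected by test design techniques
# (pairwise combinations, equivalence classes) were executed, not all of them.
```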

Since the system had a web interface, cross-browser testing was needed, although it was not originally planned. When the project was just starting, Internet Explorer 7.0 was the only browser that supported domestic cryptography, and the majority of users used it. Therefore, at the start of the project, only IE of this version was used to test the logic and functionality of the personal accounts, while the open part of the portal had to support all browsers that existed at the time. Nevertheless, at that moment no one thought about cross-browser testing.

When I was asked, "How does the system behave in all known browser versions?", I was, to put it mildly, in a panic: the test model was huge (about 4,000 test cases), the regression suite contained about 1,500 test cases, and the testing team ran all of it exclusively in the single default browser. We had to solve this very quickly and with some ingenuity in order to meet the deadlines for the first release and cover the main browser versions with tests.

There were few articles on the Internet then about testing and developing test models. At that time, our team did not understand how to create a large test model, where to store it, and how to keep it up to date. It was also unclear how to move away from exploratory testing, which could become endless, and there were no resources for an endless test: neither people nor time.

After the GIS was put into trial operation, and then into production, a new task appeared: handling user incidents.

Before a full-fledged GIS user support service was created, user requests were first handled by the testing team, as the people most immersed in the details of the system, who had to combine the main tasks of testing new improvements with processing incoming incidents in time, on top of which an SLA was imposed.

The testing team had not encountered such a task before. The flow of incidents had to be processed, systematized, localized, corrected, verified, and incorporated into the new release cycle.

The level of understanding and maturity of the operation process grew and improved as the system itself grew. 

I have listed only part of the features that the testing team encountered while working with GIS.

Chapter 3. Problems of testing GIS and ways to solve them. Recommendations for test managers.


While the testing team worked on several large GIS, we came up with recommendations for test managers. Later they were transformed into our department's methodology for testing such systems, and they continue to improve as we solve new problems on projects of a similar scale.

What to do when there is a lot of functionality?


Do not panic: break the functionality into blocks / modules / functions, involve the analyst in reviewing the result, and make sure your picture of the functional blocks is correct.

We recommend creating:

  1. A knowledge matrix.

    A table listing the functional blocks / modules / subsystems, including those already in production, and showing how well each tester in the group knows them. It makes clear who can be assigned to what and where knowledge needs to be strengthened.
  2. A coverage matrix of functional requirements by blocks / modules / subsystems.

    A table mapping the functional blocks / modules / subsystems to the requirements and the tests that cover them, showing what is already covered by tests, what remains to be covered, and with what priority.

What to do with matrices of knowledge and coverage of functional requirements?


The functionality does not get smaller. There is still a lot of it, just in a new representation (in the form of a matrix). You need to determine which functions are the most important from a business point of view and which cannot be given to users in a raw state. This is where the prioritization of functionality begins. Ideally, the analyst helps the tester with this; at a minimum, he reviews the correctness of the priorities set by testing.

The most important functions / blocks / modules get the highest testing priority, less important ones are covered by second-priority tests, and the rest are third priority; if deadlines are pressing, their testing can be postponed to a quieter time.

This gives us the opportunity to test the functionality that really matters to the customer. We put the huge number of functions in order; we understand what is covered by tests and what remains to be covered; we know where testing needs to be strengthened; in case of illness or dismissal of a responsible tester we understand which tester should receive their improvements to test (according to the knowledge matrix); and we know what new, interesting functions / modules / subsystems we can offer a conditional Vasya, Petya, or Lisa when they are tired of testing the same thing. In other words, I got a visual tool for motivating testers who want to learn something new on the project.

What to do when requirements do not keep their history, are confusing or duplicated, and how do testers work with this?


We recommend implementing the requirements testing process on the project. The earlier a defect is discovered, the lower its cost.

Testers, distributed according to the knowledge matrix, begin to study and verify technical specifications as soon as they are ready for development. To make it clear to everyone what counts as an error in the requirements, a set of rules for analysts, "Recipe for Quality Requirements", was developed, which they tried to follow when writing requirements, and templates for technical specifications were created so that they are described in a single style. The specification format rules and the recommendations for describing requirements were also given to the testers so they know what errors to look for in the requirements.

Of course, the tester's main task was to find logical inconsistencies or unaccounted-for effects on related functions / subsystems / modules that the analyst could have missed. Detected defects were recorded in the bug tracker and assigned to the author of the requirement; the analyst would pause the development and/or inform the tester and the developer in chat that the requirement would be amended in accordance with the defect (so as not to slow down development), make the corrections, and publish the corrected version of the requirements. The tester then verified the fix and closed the defect. This procedure gave the team confidence that the detected problem would definitely not come up in testing after a couple of weeks of development.

In addition to early detection of defects, we got a powerful tool for collecting metrics on the quality of the analysts' work. With statistics on the number of errors in the requirements, the lead analyst of the project could take measures to improve the quality of work in his group.

What to do when it is necessary to carry out load testing?


You need to study the load requirements, come up with a load model, develop test cases, coordinate the load test model with the analyst, and develop load scripts with the involvement of specialists competent in load testing.

Of course, you cannot guess the real load exactly with a test model, but for a more accurate fit you can involve, in addition to the analyst, an architect or a DevOps specialist, who, after analyzing the available information, statistics, and metrics, can suggest what other cases the proposed load model needs.

It is also worth establishing a process for running load tests, collecting the results, and passing them to the developers to eliminate bottlenecks.

Load testing should be carried out regularly before each release, and the load model should be changed periodically in order to identify new bottlenecks.
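As a sketch of what a load script for the open part of the portal might look like, here is a minimal Locust scenario in Python; the paths, task weights, and pauses are assumptions that would come from the agreed load model, not real project values.

```python
from locust import HttpUser, task, between

class OpenPortalVisitor(HttpUser):
    # Pauses between actions simulate a real visitor reading the page
    wait_time = between(1, 5)

    @task(3)
    def open_main_page(self):
        self.client.get("/")

    @task(1)
    def search_registry(self):
        # A hypothetical public search in the open part of the portal
        self.client.get("/search", params={"query": "license"})
```

Such a script can be run with, for example, `locust -f load_portal.py --host https://portal.example`, ramping the number of simulated users up to the figures from the load model.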

What to do when you need to conduct integration testing and have no experience with it?


There are basic ways: for example, you can take training courses on REST API testing, read articles on the Internet, exchange experience with colleagues over Skype with a demonstration of the process, or hire a specialist well versed in REST API testing into the testing group.

There are many ways to immerse yourself in this type of testing. In my team, an experienced specialist was hired who later trained me and the entire testing team, developed manuals on what to look for when testing a REST API and how to design tests for verifying integration, and held webinars for the whole team with a demonstration of the testing process.

We came up with practice tasks on which everyone could train and immerse themselves in this process. The material developed over the years keeps improving, and learning and diving into REST API testing now takes one to two weeks, whereas earlier it took a month or more, depending on the complexity of the project and the size of the test model.
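For illustration, a minimal integration check of the kind described in such manuals might look like the pytest sketch below; the base URL, resource path, and expected fields are placeholders, not the real service contract.

```python
import requests

BASE_URL = "https://test-stand.example/api/v1"  # placeholder address of the integration test stand

def test_registry_entry_contract():
    """The external service returns an entry with the fields our system relies on."""
    resp = requests.get(f"{BASE_URL}/registry/entries/1", timeout=10)
    assert resp.status_code == 200
    body = resp.json()
    for field in ("id", "status", "publishedAt"):
        assert field in body, f"missing field: {field}"

def test_unknown_entry_returns_404():
    resp = requests.get(f"{BASE_URL}/registry/entries/999999", timeout=10)
    assert resp.status_code == 404
```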


How not to get confused by code branches, stands, and deployments, and test the right code?


While a GIS is at the initial stage of development, there are only two code branches: master and release. The second is branched off at the stabilization stage for final regression tests and for fixing individual regression defects.

When the release branch went to production and the next development iteration began, at some point we decided to parallelize the development of new tasks so that a larger task planned across releases could be completed on time. At some point there were 3-4 or more such branches, and more than three test stands were deployed so that testing of future releases could start as early as possible.

Testers assume that the infrastructure specialist who installed, say, revision No. 10001 on one of the test stands did everything correctly and that they can start testing. The infrastructure specialist deployed from a code branch and reported that the stand was up, the code was installed, and it could be tested.

We start the test and understand that:

  • there are errors that have already been fixed;
  • the functionality of the block in question differs significantly from the similar functionality deployed on another test stand in preparation for the next planned release, although there should not have been any modifications to it within the delivered code branch;
  • we start registering defects, the developers send them back, a holy war flares up in the project chats, and we try to figure out what was actually installed and why it is not what we expected.

We analyzed the situation and found out that the developers had not given the infrastructure specialist instructions on which branch to deploy from; the employee built from the develop branch, while the developer had managed to merge only part of the code from the feature branch into develop.

A tester who did not understand branch management at all received the task and a link to the stand, rushed to test, spent time, and logged a pile of defects, most of which were irrelevant because of all this confusion.

What we did to avoid similar situations in the future:

  • the developer prepares instructions for the infrastructure specialist indicating which branch to deploy from; the instructions are attached to the task in Jira;
  • the infrastructure specialist does not have to guess and does exactly what the instructions say;
  • commits in Git are linked to the corresponding Jira task so that the contents of a code branch can be traced;
  • the Jira task records which branch and which revision were installed on which stand;
  • following the Gitflow model, hotfixes are merged back into develop so that the fixes are not lost (a simple pre-test check of what is deployed on a stand is sketched below).
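A quick pre-test check helps catch a wrong deploy before any time is spent on it. The sketch below assumes the application exposes its build information (branch and revision) on some endpoint such as /build-info, which is an assumption for illustration; the expected values would come from the deploy instructions in the Jira task.

```python
import requests

# Expected values taken from the deploy instructions attached to the Jira task (example values)
EXPECTED = {"branch": "release/2.0.0", "revision": "10001"}

def stand_is_ready(stand_url: str) -> bool:
    """Compare what is actually deployed on the stand with what the task says should be there."""
    info = requests.get(f"{stand_url}/build-info", timeout=10).json()  # assumed endpoint
    return all(info.get(key) == value for key, value in EXPECTED.items())

if not stand_is_ready("https://test-stand-3.example"):
    raise SystemExit("Wrong build on the stand: do not start testing, clarify the deploy with the infrastructure specialist")
```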


What to do when cross-browser testing is needed but was not planned?


We recommend drawing up a testing strategy in advance, but since you have already missed that point, my experience will probably come in handy.

First, you need to understand which browsers are specified in the requirements. If that is settled but there is absolutely no time, look at statistics of the most commonly used browsers, for example, here. Then try to cover the three to five most popular browsers.

Since the project was large and the testing team was large, it was physically possible to assign each tester one of the popular browsers from the statistics. The tester runs his regression cases in that dedicated browser version, paying special attention to the layout and the clickability of buttons and links. The process looks like this: say there are 100 scripts in the regression test and the team has 5 testers; each takes 20 scripts, and each is assigned a browser. In one regression run, each tester checks his cases in one of the browsers. The coverage is not complete in the end, but since many scenarios repeat each other to one degree or another, the coverage percentage grows because parts of the regression scripts are run in different browsers.
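The distribution itself is trivial to script. Below is a small sketch using the numbers from the example above (100 regression scripts, 5 testers); the tester names and the browser list are placeholders.

```python
from itertools import cycle

testers = ["Anna", "Boris", "Vera", "Gleb", "Dina"]          # placeholder names for 5 testers
browsers = ["IE 7", "Firefox", "Chrome", "Opera", "Safari"]   # the popular browsers of the moment
regression_cases = [f"TC-{i:03d}" for i in range(1, 101)]     # 100 regression scripts

# Each tester is pinned to one browser for this regression run
browser_of = dict(zip(testers, browsers))

# Distribute the scripts round-robin, 20 per tester
plan = {tester: [] for tester in testers}
for case, tester in zip(regression_cases, cycle(testers)):
    plan[tester].append(case)

for tester, cases in plan.items():
    print(f"{tester} ({browser_of[tester]}): {len(cases)} cases")
```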

Of course, this did not give 100% coverage of all functionality, but it significantly reduced the risk of cross-browser defects in the main business scenarios reaching production.

Later, not only during regression but also when testing improvements and validating defects, we performed checks in different browsers, expanding cross-browser coverage.

Eventually we began to apply this approach of distributing testers across browsers already at the stage of testing improvements, without waiting for regression testing, thereby further increasing the share of checks covering different browser versions.

What we got:

  • optimized testing costs, both financial and time: in one time interval we ran both the regression test and the cross-browser checks;
  • found cross-browser defects earlier and could assess their Severity before the release;
  • increased cross-browser coverage without extending the regression schedule.

How to create, store, and keep a large test model up to date?


Quite quickly we faced the question of storing tests in a single repository, keeping them up to date, and being able to run test runs with marks of the execution result.

The team included employees with experience with the TestLink test management system, one of the few open-source test case management systems at the time, so it was chosen for the work. The system has a very simple graphical interface and a design without unnecessary frills. We quickly filled it with tests, and then the question arose of how to maintain them. At first a lot of resources were spent on updating the cases for every revision; this option turned out to be unworkable.

After consulting with the analyst and the testing team, we decided that it was not necessary to keep the entire large test model up to date all the time, because of the cost of maintaining it.

All cases were divided into folders in accordance with the functional requirements matrix; each functional module / subsystem kept its set of cases in a separate folder. This allowed us to structure the test cases visually. Keywords were created in TestLink to mark which group a case belongs to, for example:

  • smoke - used for test cases included in the smoke test (running a minimal set of tests to detect obvious defects in critical functionality);
  • auto test - used for test cases for which autotests are developed;
  • Priority 1 - used for test cases related to business functions labeled Priority 1.

As a result, a test design is always prepared for new improvements, producing a checklist document. In it, the checks are prioritized, and only the checks that fall under "Priority 1" or smoke get regression test cases created for them in the TestLink system.
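In practice the keywords make it easy to pull out a run of the right scope. A tiny sketch of that idea (the case IDs and keyword sets are invented):

```python
# A fragment of the test model: each case carries its TestLink keywords
test_cases = [
    {"id": "TC-001", "keywords": {"smoke", "Priority 1"}},
    {"id": "TC-002", "keywords": {"Priority 1", "auto test"}},
    {"id": "TC-003", "keywords": {"Priority 2"}},
]

def select(cases, keyword):
    """Pick the cases marked with a given keyword for a targeted run."""
    return [case["id"] for case in cases if keyword in case["keywords"]]

print(select(test_cases, "smoke"))       # a quick smoke run
print(select(test_cases, "Priority 1"))  # the first-priority business checks
```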

How to always have an up-to-date regression case model for the planned release and for a sudden HotFix on production?


By the start of the regression test, all preparatory work, including updating the suite or adding new cases to it, was completed. This means that the test cases relevant for the new release, if run against a HotFix, can produce defects on those very cases.

HotFix corrections were made on the old code branch (the last release), and only defect fixes changed the code, while the current test cases might already have been modified for the improvements of the future release. That is, running test cases relevant for a future release could lead to registering false defects and delay the HotFix release.

To avoid registering false defects and missing the HotFix testing deadlines, we decided to use a mechanism somewhat similar to maintaining code branches. Only the merging and updating of cases between branches (read: folders) in TestLink was carried out manually by testers according to a certain algorithm, whereas in the Gitflow model Git does this automatically.

Here is a set of test cases in TestLink:


The process of updating cases in TestLink that we came up with:

  • The test manager copies the folder with the cases of "Test Project 1.0.0" and creates a new test suite named after the next planned release. The result is the folder "Test Project 2.0.0".
  • After studying the improvements for the future release, the test cases in the "Test Project 2.0.0" folder are analyzed to see whether they need to be updated for the new improvements.

If necessary, update cases:

  • the tester responsible for the improvement makes the changes to the test case in the "Test Project 2.0.0" suite;
  • if a test case needs to be deleted, it is first moved to the "Delete" folder; this makes it possible to restore an accidentally deleted case or bring it back if the requirements are rolled back and the case is in demand again (cases are moved only from the folder corresponding to the test suite of the future planned release, in which the case will no longer be relevant);
  • if a test case is added, this is done only in the folder corresponding to the test suite of the future planned release;
  • test cases that are changed are marked with the keyword "Modified" (needed to evaluate how much the improvements affect the regression functionality);
  • test cases that are added are marked with the keyword "Added" (needed for the same metric).

Thus, we always have an up-to-date test suite corresponding to the previous release version of the system and use it for HotFix testing, while also updating the new test suite in preparation for regression testing and the stabilization of the new planned release. At some point there can be 3-4 test branches (read: folders) in TestLink at once, corresponding to different versions of the system, which is especially important when testing improvements in feature branches.

After each release, we can estimate how much our regression model has changed, based on the "Added" / "Modified" labels.

If the regression model grows a great deal while the volume of improvements in the release has not changed significantly compared with the previous release, it is a reason to question how correctly the priorities were set in the improvement checklist. Perhaps the tester chose the wrong cases for regression, and measures need to be taken: for example, explain the prioritization principle to the tester, involve the analyst in prioritization, or change the resulting regression model by removing redundant test cases.
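The "Added" / "Modified" keywords make this metric trivial to compute; here is a sketch with made-up data:

```python
def regression_change_share(cases):
    """Share of the regression suite touched by the release, based on the Added / Modified keywords."""
    touched = [case for case in cases if {"Added", "Modified"} & set(case["keywords"])]
    return 100.0 * len(touched) / len(cases)

# A made-up fragment of the "Test Project 2.0.0" suite
suite_2_0_0 = [
    {"id": "TC-001", "keywords": ["Modified"]},
    {"id": "TC-002", "keywords": []},
    {"id": "TC-003", "keywords": ["Added"]},
    {"id": "TC-004", "keywords": []},
]
print(f"{regression_change_share(suite_2_0_0):.0f}% of the suite changed")  # 50%
```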

How can the regression test model be optimized?


We started working with the regression test model and optimized the development of regression test cases by highlighting priorities and including only "Priority 1" cases in the regression suite. Still, after a while the test model became large, the cost of running its cases grew, and we stopped fitting into the time interval acceptable for a regression test on the project.

The time had come to implement test automation, the goals of which were:

  • reduce time to complete regression test cases;
  • use auto-tests to create preconditions for performing subsequent checks, thereby optimizing the time and human costs of creating test data;
  • improve the quality of regression testing by eliminating the influence of the human factor on the results of a manual test;
  • free testers from routine checks so that they can focus on testing new improvements.

A framework for automating GUI tests was developed in Java (Git was used as the version control system).

A separate automated testing team was involved in developing the autotests and successfully coped with the task. For future projects of a similar scale, the plan is to apply the existing groundwork and launch automated testing at the start of the project in order to benefit from it as early as possible. You can read more about automating the testing of large GIS in an article by my colleagues who were directly involved in organizing and conducting the automated testing.

On the part of functional manual testing, the regression model was also optimized. 

Using two large GIS as an example, the team and I came up with and implemented testing sessions, or test tours. The essence was as follows: analyze the business process in each subsystem and think through a session (tour) of checks that passes through this business process, simulating the actions users perform on it most frequently.

On one GIS project this was called a "test session", on another a "test tour". But the essence was the same: we thought through an end-to-end (through the entire improvement) key business scenario that completely covers the business idea of the implemented improvement (there may be several such scenarios).

The scenario of the test tour was agreed with the analyst, and detailed regression test cases were developed from it. When we did not manage to run the regression test on the entire test model, we could limit ourselves to a "regression session" or "regression test tour", which, as a rule, took less time and made it clear whether there were problems with the key business processes in the system.

Later, the test tours were covered by autotests, and the testers freed from routine checks switched to testing the improvements of the next planned releases.
An example of a test tour: "creating, editing, publishing, and annulling an entity".

A test tour can be complicated, for example:

  • grant the rights to create, edit, and annul;
  • create an entity in the "Personal Account" of a user with the "Specialist" role;
  • edit the entity under a user with a different role;
  • publish the entity and check that it is displayed in the open part of the portal;
  • annul the entity from the "Personal Account" of the administrator;
  • verify that the annulled entity is no longer available to users.

How to handle user incidents without violating the SLA?


I recommend not treating the localization of user incidents as a low-level task. Take it as part of the testing process. Moreover, it is a much more creative process than, for example, running through test cases. You need to apply logic and test design experience to get to the bottom of an error, catch it, and hand it over to development.

Of course, it is desirable to organize the GIS operation process with three support levels (ideally); then the testing team receives only the least obvious incidents, which often only testers can localize, while the rest are filtered out on the first two lines.

To comply with the SLA, we recommend making incident localization a duty with the highest priority in the testing team and introducing optimization methods so that incidents can be reproduced as quickly as possible.

To optimize the time spent by testers, you can:

  • maintain a project knowledge base with typical or frequently used SQL queries (a sketch of such a query is given after this list);
  • organize the ranking of incoming tasks in the bug tracker so that the tester immediately sees a new incident on the dashboard and takes it into work as the first priority;
  • add countdown time counters in JIRA for tasks that have SLAs;
  • set up an alert system for incidents;
  • give testers access to a copy of the production environment (for example, a copy of its database) so that incidents can be reproduced on data close to production without touching production itself;
  • maintain a "knowledge matrix" and a "responsibility matrix" for the testing group.
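As an illustration of the first point, a "typical query" from such a knowledge base might look like the sketch below; the table names, the DBMS (PostgreSQL is assumed here), and the connection details are all placeholders.

```python
import psycopg2  # assuming the GIS stores its data in PostgreSQL; details are placeholders

# A typical localization query: find the entity a user complains about and look at its status history
QUERY = """
    SELECT e.id, e.status, h.changed_at, h.old_status, h.new_status
    FROM entities e
    JOIN entity_status_history h ON h.entity_id = e.id
    WHERE e.external_number = %s
    ORDER BY h.changed_at
"""

with psycopg2.connect(host="incident-db.example", dbname="gis_copy",
                      user="tester", password="***") as conn:
    with conn.cursor() as cur:
        cur.execute(QUERY, ("123-456",))
        for row in cur.fetchall():
            print(row)
```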

The "knowledge matrix" was described above. As for the "responsibility matrix", it is a table in which, by analogy with the knowledge matrix, the functional blocks / modules / subsystems are listed along with who in the testing group is responsible for testing each of them; as a rule, this is the team lead or a senior / lead tester in the group.

What if the tester of one functional block / module / subsystem does not understand the whole picture of the business process on the project?


This is a sore subject that we have encountered on several large GIS projects. The team built a "knowledge matrix", the testers assessed their own degree of immersion in the functionality and signed up for their piece of it. But at some point experienced testers who had participated from the start of the project left the group, and the new specialists were not yet immersed in all the business processes and did not see the full picture. As a result, when cases in one module produced results that were supposed to be used in the next module, and incorrect results were fed to the input of the second module (the preconditions from the previous module were not met), the situation had to be analyzed and errors logged.

But the testers did not think about why such numbers came to their input and simply worked through their cases. Formally, the test was carried out, everything looked fine, no defects were found; and then, when the analyst accepted the functionality or during preparation for acceptance testing, significant problems in the business logic missed during testing came to light. The reason was a lack of understanding of the end-to-end business process performed by the system.

In this situation, the following was undertaken:

  • testers were immersed in the functionality with the involvement of the analyst;
  • training and exchange of experience were organized within the testing group: short talks at stand-up meetings about one's subsystem and what is happening in it, and discussion of the improvements planned for that subsystem in the next release;
  • analysts were involved, and templates were added to the technical specifications to state the degree of impact of an improvement on third-party modules / subsystems;
  • testing by test sessions (tours) was introduced: testers design them and coordinate them with the analysts (this reduces the risk of the team misunderstanding the business process and the number of business errors in the system).

Phew! I have tried to collect the main problems and recommendations for eliminating them, but this is far from all the useful information I want to share.

Chapter 4. Metrics for determining the quality of the project and the methodology for assessing the labor costs for testing


Before introducing the collection of metrics on the project, we asked ourselves: "Why should we do this?" The main goals were to monitor the quality of the testing team's work, the quality of the release delivered to production, and the performance of the members of the testing group, in order to understand how to develop the team.

We analyzed which metrics were needed to achieve these goals and divided them into groups. Then we thought about what could be measured without additional changes to the process, and where help from other members of the project team would be needed.

When all the preparatory stages were completed, the regular collection of metrics began: once a month / release / sprint / quarter, depending on the project and the characteristics of the production process.

Having collected the first metrics, we had to define the target indicators we wanted to strive for at that stage of the project. After that it remained to take the metrics regularly, analyze the reasons for deviations from the targets, and take measures aimed at improving the indicators, that is, optimize not only the testing process but the entire production process on the project.

Of course, not only testers were involved in improving quality: analysts, the development and release managers, and DevOps engineers were all key participants in the process, since everyone wanted to improve the quality of the release and of their own work.

Here is an example of what the collection of metrics and target indicators looked like on one of the completed projects:


Methodology for assessing labor costs for testing


In order to give the project manager more accurate testing completion dates, a methodology for estimating testing effort was developed based on the metrics collected from similar projects. It allows reporting test completion dates as accurately as possible and warns about testing risks.

This methodology is used on all GIS implementation projects; only some metric values may differ, but the calculation principle is the same.

Metrics used to perform a detailed assessment of testing costs


The time metrics were obtained by repeatedly measuring the actual costs of testers of different competence levels on different projects and taking the arithmetic mean.

The time to register an error is 10 minutes (the time to register one error in the bug tracker).
The time to validate an error / improvement is 15 minutes (the time to verify that one error / improvement has been corrected).
Time to write 1 TC (test case) - 20 minutes (the time to develop a test case in the TestLink system).
Time to execute 1 TC - 15 minutes (the time to run the checks of one test case in the TestLink system).
Time for the test. The total time obtained by adding up the costs in the checklist column "Lead time, min".
Test report time - 20 minutes (the time to write a report according to the template).
Time for errors. The planned time for registering all errors / clarifications (the time to register one error / clarification * the expected number of errors / clarifications; we assume 10 errors per improvement).
Total time for DV (defect validation). The planned time for validating all found and corrected errors / clarifications (the time to validate one error / clarification * the expected number of errors / clarifications).
Test data preparation. The time for preparing test data is estimated subjectively, based on the experience of testing similar tasks on the current project, and depends on several parameters: the scope of the task from the test analyst's point of view, the competence of the development team (a new, unknown team or a proven team with statistics on the quality of its work), integration between different modules, and so on.

Measuring the actual costs on one of the projects gave the following guidelines for test data preparation:

  • no more than 1 hour per task up to 60 hours of development,
  • no more than 3 hours per task up to 150 hours of development,
  • no more than 4 hours per task up to 300 hours of development.

In special cases, the planned costs for the preparation of test data could change in agreement with the test manager.
 
Time to write TCs. Estimated after the checklist of checks for the improvement is ready and prioritized; for the regression test, TCs are marked Priority 0 (the number of Priority 0 checks * 20 minutes, the writing time for one TC).
Time for regression by TC. The time to complete one iteration of regression testing by TC in the TestLink system (the number of TCs * the average execution time of one TC, 15 minutes).
Risks. 15% of the testing time is reserved (risks cover all manual operations, stand outages, blocking problems, and so on).
Total time for testing. The total cost of testing one improvement (test data preparation + test execution + time to register errors / clarifications + validation of errors / clarifications + time for regression by Priority 0 TCs + risks), in man-hours.
Total time for the task. The total cost of the entire testing task, in man-hours (total time for testing + time for the report + time for writing TCs).
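Putting the metrics above together, the estimate for one improvement can be computed with a small script like the sketch below. The 15% risk reserve is applied here to the whole testing time, which is one possible reading of the methodology, and the input figures in the example are invented.

```python
# Constants taken from the metrics above (all times in minutes)
REGISTER_DEFECT = 10      # register one error in the bug tracker
VALIDATE_DEFECT = 15      # verify one corrected error / improvement
WRITE_TC = 20             # write one test case in TestLink
RUN_TC = 15               # execute one test case in TestLink
REPORT = 20               # write the test report
EXPECTED_DEFECTS = 10     # estimated number of errors per improvement
RISK_FACTOR = 0.15        # 15% of the testing time reserved for risks

def testing_estimate(checklist_minutes, test_data_prep_minutes, priority0_tc_count):
    """Rough testing estimate for one improvement, in man-hours."""
    defects_time = REGISTER_DEFECT * EXPECTED_DEFECTS
    validation_time = VALIDATE_DEFECT * EXPECTED_DEFECTS
    regression_time = priority0_tc_count * RUN_TC
    testing_time = (test_data_prep_minutes + checklist_minutes
                    + defects_time + validation_time + regression_time)
    testing_time *= 1 + RISK_FACTOR
    total = testing_time + REPORT + priority0_tc_count * WRITE_TC
    return round(total / 60, 1)

# Example: a 4-hour checklist, 1 hour of test data preparation, 12 Priority 0 cases
print(testing_estimate(240, 60, 12), "man-hours")  # roughly 18 man-hours
```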

All these metrics are used on the project to solve various tasks related to planning, work evaluations, both temporary and financial. As practice has shown, such an estimate gives a minimum error (not more than 10%), which is a fairly high indicator of the reliability of the estimate.

Of course, you may not use some of these metrics, or your own statistics may differ greatly, but the principle of estimating the cost of testing work can be applied to any project, choosing the calculation formula that best suits your project and team.

Chapter 5. Recipe for a Successful GIS Testing Process


It is important to show test managers and testers that when faced with difficulties and new tasks, you can find solutions, optimize the testing process and try to apply the accumulated experience to future projects. 

I have prepared a surprise for all readers: a recipe for a successful GIS testing process and document templates that you can download and use on your projects.
So, here is the recipe for making the testing process of a large information system successful and what we recommend including in it; I will try to state it briefly and concisely.

From the analytics process:

  • implement templates for technical requirements;
  • implement rules for developing technical requirements in the group of analysts;
  • develop a regulation for notifying the project team that technical requirements are ready.

From the testing process:

  • create a knowledge matrix and a functional requirements coverage matrix and keep them up to date;
  • prioritize the functionality together with the analyst and cover the most important functions with tests first;
  • implement requirements testing as soon as the technical specifications are ready for development;
  • agree on the testing strategy in advance, including:

    ○ the approach to cross-browser testing;
    ○ the approach to load testing;
    ○ the approach to integration testing;
  • study the branching model used by the developers and require deploy instructions for every test stand;
  • store test cases in a single test management system, structure them by functional modules, and mark them with keywords;
  • maintain separate test suites for the planned release and for HotFix testing;
  • optimize the regression model with priorities, test sessions (tours), and automation;
  • organize the processing of user incidents with the SLA in mind;
  • collect metrics regularly and compare them with target indicators;
  • estimate testing effort using a methodology based on the collected metrics;
  • use the recommendations of more experienced colleagues, developments from other projects, ready-made cheat sheets , conduct brainstorming sessions with the team and look for new methods for optimizing and improving the process.

Right now I am preparing to test a new GIS. This is what my working Wiki looks like; it already takes into account many of the points we recommend:


Surprise for patient readers.


If you have read the article to the end, you deserve a gift. I want to share useful templates that you can use in your work:

  • the checklist template, which includes a minimum set of recommendations for testing the interface elements of screen forms (of course, there are broader sets of checks; this is just an example) and contains formulas for calculating testing costs with explanations of the calculation method;
  • test report template ;
  • matrix template : knowledge / distribution by browsers / platforms / vacation schedule of the project team;
  • A list of key metrics for the project with explanations.

I hope our recommendations, examples, ideas, links and my templates will help many teams competently build the testing process, optimize their costs and successfully cope with the tasks in a responsible and complex project. 

If you want to join the LANIT testing team and participate in GIS testing, I advise you to look at our company's vacancies.

Come to our Testing Schools!


I wish you all interesting projects and good luck!

