In agile development projects, the quality assurance process often relies on weak, and at times unnecessary, performance and quality indicators. How and when to measure quality in software development is a frequently asked question. Without a thorough analysis of quality characteristics, and without requirements that reflect those characteristics, teams end up answering it with pointless metrics and shipping a poor-quality product.

In this post we will go through seven software quality performance indicators, analyze the benefits each one offers the team, and look at how they can improve the overall quality of the product.

I would like to start the list with developer-domain quality indicators and then move on to quality assurance (QA) metrics.

1. Complexity metrics

Complexity metrics are an important indicator: they measure code maintainability, one of the pillars of software quality. The long-term efficiency of your development team is tightly linked to this quality factor, so we need a proper way to measure it.

The most commonly used complexity metrics are cyclomatic complexity, Halstead metrics, depth of inheritance, class coupling, and nesting depth.
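As a rough illustration of the first of these, here is a minimal sketch of a cyclomatic complexity counter built on Python's standard `ast` module. It approximates McCabe's definition by counting decision points (branches, loops, boolean operators, exception handlers); real tools apply more nuanced rules, so treat this as a toy.

```python
import ast

def cyclomatic_complexity(source: str) -> int:
    """Approximate McCabe cyclomatic complexity: 1 + number of decision points."""
    tree = ast.parse(source)
    decisions = (ast.If, ast.For, ast.While, ast.And, ast.Or,
                 ast.ExceptHandler, ast.IfExp)
    return 1 + sum(isinstance(node, decisions) for node in ast.walk(tree))

snippet = """
def classify(n):
    if n < 0:
        return "negative"
    elif n == 0:
        return "zero"
    return "positive"
"""
print(cyclomatic_complexity(snippet))  # 3: the base path plus two branch points
```

A common rule of thumb is to flag functions whose complexity exceeds roughly 10 as candidates for refactoring.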

2. Build stability status

Measuring the days since the last red status (build failure), along with the number of consecutive red days, is a good indicator of whether the build process is stable and whether there are recurring code-quality issues even before the system testing phase. It can also point to possible improvements in the team's white-box quality assurance practice.
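Both numbers fall out of a simple scan over the build history. The sketch below assumes a list of `(date, status)` records pulled from your CI system (the data shape is hypothetical; adapt it to whatever your CI server exports):

```python
from datetime import date

def build_stability(history):
    """history: list of (date, status) tuples, oldest to newest;
    status is 'green' or 'red'. Returns (days since last red build,
    longest run of consecutive red builds)."""
    last_red = max((d for d, s in history if s == 'red'), default=None)
    days_since_red = (history[-1][0] - last_red).days if last_red else None
    longest = current = 0
    for _, status in history:
        current = current + 1 if status == 'red' else 0
        longest = max(longest, current)
    return days_since_red, longest

history = [
    (date(2024, 5, 1), 'green'),
    (date(2024, 5, 2), 'red'),
    (date(2024, 5, 3), 'red'),
    (date(2024, 5, 4), 'green'),
    (date(2024, 5, 8), 'green'),
]
print(build_stability(history))  # (5, 2)
```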

3. Code coverage

Probably the most widely used developer-domain metric, and a good way to stay on top of how much of the code is covered by unit tests. If the organization is focused on fixing issues as close to initial development as possible, this measure helps ensure developers run enough regression tests early, at the lowest level, before the code undergoes further testing. When using this measurement, be sure to understand the difference between statement and branch coverage and what your coverage tool's results actually represent. I recommend branch coverage, as it reflects the quality of coverage better than statement coverage alone.
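The gap between the two measures is easy to show with arithmetic. In the hypothetical function sketched in the comments below, a single test can touch every statement while still leaving one branch untested:

```python
def coverage(executed, total):
    """Coverage as a percentage of executed items out of total items."""
    return 100.0 * executed / total

# A function with 4 statements and one `if` (2 branches: taken / not taken):
#
#     def discount(price, is_member):
#         total = price          # stmt 1
#         if is_member:          # stmt 2
#             total *= 0.9       # stmt 3
#         return total           # stmt 4
#
# One test with is_member=True executes all 4 statements ...
print(coverage(4, 4))   # 100.0 -> statement coverage looks perfect
# ... but exercises only 1 of the 2 branches of the `if`:
print(coverage(1, 2))   # 50.0  -> branch coverage reveals the untested path
```

This is why a tool reporting 100% statement coverage can still hide untested paths; branch coverage surfaces them.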


4. Crash statistics

This is a great post-release metric, especially for mobile applications. Gather the data per released version and track the robustness of the software; the overall number of crashes and the percentage of crash-free users are good metrics to start with. There are already good tools that monitor application performance and provide the necessary data and statistics.
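The crash-free-users figure itself is just a ratio per version. A minimal sketch, assuming you can export active-user and crashed-user counts per release (the numbers below are made up):

```python
def crash_free_users(active_users, crashed_users):
    """Percentage of users who never hit a crash on a given version."""
    return 100.0 * (active_users - crashed_users) / active_users

# Hypothetical per-version counts: version -> (active users, users who crashed)
per_version = {"2.3.0": (12000, 420), "2.4.0": (15000, 180)}
for version, (active, crashed) in per_version.items():
    print(version, round(crash_free_users(active, crashed), 2))
# 2.3.0 96.5
# 2.4.0 98.8
```

A rising crash-free percentage from version to version is the trend you want to see.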

Visual example from fabric.io:

[Image: Crash statistics dashboard in fabric.io]

5. Major/Critical issue distribution

We turn to another indicator that supports catching issues early in the development process, or, as most call it, the "shift-left" approach: performing various testing techniques very early in the process to reveal defects as soon as possible. By implementing this approach in your organization, stakeholders can dramatically decrease bug-fixing costs. But to monitor whether what you're doing in all QA stages is working, there are some defect-tracking indicators to watch:

  • major/critical issues found during the post-development validation stage indicate how well white-box QA techniques are working;
  • major/critical issues found during the release candidate testing stage indicate how well black-box QA techniques are working.
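Tallying these from a defect tracker export is straightforward. The sketch below assumes hypothetical defect records of `(severity, stage-found)`; the stage names are placeholders for whatever your tracker uses:

```python
from collections import Counter

# Hypothetical defect records: (severity, stage where the defect was found)
defects = [
    ("critical", "post-development"),
    ("major", "post-development"),
    ("major", "release-candidate"),
    ("minor", "release-candidate"),
    ("critical", "production"),
]

# Count only major/critical defects, grouped by the stage that caught them
distribution = Counter(
    stage for severity, stage in defects if severity in ("major", "critical")
)
print(dict(distribution))
# {'post-development': 2, 'release-candidate': 1, 'production': 1}
```

A healthy shift-left trend shows the counts concentrating in the earliest stages over time.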

Here is a visual example of how it could look:

[Image: Major/critical issue distribution]

6. Test automation proportion

Whenever we talk about regression testing, it is crucial to think about automation: first, to significantly reduce manual release candidate testing time, and second, to keep manual regression tests from becoming a mechanical procedure prone to human error and, with it, missed critical defects. The measurement here is the ratio of automated tests to all defined test cases, preferably per feature category. That way, at each development cycle planning, the team can estimate how much automation work is needed to cover regression for changes in existing features. Each feature column should contain only high-priority test cases with a high risk of affecting user experience.
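The per-feature ratio is a one-liner once the counts are available. A minimal sketch, with made-up feature names and counts of (automated, total high-priority) cases:

```python
def automation_ratio(cases):
    """cases: mapping of feature -> (automated, total high-priority cases).
    Returns the percentage automated per feature."""
    return {f: round(100.0 * a / t, 1) for f, (a, t) in cases.items()}

# Hypothetical counts per feature category
per_feature = {"checkout": (18, 24), "search": (30, 30), "profile": (5, 20)}
print(automation_ratio(per_feature))
# {'checkout': 75.0, 'search': 100.0, 'profile': 25.0}
```

A low ratio on a frequently changed feature (like "profile" here) flags where the next cycle's automation effort should go.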

Visual example:

[Image: Test automation proportion]

7. Time spent in release

Time invested in release candidate (RC) testing, or the pre-deployment validation phase, is essential to watch if the team wants regular releases. As the development process is optimized, RC validation time should decrease, so the team can monitor the performance of the whole process by watching the trend of this indicator. Of course, the team can also set a baseline to measure against. I recommend starting the countdown when the release has been cut and stopping when there is a clear decision from the QA department on release deployment.
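With those two timestamps per release, the indicator is a simple date difference. A sketch with hypothetical release dates, showing the downward trend you would hope to see:

```python
from datetime import date

def days_in_release(cut_date, decision_date):
    """Days from the release-candidate cut to the QA go/no-go decision."""
    return (decision_date - cut_date).days

# Hypothetical trend over the last three releases: (name, RC cut, QA decision)
releases = [
    ("1.8", date(2024, 3, 4), date(2024, 3, 11)),
    ("1.9", date(2024, 4, 1), date(2024, 4, 6)),
    ("2.0", date(2024, 4, 29), date(2024, 5, 2)),
]
for name, cut, decision in releases:
    print(name, days_in_release(cut, decision))
# 1.8 7
# 1.9 5
# 2.0 3
```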

Visual example:

[Image: Time spent in release]

We have walked through seven indicator groups that, in combination, can be a powerful tool for identifying mistakes early in the development process and for showing how the quality assurance process supports business growth.

When choosing the right metric as a team, always start with the data you already have that links to the indicator, and with the resources available for gathering it. Your team will usually be able to pinpoint the probable bottlenecks in the work process, and that is good ground for defining a metric to monitor the assumption.

There is no single best metric. Teams that strive for success should stick to "test, learn, and adapt" and create the combination that suits them best. Nevertheless, I hope these metrics serve as a good reference point and help you determine the most suitable ones for your team.
