
Value-Based Test Automation and Metrics


The world is changing, and so is the software industry. ‘Automation’ is no longer a fantasy but a necessity, and it is fast becoming a culture. Automation of every kind has been strongly encouraged across all sectors over the past decade. New tools and technologies emerge constantly, each claiming to deliver high value through automation, and organizations are ready to invest hefty amounts in them for their potential long-term benefits. The emergence of DevOps emphasizes the need for automation even further, as it aspires to nearly 100% automation.

In this eager push for automation, we must find answers to a few fundamental questions: “Was automation worth the investment?”, “What were the benefits compared to the system before automation?”, and “To what degree did automation deliver the value it promised?”

In essence, how good is our Automation and how do we measure its success?

Typical Metrics used in Test Automation

The paradigm shift from Quality Assurance to Quality Engineering has resulted in a focus on automation like never before.

Below are some of the common expectations from automation.

  • Save time and effort for the testing team.
  • Detect issues quicker, giving more time to fix and retest.
  • Provide coverage for business-critical scenarios.
  • High accuracy, i.e., ideally no human errors.
  • Develop once and utilize repetitively to achieve high ROI.

Since test and process automation are becoming critical, and expensive when costly tools are involved, their success parameters must be carefully defined. Several popular metrics are already in use to measure the success or failure of the automation implemented. However, at times they have certain limitations.

Metrics such as Defect Slippage and Rejection Index are widely used to measure automation success or failure. Since all these metrics drive future business decisions, it is essential to assess their calculations and to customize rather than standardize them at the organization or project level to maximize their usability.

What is Value Neutral & Value-Based Testing?

It’s a common belief, or rather an expectation, that testers should test the entire software in any implementation, migration, or new development. Automation is no exception to this belief: a higher number of automated tests is always demanded. The “more is better” mindset can deviate from the original value expected from automation.

The ‘Value-Neutral’ mindset treats all tests as having the same value and equal importance, so the scope of automation is finalized merely on automation feasibility. But just because something is ‘automatable’ doesn’t mean it should be automated. The success/failure calculations therefore need more qualitative analysis.

Let’s dig a little deeper here.

Everyone would agree that creating and executing test cases that provide 100% coverage for any software is highly impractical and often impossible. The test case count can grow without limit, while time and budget remain restricted.

If we apply the popular Pareto principle here, 80% of the value from testing can be achieved from 20% of the tests, reducing the scope by a drastic margin. This means shortlisting the ‘high value’ tests from the superset and focusing automation on those. This ‘Value-Based’ approach to automation has its own benefits and can be immensely useful in several scenarios.
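The shortlisting step can be sketched as a simple ranking exercise. This is a minimal illustration, not a prescribed method: the test names and value scores below are hypothetical, and in practice the value score would come from the risk and business-impact analysis described in this article.

```python
def shortlist_high_value(tests, fraction=0.2):
    """Return the top `fraction` of tests, ranked by their value score."""
    ranked = sorted(tests, key=lambda t: t["value"], reverse=True)
    count = max(1, round(len(ranked) * fraction))
    return ranked[:count]

# Hypothetical test inventory with assumed value scores (0-100).
tests = [
    {"name": "login_happy_path", "value": 95},
    {"name": "checkout_payment", "value": 90},
    {"name": "profile_avatar_upload", "value": 20},
    {"name": "footer_link_check", "value": 5},
    {"name": "order_cancellation", "value": 70},
]

top = shortlist_high_value(tests)
print([t["name"] for t in top])  # ['login_happy_path']
```

With five tests and the Pareto fraction of 20%, only the single highest-value test makes the cut; the ranking, not the raw count, decides what gets automated.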

The value of testing in any project lies in finding issues and performance glitches in the software and providing quality assurance when the software/product is ready to be deployed. Expectations from automation are a level higher: the same tests should detect issues quicker, leaving more time to fix and retest. Automated tests, once developed, are also expected to be utilized many times (ideally until eternity, practically for a prolonged duration). A Value-Based Test Automation approach can help determine the right automation candidates and measure automation success through the additional metrics defined in a subsequent section.

Important factors for Value-Based Automation

Theoretically, value-based automation seems quite logical, but several important factors need to be considered here.

  • The Human Factor :

Value is more than money. It’s above the arithmetic that defines good or bad, success or failure. The human factor is crucial in Value-Based Automation, and in Value-Based Software Engineering in general: it touches every level of the hierarchy and makes each responsible for its contributions. In a value-based culture, management and leadership establish and live the values of the organization.

  • The Tools & Framework Factors :

Automation decisions are often taken based on a particular tool or automation framework. It’s important to note, however, that tools are only the means to achieve automation; the original objective is to realize business benefits and provide value to the organization. As existing automation tools constantly evolve and new ones appear each year, a fair and recurring cost-benefit analysis of the existing toolset against the available alternatives is needed.

Automation frameworks also play a pivotal role in determining the success of automation. A good framework can make an automation tester’s life much easier and steer the entire program toward success.

Selecting the right automation tools and frameworks is among the most crucial decisions and can decide the fate of the automation effort. If framework development is part of the project timeline, adequate planning is required. Incorrect tool or framework selection can drastically slow down automation and, in extreme cases, cause it to fail.

  • The Investment Factor :

The automation solution must provide a good Return on Investment (ROI) in a relatively short time. The investment factor is important and is often tied to the vision and goals of the organization. Investing in a high-cost but efficient and robust toolset/framework can yield good ROI over a longer duration. Open-source tools may lure with immediate ROI and cost-effectiveness, but they can become an obstacle to long-term growth plans, primarily due to a lack of vendor support and, at times, higher maintenance.

Value-Based Automation Metrics

In the previous sections, we looked at the need for Value-Based Test Automation over the Value-Neutral approach. Now, let’s consider a few metrics that can complement the existing metrics used to measure automation success and drive important business decisions.

  • Risk Exposure (RE):

This is not an unknown term in the business world. Risk exposure is a measure of potential future loss due to a specific event. From a test automation standpoint, the specific event is a defect, and the potential future loss is the impact if the defect is not resolved (i.e., its severity).

Risk Exposure = [Probability of Defect] * [Defect Severity]

The probability of a defect is driven by the feature under test as well as the track record of the developer. This is a crucial metric because it enforces focus on automating the ‘right’ scenarios, aimed at covering areas with high defect probability.
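The calculation itself is a simple product. In this minimal sketch the probability values and the 1-to-5 severity scale are illustrative assumptions, not a standard:

```python
def risk_exposure(defect_probability, defect_severity):
    """RE = probability of a defect (0..1) * severity weight."""
    return defect_probability * defect_severity

# Assumed figures: severity on a 1 (cosmetic) to 5 (critical) scale.
re_payment = risk_exposure(0.4, 5)    # new payment feature, high defect likelihood
re_footer = risk_exposure(0.05, 1)    # stable, cosmetic area
print(re_payment, re_footer)  # 2.0 0.05
```

Ranking candidate scenarios by RE pushes automation effort toward the payment flow rather than the footer check, which is exactly the “right scenarios” focus the metric is meant to enforce.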

  • Risk Reduction Leverage (RRL)

The team first identifies the risks and then automates the tests based on risk priorities. The automated tests executed thus indicate the risks mitigated.

Although this sounds like a plain automated-test count, it isn’t: value-based analysis precedes the decision of which tests to automate, so every test executed counts toward the end value.

RRL is the metric that relates the risk reduction achieved by executing a test to the cost or effort involved in preparing and executing it.

RRL = [RE before Testing – RE after Testing] / [Risk Reduction Cost]
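As a quick sketch of the formula, with purely illustrative numbers (exposure values on the same scale as the RE metric, cost in effort-hours):

```python
def rrl(re_before, re_after, risk_reduction_cost):
    """RRL = (RE before testing - RE after testing) / cost of reducing the risk."""
    return (re_before - re_after) / risk_reduction_cost

# Assumed: automating a suite drops exposure from 2.0 to 0.4 at 8 effort-hours.
print(round(rrl(2.0, 0.4, 8), 2))  # 0.2
```

A higher RRL means more risk retired per unit of effort, so comparing RRL across candidate suites is one way to sequence automation work.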

  • Automation Gain :

Automation Gain is a commonly used metric that quantifies the effort saved by a particular automation suite.

Automation Gain (Hrs) = [EMTE for an Automation Test or Suite] – [Execution time of the same test or Suite through Automation]

Here, EMTE (Equivalent Manual Test Effort) is, in simple terms, the time required to execute the test manually. Although straightforward, this is a crucial metric, since the time saved per automated execution is a direct value-add for the organization and the customer.

The Automation Gain can be used to compare different tools/frameworks and can vary for applications under test.
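In code the metric is a subtraction, and the gain compounds with every execution. The EMTE, automated run time, and execution count below are assumed figures for illustration:

```python
def automation_gain(emte_hours, automated_run_hours):
    """Gain (hrs) = EMTE - automated execution time for the same suite."""
    return emte_hours - automated_run_hours

# Assumed: a regression suite taking 6h manually runs in 0.5h automated.
gain_per_run = automation_gain(emte_hours=6.0, automated_run_hours=0.5)
total_gain = gain_per_run * 40  # suite executed 40 times in the period
print(gain_per_run, total_gain)  # 5.5 220.0
```

Computing the same per-run gain under two different tools or frameworks gives a concrete basis for the tool comparison mentioned above.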

  • Automation Utilization :

Automated tests are executed in agile sprints, software releases, or, in general, within a predefined timeframe. A major objective behind developing automation scripts is to ‘develop once and utilize multiple times’. This metric captures the worth of a particular automated test relative to the original investment and the estimated future maintenance effort.

A large number of automated tests may be developed at the outset, with the aim of maximum coverage. As time progresses, however, many of these tests drop out of execution, either because they’re no longer relevant or because they prove to be ‘heavy maintenance’.

The Automation Utilization metric helps detect and act upon such instances.

Automation Utilization = [The number of instances a particular automated test or suite is executed] / [Corresponding Development Efforts + Corresponding Maintenance Efforts]

The utilization can be computed over a time duration or even at the release level. Needless to say, the higher the value of this metric, the worthier the scenario or automation suite has proven to be.
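A minimal sketch, with effort measured in hours and all figures assumed, shows how the metric separates a well-used suite from shelfware:

```python
def automation_utilization(executions, development_effort, maintenance_effort):
    """Executions / (development effort + maintenance effort), effort in hours."""
    return executions / (development_effort + maintenance_effort)

# Assumed figures: a heavily reused suite versus one that rarely runs.
well_used = automation_utilization(executions=120, development_effort=40, maintenance_effort=10)
shelfware = automation_utilization(executions=3, development_effort=40, maintenance_effort=20)
print(round(well_used, 2), round(shelfware, 2))  # 2.4 0.05
```

Tracking this per release makes the low-utilization suites visible early, before their maintenance cost quietly outgrows their value.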

  • Automation Maintainability

One of the most important but probably most undervalued metrics. Automation is expected to be easy to use, easy to scale, and low-maintenance. Much depends on the thought process during initial development, which drives this metric.

Automation Maintainability = [Maintenance efforts for Automated Test or Suite] / [Development Efforts for same Automated Test or Suite]

This metric can give good visibility on the maintenance overheads and can drive the decisions on reconsidering the approach or tool/framework selections.
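Sketched as a ratio (lower is better), with the effort figures below being illustrative assumptions:

```python
def automation_maintainability(maintenance_effort, development_effort):
    """Maintenance effort / development effort for the same test or suite."""
    return maintenance_effort / development_effort

# Assumed: two suites that each took 50h to build.
stable_suite = automation_maintainability(maintenance_effort=5, development_effort=50)
fragile_suite = automation_maintainability(maintenance_effort=60, development_effort=50)
print(stable_suite, fragile_suite)  # 0.1 1.2
```

A ratio above 1.0, as in the fragile suite, means maintenance has already cost more than the original build, which is a strong signal to reconsider the approach or the tool/framework selection.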

  • Automation Reusability

Many automated tests share common steps, so it’s important to design the automation strategy, framework, and scripts for maximum reusability, lowering the cost and increasing the ROI.

Reusability can be within the same or different projects and can be achieved through multiple means such as reusable functions, parameterization, etc.

The simple calculation below determines the reusability achieved: it compares the effort required for an automated test built with reusable components against the effort had the same test been automated from scratch.

Automation Reusability = [Efforts for Automated Test or Suite with reusability] / [Efforts for Automated Test or Suite without reusability]
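As a small sketch (lower is better, since reuse should shrink the numerator), with assumed effort figures:

```python
def automation_reusability(effort_with_reuse, effort_without_reuse):
    """Effort with reusable components / effort to automate the same test from scratch."""
    return effort_with_reuse / effort_without_reuse

# Assumed: a checkout test reusing existing login/search helpers takes
# 4h instead of the 16h it would take to script everything from scratch.
print(automation_reusability(4, 16))  # 0.25
```

Here a ratio of 0.25 means reuse cut the scripting effort to a quarter; a ratio near 1.0 would suggest the framework’s reusable functions and parameterization are not actually being leveraged.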


Conventional Value-Neutral automation testing and its corresponding metrics have several shortcomings. Value-Based automation is a good solution that can help overcome many of them. This is not just a process change, but a mindset change: a different way of looking at your automation solution.

The primary focus should be on the value achieved from the solution delivered and on establishing effective metrics to measure that value as precisely as possible. This is not a “one size fits all” solution, but it can be a thought-provoking one for those who intend to stay in the automation industry for a very long time.


References

  1. Value-Based Software Engineering – Barry Boehm
  2. Nondeterministic Coverage Metrics as Key Performance Indicator for Model- and Value-based Testing – David Faragó
  3. Measuring the Effectiveness of Automated Functional Testing – Ross Collard


Nachiket Shembekar is a QET leader with 14.5 years of IT work experience in delivering Development & Testing projects. He has managed multiple testing projects under EUC and is currently playing the role of Engagement Manager for a US-based Agricultural customer.