
Top five topics for new-age quality engineering other than the budget

Capgemini
March 15, 2021

The World Quality Report (WQR) has been capturing the pulse of the QA community around the world for 12 years in a row now. No other industry document holds such rich historical data about how QA has evolved. With more than 1,700 interviews across 10 sectors and across designations – from CIOs, IT directors, and development and R&D heads to QA/testing professionals – it presents a 360-degree view of the state of quality.

I have been actively discussing the WQR observations with my customers and fellow QA community members. In addition to one of the most important topics – budget – which I will discuss in detail in my next blog, we normally talk about the following top five topics. Let’s discuss each of them in detail, along with its long- and short-term impact:

  • Narrowing the gap between developers and testers
  • Not losing focus on the domain at the cost of engineering
  • Use of intelligent techniques for testing
  • Test Environment Management (TEM) and Test Data Management (TDM) are key for continuous testing, so focus on them
  • Automation as a platform, not a capability

Narrowing the gap between developers and testers

With more focus on the agile and engineering way of doing things (commonly known as new ways of working – NWOW), the skillsets of developers and testers overlap more than ever. Software Development Engineers in Test (SDETs) are expected to do more than just functional testing. They are expected to develop unit tests and, in some cases, take care of minor bug fixes and the coding of enhancements as well. Most organizations evaluate SDETs using coding test platforms such as Codility, Coderpad, etc. before they are onboarded.
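
To make that expectation concrete, here is a minimal sketch of the kind of unit test an SDET writes alongside feature code. The discount function and the tests are invented examples, assuming a pytest setup; they are not from any specific codebase.

```python
# Hypothetical function under test plus the pytest-style unit tests an SDET
# would be expected to deliver with it.
import pytest

def apply_discount(price: float, percent: float) -> float:
    """Apply a percentage discount; invented example logic."""
    if not 0 <= percent <= 100:
        raise ValueError("percent must be between 0 and 100")
    return round(price * (1 - percent / 100), 2)

def test_apply_discount_happy_path():
    assert apply_discount(200.0, 25) == 150.0

def test_apply_discount_rejects_bad_percent():
    with pytest.raises(ValueError):
        apply_discount(100.0, 150)
```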

One of the largest banks globally is counting on this becoming common practice soon, to ensure that squad/scrum teams have more engineering power than support functions. Typically, three to four members of a team sit in the engineering function and the rest in support functions. With SDETs doing development work as well, that number could go up to 5–6 in a team of 7–8 – an increase of 25–30% in engineering capacity.

There is an acute shortage of SDET skills in the market. The key is not to use pure developers as SDETs; they need to have a flair for testing. Testing is their primary role, so there is potential to groom this talent right from university.

Not losing focus on the domain at the cost of engineering

“There is a lot of focus on the engineering way of doing things. While there is benefit in adopting and following quality engineering practices, this should not be done at the expense of business expertise and focus,” says Dhiraj Sinha, a lead author of the WQR. He discussed this extensively in his blog. I still strongly believe the argument that the QA team brings the right domain context, especially when the code is being stitched together. One can achieve 100% test automation, but the key question is: is it required? After a certain point, automation provides diminishing returns. Domain knowledge provides the right context for determining the focus areas for automation as well as the prioritization. For instance, while it is possible to test all the currency pair combinations in a trading system, the tester’s domain knowledge determines which pairs are valid and which are used most.
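
As a rough illustration of that prioritization, here is a minimal Python sketch that ranks currency pairs by trade volume and automates the head of the list first, rather than automating all combinations. The currencies and volumes are invented for illustration.

```python
# Domain-driven prioritization sketch: enumerate all currency pairs, then
# rank them by (assumed) trade volume so automation starts where it pays off.
from itertools import permutations

currencies = ["USD", "EUR", "GBP", "JPY", "CHF"]
all_pairs = list(permutations(currencies, 2))  # 20 ordered pairs

# Illustrative volumes a domain expert might supply; unknown pairs default to 0.
volume = {("EUR", "USD"): 9_100, ("USD", "JPY"): 5_400, ("GBP", "USD"): 4_300}

ranked = sorted(all_pairs, key=lambda pair: volume.get(pair, 0), reverse=True)
print(f"{len(all_pairs)} possible pairs; automate these first: {ranked[:3]}")
```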

With the lack of domain focus in scrum and integration teams, my prediction is that we will see some improvement in team velocity, but at the same time a surge of functional flaws that start surfacing during UAT and even in production. So, there needs to be a balance of engineering and domain skills.

As indicated above, we will see a surge in engineering talent as a result of the SDET demand, but we might lose the domain skills. In the next three to four years, as digital transformation becomes mainstream, the need for functional testers will arise again, and I am afraid that most companies will struggle to find the right domain experts.

Use of intelligent techniques for testing

In combination with the domain context, the testing team needs to start using AI and machine learning techniques to keep the domain focus laser sharp and to prioritize defect detection and built-in quality. The most common use cases for AI and machine learning in testing are:

a. Defect prediction: Predict where defects will occur and their severity using various algorithms.

b. What-if analysis: Simulate various “cause” conditions and find their “effects” – e.g., simulate environment downtime and understand the impact on the timeline, or simulate a code change in a module and understand the potentially defect-prone areas.

c. Test case (TC) pass/fail prediction: Based on trend analysis and what-if analysis, one can determine whether a TC will pass, fail, or potentially be blocked.

d. TC prioritization: Based on a), b), and c), one can prioritize which TCs to execute first, considering their potential to unearth defects earliest.

e. TC optimization: Use natural language processing (NLP) to determine which TCs are duplicates or whether the functionality is already covered by other TCs, thus eliminating test cases and optimizing the size of the test pack (see the sketch below). Test optimization can also be done using risk-based analysis, carried out on various parameters such as the potential for failure based on which part of the code has changed, the regression impact of the code change, the previous behavior of the test artifacts for a given functionality, etc.
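
To make use case e. concrete, here is a minimal sketch that flags likely duplicate test cases using TF-IDF similarity, assuming scikit-learn is available. The test case descriptions and the similarity threshold are illustrative assumptions that would need tuning on a real test pack.

```python
# Flag likely duplicate test cases by comparing their descriptions with
# TF-IDF vectors and cosine similarity.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

test_cases = [
    "Verify login succeeds with a valid username and password",
    "Check that a user can log in with valid credentials",
    "Verify an error is shown when the password is incorrect",
    "Validate funds transfer between two accounts in the same currency",
]

vectors = TfidfVectorizer(stop_words="english").fit_transform(test_cases)
similarity = cosine_similarity(vectors)

THRESHOLD = 0.5  # assumed starting point; tune per test pack
for i in range(len(test_cases)):
    for j in range(i + 1, len(test_cases)):
        if similarity[i, j] >= THRESHOLD:
            print(f"TC{i} and TC{j} look like duplicates "
                  f"(similarity {similarity[i, j]:.2f})")
```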

These techniques are to be used to identify and eliminate potential issues. While the above use cases are for test design and execution, AI is also used in test automation, especially for:

  1. Identifying UI changes
  2. Naming object repositories and creating intelligent reusable functions, etc.
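
As a rough sketch of the first point, here is what the “self-healing” idea behind AI-assisted UI automation can look like with Selenium: when the primary locator breaks after a UI change, fall back to alternative attributes recorded for the same element. The locators and element are invented examples, and a live `driver` is assumed.

```python
# Self-healing locator sketch: try a primary locator, then fall back to
# alternatives an AI engine might have recorded for the same element.
from selenium.common.exceptions import NoSuchElementException
from selenium.webdriver.common.by import By

def find_with_healing(driver, locators):
    """Try each (strategy, value) pair in turn and return the first match."""
    for strategy, value in locators:
        try:
            return driver.find_element(strategy, value)
        except NoSuchElementException:
            continue  # locator broken, try the next candidate
    raise NoSuchElementException(f"no locator matched: {locators}")

# Primary id first, then attributes more likely to survive a UI redesign.
login_button_locators = [
    (By.ID, "login-btn"),
    (By.CSS_SELECTOR, "button[data-test='login']"),
    (By.XPATH, "//button[normalize-space()='Log in']"),
]
# Usage (with a live driver): find_with_healing(driver, login_button_locators)
```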

If one really looks at the commercial testing tools, every tool will claim to have AI capabilities, with some or most of the use cases for test design, execution, automation, etc. built in. Capgemini’s SmartQA™ and Smart Foundry solutions effectively use the underlying AI and ML engine. So, in the future, AI will be in every aspect of testing, but the differentiator will be how accurate the algorithms are and whether they really add value and confidence to the entire ecosystem.

TEM and TDM are key for continuous testing, so focus on them

Continuous integration has been around for some time and is now a mature practice. Continuous deployment is still in the nirvana state, where everything is continuously developed, tested, and deployed in an automated fashion.

Test Environment Management (TEM) and Test Data Management (TDM) are key topics highlighted in the last few WQR reports, where significant progress has not been made. The good news is that this has been acknowledged as a key concern; however, it remains an area that needs attention. Cloud has provided an excellent platform for building and provisioning environments on the fly, but the real point of contention is: “is the environment fit for purpose?” A fit-for-purpose environment is one that has the right configuration, size, data, integrations, etc. for a given testing need. For example, the environment needs of integration testing are different from those of system testing or performance testing.

Typically, test environments do not get the same attention as production environments because of the difference in cost and criticality, but they can leverage some of the best practices and tools used in production. Based on the multiple engagements I have managed and observed, very few people – in most cases no one – understand the end-to-end environment map. Most teams understand their application and its immediate neighbors, but nothing beyond that. That’s why it is so important to create an end-to-end environment map with configuration details and compare it with production. With test environments now moving to the cloud, this might be a simpler activity than with traditional on-premises and legacy environments.
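
Here is a minimal sketch of that comparison, assuming the environment map has already been captured as key/value configuration per environment; the keys and values are invented examples.

```python
# Compare a test environment's captured configuration against production
# to spot fit-for-purpose gaps (version drift, undersizing, weaker TLS).
prod = {"app_version": "4.2.1", "db": "oracle-19c", "tls": "1.3", "nodes": 12}
test = {"app_version": "4.2.1", "db": "oracle-12c", "tls": "1.2", "nodes": 2}

for key in sorted(prod.keys() | test.keys()):
    if prod.get(key) != test.get(key):
        print(f"drift in '{key}': prod={prod.get(key)!r} test={test.get(key)!r}")
```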

Test data management is the new snake oil. With various regulatory compliance requirements, data access has become a critical challenge – especially when the teams are not onshore. The TDM world has three parts: (a) test data management, which identifies the right data required and the governance around it, (b) synthetic data generation, and (c) data masking. Although there are multiple tools available in the market that can do synthetic data generation or masking, I believe they all have significant challenges when implemented in real-life scenarios. There are no plug-and-play test data management solutions for any domain or product. The data team needs to spend a significant amount of time configuring the solution and understanding the data structures and data relationships across the application estate before using them.
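
Here is a minimal sketch of parts (b) and (c) – synthetic data generation and masking – using only the Python standard library; the field names, record shape, and masking rule are illustrative assumptions, not a plug-and-play solution.

```python
# Deterministic masking plus naive synthetic data generation.
import hashlib
import random
import string

def mask_value(value: str, salt: str = "per-project-secret") -> str:
    """Deterministically pseudonymize a sensitive value so joins still line up."""
    return hashlib.sha256((salt + value).encode()).hexdigest()[:12]

def synthetic_account() -> dict:
    """Generate a structurally valid but entirely fictional account record."""
    return {
        "account_id": "".join(random.choices(string.digits, k=10)),
        "currency": random.choice(["USD", "EUR", "GBP", "INR"]),
        "balance": round(random.uniform(0, 100_000), 2),
    }

prod_row = {"customer_name": "Jane Example", "email": "jane@example.com"}
masked_row = {field: mask_value(value) for field, value in prod_row.items()}
print(masked_row)
print(synthetic_account())
```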

The future of TEM and TDM is intelligence based on context. For example, the environment needs of system integration testing (SIT) are different from those of system testing (ST): while ST will leverage more virtualization, SIT needs actual integrations. Systems will be able to configure themselves automatically to minimize the cost of environments and provide the right set of synthetic and masked data for the given context.

Automation as a platform, not a capability

Intelligent automation is a buzzword in the industry. It combines functional automation (testing, process) with AI and ML. Combined with test data and test environment automation, it enables automation to learn, think, act, and adapt to the environment in which it operates. Test automation – especially E2E test automation – is a key challenge because of multiple technology stacks and integrations, automation tool compatibility, etc. With NWOW, automation is a default activity and not an afterthought. Because automation is seen as a capability, a large pool of automation testers is now available. However, the independence of the various agile teams (scrums, pods, tribes) brings challenges in terms of the tools, automation frameworks, and techniques used.

While there is a surge of automation activity across teams, they may not build assets that are plug and play. The individual automation frameworks, tools, and utilities might be effective within their own applications or teams, but when we try to merge them for E2E tests, these automation assets may not be as effective or may not even integrate seamlessly. To avoid this, a common platform is required that brings the various automation tools and frameworks together to share information more easily. It also provides a jump start for any team to reuse the automation assets created across the enterprise instead of starting from scratch, and it accelerates the build of application-agnostic, intelligent, reusable automation assets.
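
Here is a minimal sketch of that platform idea: a thin common interface that lets assets built with different tools plug into a single E2E run. The adapter names, tool names, and result schema are illustrative assumptions.

```python
# A common adapter contract so heterogeneous automation assets can join
# one orchestrated E2E run and report results in a shared schema.
from abc import ABC, abstractmethod

class AutomationAdapter(ABC):
    """Contract every team's framework implements to join the platform."""

    @abstractmethod
    def run_suite(self, suite_id: str) -> dict:
        """Execute a suite and return a normalized result payload."""

class UiTestAdapter(AutomationAdapter):
    def run_suite(self, suite_id: str) -> dict:
        # ...invoke the team's UI automation framework here...
        return {"suite": suite_id, "tool": "ui-framework", "passed": 42, "failed": 1}

class ApiTestAdapter(AutomationAdapter):
    def run_suite(self, suite_id: str) -> dict:
        # ...invoke the team's API test runner here...
        return {"suite": suite_id, "tool": "api-runner", "passed": 120, "failed": 0}

def run_e2e(adapters: list[AutomationAdapter]) -> list[dict]:
    """One orchestration point; results share a schema regardless of tool."""
    return [adapter.run_suite("release-candidate") for adapter in adapters]

print(run_e2e([UiTestAdapter(), ApiTestAdapter()]))
```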

In summary, testing teams need to build platforms that bring the various tools, utilities, and AI/ML techniques together, along with lifecycle automation (business process, functional UI, API, test data and environment), to offer the best coverage across code and functionality in the most efficient manner. For all the test professionals reading this blog, I would suggest: although testing will become a highly engineered discipline in the new ways of working, do not lose sight of the domain. That will make you more effective and efficient.

In my next blog, I will touch upon the change in testing strategy with the advent of cloud and microservices architecture, APIfication, and the emergence of AI-enabled systems.