Innovations in Flexibility

As change in business and IT accelerates, effective testing using proven methods becomes more vital. It also becomes more complex. So, meaningful innovation in testing is not about changing what testing is but how testing is done: the organization, environments, processes and tools that enable it to achieve better products, closer alignment with business, shorter time to market and lower cost.
The ever-growing need for these outcomes means companies cannot afford not to invest in them. But such investment is risky because it is unwieldy: it is hard not to buy things you will not need along with those you will, and it takes a long time to find out which are which.
What is needed are ways to invest in quality as, when and in the quantities required, rather than trying to build your own quality factory, which is itself vulnerable to change. To use a further analogy: you want to pay for the electricity you use, not buy exploration rights to a possible oilfield. Difficult and dangerous capital expenditure is thus replaced by operational expenditure, making ROI much more likely and much faster.
Many strategies and offerings to achieve pay-as-you-go testing have emerged in recent years and been documented and analysed, critically as often as not, by PT. Their novelty is also their problem. They tend to emphasize ideas and technologies over trusted principles. Many are also narrow: designed to solve a particular technical or practical problem and hard to fit with wider organizational and business strategy.
Here’s my view on the most important categories, and what is needed to make them work.
The cloud for testing
Cloud computing is currently the most exciting thing in testing, as evidenced by frequent articles in industry publications like Professional Tester, sharing new ideas on using it to solve long-standing testing problems almost at a stroke. The potential seems limitless, with standout examples including instant creation and decommissioning of test environments with advanced simulation of difficult components such as mainframes and external services; SaaS, fully-integrated and connected versions of familiar, trusted tools; and effectively unlimited power on demand for heavy lifting tasks including load generation and results analysis.
Most of the concerns holding back uptake of cloud for operational business do not apply to testing, although there are still some caveats. It’s vital that the providers understand the special requirements of testing, particularly the need to keep tests repeatable and maintainable and assets secure. Successful, safe use of cloud requires appropriate architecture including privacy when needed; dependable, fully-supported cloud implementation of industry-standard tools already embedded in the test organization; and testing-grade low-level configuration management including full replicability.
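The on-demand heavy lifting described above, such as load generation, can be sketched in miniature. Everything in this example is illustrative: simulate_request stands in for a real virtual user's request, and the thread pool stands in for cloud workers that a real setup would provision and decommission through the provider's own API.

```python
# Minimal sketch of on-demand load generation: fan requests out across a
# pool of concurrent "virtual users", then summarize observed latencies.
# In a cloud setting the pool would be replaced by provisioned workers.
import time
from concurrent.futures import ThreadPoolExecutor


def simulate_request(request_id: int) -> float:
    """Stand-in for one virtual user's request; returns latency in seconds."""
    start = time.perf_counter()
    time.sleep(0.01)  # placeholder for a real network round-trip
    return time.perf_counter() - start


def run_load_test(total_requests: int, workers: int) -> dict:
    """Spread total_requests across 'workers' concurrent virtual users."""
    with ThreadPoolExecutor(max_workers=workers) as pool:
        latencies = list(pool.map(simulate_request, range(total_requests)))
    return {
        "requests": len(latencies),
        "mean_latency_s": sum(latencies) / len(latencies),
        "max_latency_s": max(latencies),
    }


summary = run_load_test(total_requests=50, workers=10)
print(summary)
```

Scaling the test up is then a matter of raising the worker count, which is exactly the knob that effectively unlimited cloud capacity makes cheap to turn.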
Testing centralization

Concentrating test functions within a single organizational entity, which then provides them to the rest of the organization as a service, can improve utilization. It also promotes standardization and efficiency because the service, rather than the sub-organizations that use it, increasingly selects and defines the characteristics of processes and artifacts. That in turn leads to more sharing of knowledge, as each user sub-organization benefits from what has been learned by and from the others, passed on via the centre. Accessible, interoperable and reusable intelligence is accumulated. This concept is also commonly called a ‘testing centre of excellence’ or ‘factory model testing’.
The definition of common processes and artifacts takes considerable effort, and getting it wrong could cause the exact opposites of these desirable effects. Also, there is no guarantee the service will always be busy: provided it achieves user buy-in it will certainly yield better utilization than distributed testing effort, but how much better depends on the unpredictable progress of projects.
For the same reason, there is risk of it becoming too busy: estimating the needed capacity in advance is an unenviable task.
Successful centralization needs the knowledge and guidance of people who specialize in it. The most difficult parts are not domain- or organization-specific and their careful, expert planning will speed progress and prevent disastrous mistakes.
It’s essential to achieve a high level of standardization of input, process and output. A central service that has to change the way it works for different users is worse than not centralizing. Using proven, generic industry standards such as TMap® and TPI NEXT® is the easiest and most effective way to achieve true harmonization.
The centralized testing service must be designed to scale up and down quickly as needed, using external resources and infrastructure.
Testing outsourcing
Given the challenges of centralization, it might seem a logical progression to use an ‘external testing centre of excellence’ instead. Much of the setup effort is bypassed, since the service is already available off the shelf, and scalability is the responsibility of the provider.
The dangers are well known and documented. Misunderstandings of the agreed inputs, outputs and processes can occur on both sides. Visibility (including of how scalability is actually being achieved) can be poor. The provider’s motivation can suffer if problems squeeze its margins. Finally, over time expertise is lost from the user organization: it does not learn from other users, because they are separate commercial enterprises, but instead becomes dependent on the supplier, creating vulnerability to any change of circumstances.
Successful and safe outsourcing needs a great deal of attention to service definition, with formal, strongly validated test bases (e.g. requirements). Collaboration must be embedded deeply at technical as well as business level, between the user’s and provider’s testers, so that in effect they become one organization. That is why the best testing service providers are those with a strong background in both test consultancy and domain knowledge.
In these ways truly agile working can be achieved. Because every individual has the necessary instant access to tools, assets and specialized expert assistance, all can switch seamlessly and instantly between activities as the project requires.
Innovation in testing is very challenging. All its initiatives have risky implications, sometimes unforeseen, in other areas. Progress that is sufficiently joined-up to be safe is beyond most test organizations and rightly so, because their testers should be using their expert domain, business and product knowledge for testing, not for burdensome strategic decision making and administration of human resources, tools, data and infrastructure.
The TPaaS (Testing Platform as a Service) model provides all the necessities identified in this article, ready to fit seamlessly with your existing processes: managed, testing-grade secure cloud infrastructure; full SaaS implementation of market-leading testing tools; and resourcing, personal collaboration and support.
To find out more, visit
