Testing strategies to build resilience in a chaotic world

Resiliency is the ability of a system to handle and recover from failures. Read on to learn how to build your software testing strategies around it.

“Resilience or hardiness is the ability to adapt to new circumstances when life presents the unpredictable” – Salvatore R. Maddi

In today’s world, system downtime is not an option. If a user can’t access an application even once, chances are that they will never use it again. Resiliency, which in simple terms is the ability of a system to gracefully handle and recover from failures, thus becomes critical. Testing resiliency ensures the system can absorb the impact of a problem while continuing to provide an acceptable level of service to the business. In other words, to test resiliency, introduce a defect and verify that the system recovers gracefully. This concept was originally introduced by Netflix in the Principles of Chaos Engineering.

Netflix has defined the discipline as follows: chaos engineering is the discipline of experimenting on a distributed system to build confidence in the system’s capability to withstand turbulent conditions in production. This idea, which has proven very successful for Netflix, is now being adopted across industries. Designing tests that inject failures and validate recovery requires that test professionals understand the architecture, design, and infrastructure of the systems under test.

To build your test strategies for resilient systems, you should:

  1. Conduct a failure mode analysis by reviewing the design of the system. In simple terms, this means identifying all the components and internal and external interfaces, and identifying potential failures at every point. Once failure points are identified, validate that there are alternatives at each one. For example, in a service-based architecture, an application that depends on a single critical service instance has a single point of failure. In this scenario, verify that if a request times out, an alternative is available.
  2. Validate data resiliency, i.e. that there is a mechanism for data to be available to applications if the system that originally hosted the data fails. Verify that the data backup process is either documented or automated. If automated, validate that the automated script backs up data correctly, maintaining integrity and schema.
  3. From an infrastructure standpoint, configure and test health probes for load balancing and traffic management. These probes detect unhealthy endpoints so that traffic can be rerouted, ensuring the system is not limited to a single region for deployment in case of latency issues.
  4. From an application standpoint, conduct fault injection tests for every application in your system. Scenarios include shutting down interfacing systems, deleting certificates, consuming system resources, and deleting data sources.
  5. Conduct critical tests in production with well-planned canary deployments. Validate that there is an automated rollback mechanism for code in production in case of failure.
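To make step 1 concrete, the timeout-with-fallback check can be sketched in a few lines of Python. This is a minimal illustration, not a production pattern: `primary` and `fallback` stand in for whatever service calls your architecture actually makes, and the names are hypothetical.

```python
def call_with_fallback(primary, fallback):
    """Call primary(); if it fails, recover by calling fallback().

    `primary` and `fallback` are zero-argument callables wrapping real
    service requests (names are illustrative, not a real API).
    """
    try:
        return primary()
    except Exception:
        # Primary instance failed (e.g. request timeout): use the alternative.
        return fallback()


def flaky_primary():
    # Simulated single point of failure: the critical instance times out.
    raise TimeoutError("primary service did not respond")


result = call_with_fallback(flaky_primary, lambda: "served from replica")
```

A resiliency test would deliberately trigger the timeout (as `flaky_primary` does here) and assert that the caller receives a valid response from the alternative rather than an error.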

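The fault injection tests in step 4 can be simulated at unit level before you attempt them against live infrastructure. A minimal sketch, assuming a hypothetical `CatalogService` component with one remote dependency: the test forces the dependency to fail and asserts that the application degrades gracefully instead of crashing.

```python
from unittest import mock


class CatalogService:
    """Toy application component with a remote dependency (illustrative)."""

    def __init__(self, remote):
        self.remote = remote

    def product_names(self):
        try:
            return self.remote.fetch()  # may fail when the interfacing system is down
        except ConnectionError:
            return []  # degrade gracefully: empty result, not an unhandled crash


# Fault injection: shut down the interfacing system by making it raise.
remote = mock.Mock()
remote.fetch.side_effect = ConnectionError("interfacing system is down")
service = CatalogService(remote)
degraded = service.product_names()
```

The same shape applies to the other scenarios listed above: replace the injected `ConnectionError` with a missing certificate, exhausted resources, or a deleted data source, and assert the system's documented recovery behaviour.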
Above all, the key to testing resiliency is continuous learning of the design architecture and infrastructure of systems. The more you learn, the more you understand the points of failure and the better you test.
