(1) Microservices Architecture — What is it?
Microservices represent a software architecture style in which complex applications are composed of small, independent processes communicating with each other using language-agnostic APIs. These services are small building blocks, highly de-coupled and focused on doing a small task, facilitating a modular approach to system-building.
Microservice architecture has emerged as a popular response to the shortcomings of traditional monolithic applications and has found its place within the DevOps lifecycle. Microservices come with their own set of complexities and concerns. In particular, testing microservices-based applications requires new approaches to confirm proper operation under heavy loads and continued availability after a resource failure.
(2) Testing in an Agile/DevOps lifecycle using microservices architecture
To get the most value from them, microservice tests need to be executed frequently: every build should run them, and a failed test should fail the build. Microservice testing should be configured in a continuous integration server (e.g., Jenkins, TeamCity, Bamboo) so that these services are continuously checked against changes in the code.
The following testing approaches provide optimal coverage for microservices-based applications within the Agile/DevOps lifecycle:
- Unit Testing: this covers basic tests for the API and within each service. Calls to other services should be mocked or stubbed.
- Integration Testing: this verifies the interactions between services. Before starting a service integration test, test versions of the dependent services that will respond to the given requests should be created.
- Performance Testing: a test system that reflects the production system should be created, including test versions of all the services under test. The volume simulated should be representative of actual usage.
- Exploratory testing: this ties all the services together from an end user perspective.
- Contract Testing: this tests the boundaries of external services to check the input and output of service calls and validates whether the service meets its contract expectations. Aggregating the results of all the consumer contract tests helps developers change a service without breaking its consumers.
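As an illustration of the contract-testing idea, here is a minimal sketch in Python. The service, field names, and responses are all hypothetical, and a real setup would use a dedicated contract-testing framework, but the core check is the same: the consumer records the fields and types it relies on, and a provider response is validated against them.

```python
# Minimal consumer-driven contract check (hypothetical "inventory" service).
# The consumer declares the fields and types it depends on; the test
# validates a provider response against that declaration.

EXPECTED_CONTRACT = {          # fields this consumer depends on
    "sku": str,
    "quantity": int,
    "in_stock": bool,
}

def meets_contract(response: dict, contract: dict) -> bool:
    """Return True if every contracted field is present with the right type."""
    return all(
        field in response and isinstance(response[field], expected_type)
        for field, expected_type in contract.items()
    )

# Simulated provider response: extra fields are fine, missing ones are not.
provider_response = {"sku": "A-42", "quantity": 3, "in_stock": True, "color": "red"}
print(meets_contract(provider_response, EXPECTED_CONTRACT))  # True
print(meets_contract({"sku": "A-42"}, EXPECTED_CONTRACT))    # False
```

Because the provider only needs to satisfy the fields its consumers actually use, adding new fields never breaks this check, while removing or retyping a contracted field fails it immediately.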
The figure below depicts how microservices are unit tested. Microservice testing encompasses all tests related to one microservice in isolation. The purpose of these tests is to verify the functional correctness of all the components that do not require external dependencies.
To enable testing in isolation, we typically use mock components in place of the adapters and in-memory data sources for the repositories, configured under a separate profile. Tests are executed using the same technology that incoming requests would use (for example: http for a RESTful microservice).
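The mocking approach described above can be sketched as follows. All class and method names here are hypothetical: an order service's outbound payment adapter is replaced with a mock so the service logic is exercised without any external dependency.

```python
# Testing one microservice in isolation: the outbound "payment gateway"
# adapter is replaced with a mock, so no network call is ever made.
from unittest.mock import Mock

class OrderService:
    def __init__(self, payment_gateway):
        self.payment_gateway = payment_gateway   # injected adapter

    def place_order(self, order_id, amount):
        result = self.payment_gateway.charge(order_id, amount)
        return "confirmed" if result["status"] == "ok" else "rejected"

# The mock stands in for the real adapter under the test profile.
gateway = Mock()
gateway.charge.return_value = {"status": "ok"}

service = OrderService(gateway)
status = service.place_order("o-1", 19.99)
print(status)                                    # confirmed
gateway.charge.assert_called_once_with("o-1", 19.99)  # adapter was called correctly
```

The same pattern applies to repositories: injecting an in-memory implementation under a separate test profile keeps the test fast and deterministic.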
The figure above shows testing dependencies between microservices, which can be addressed with service virtualization. Service virtualization tests individual services without waiting for the deployment of other dependent services. Including latency between microservices in these tests helps to achieve realistic results. It is important to create and run component tests on each core microservice and include them in the build process.
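Service virtualization can be sketched with a stub HTTP server that stands in for a dependent service and injects artificial latency. The endpoint, payload, and latency value below are hypothetical; real projects would typically use a dedicated virtualization tool, but the mechanism is the same.

```python
# A stub HTTP server virtualizes a dependent service and adds latency,
# so tests against it reflect realistic network behaviour.
import http.server
import json
import threading
import time
import urllib.request

SIMULATED_LATENCY = 0.05  # seconds of added delay per request (hypothetical)

class VirtualService(http.server.BaseHTTPRequestHandler):
    def do_GET(self):
        time.sleep(SIMULATED_LATENCY)            # inject latency
        body = json.dumps({"status": "ok"}).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):                # keep test output quiet
        pass

server = http.server.HTTPServer(("127.0.0.1", 0), VirtualService)
threading.Thread(target=server.serve_forever, daemon=True).start()

# The service under test would call this URL exactly as it would call
# the real dependency.
url = f"http://127.0.0.1:{server.server_port}/inventory"
start = time.perf_counter()
with urllib.request.urlopen(url) as resp:
    payload = json.load(resp)
elapsed = time.perf_counter() - start

print(payload["status"], elapsed >= SIMULATED_LATENCY)
server.shutdown()
```

Because the stub binds to an ephemeral port and runs in-process, such component tests can run on every build without any deployed dependencies.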
Using a dashboard that tracks microservice performance between builds is also recommended, as it allows easy detection of performance regressions. Testing microservice performance from the UI is also required to guarantee a high-quality user experience.
(3) Performance testing of microservices
Microservices performance is crucial, so performance tests should be executed at the individual-service level rather than only at the application level. We need to ensure that performance tests can achieve the following:
- Be as realistic as possible and use real datasets
- Have load tests that represent the anticipated demand
- Be as close to a realistic production setup as possible
- Be tested from the cloud using load testing tools
Completing performance testing of microservices provides feedback on how well an application performs when a high number of calls are made to microservices or a large amount of data is transferred on the network between individual services.
Teams should apply load-testing tools to individual microservices to capture API transactions, scale up the load, and monitor the infrastructure.
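A minimal load-test sketch along these lines is shown below. The simulated call and the load figures are hypothetical, and a real test would drive an actual endpoint with a dedicated tool, but the output, throughput and latency percentiles, is the kind of feedback performance testing should produce.

```python
# Fire concurrent calls at one microservice transaction, then report
# throughput and latency percentiles.
import statistics
import time
from concurrent.futures import ThreadPoolExecutor

def call_service():
    """Stand-in for one API transaction against a microservice."""
    start = time.perf_counter()
    time.sleep(0.01)                  # simulated service response time
    return time.perf_counter() - start

CONCURRENT_USERS, TOTAL_CALLS = 20, 200   # hypothetical load profile
with ThreadPoolExecutor(max_workers=CONCURRENT_USERS) as pool:
    t0 = time.perf_counter()
    latencies = list(pool.map(lambda _: call_service(), range(TOTAL_CALLS)))
    wall_time = time.perf_counter() - t0

print(f"throughput: {TOTAL_CALLS / wall_time:.0f} calls/s")
print(f"p50: {statistics.median(latencies) * 1000:.1f} ms, "
      f"p95: {statistics.quantiles(latencies, n=20)[-1] * 1000:.1f} ms")
```

Capturing these numbers on every build, and feeding them to the dashboard mentioned earlier, is what turns a one-off load test into regression detection.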
(4) Monitoring microservices
Microservice architecture introduces a distributed set of services which, compared with a monolithic design, increases the possibility of failure at each individual service level. Any given microservice can fail due to network issues or the unavailability of underlying resources. An unavailable or unresponsive microservice should not bring the whole microservices-based application down. As a result, microservices must be fault tolerant and able to recover from these potential failures.
It is important to be able to detect these failures in real time, restore the services automatically, and understand the dependencies that exist between microservices. We need to ensure that these services run and perform within a defined set of standards.
Monitoring is a critical piece of microservice control. As the software’s complexity increases, our understanding of its performance and ability to troubleshoot its problems decreases. Given the dramatic changes to software delivery, monitoring needs an overhaul to perform well within a microservice environment. The following monitoring steps are recommended to maintain a stable microservice environment:
- Monitor containers and their contents
- Set alerts on service performance rather than container performance
- Monitor services that are elastic and multi-locational
- Keep an eye on APIs
- Map monitoring and organizational structure
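The second principle, alerting on service performance rather than container performance, can be sketched as follows. The sample data and SLO threshold are hypothetical: latency samples are aggregated per logical service across all of its containers, and the alert is raised on the service, not on any one container.

```python
# Alert on service-level performance: aggregate per-container latency
# samples up to the logical service, then check the service against its SLO.
from statistics import mean

# Per-container latency samples (ms), grouped by owning service (hypothetical).
samples = {
    "orders":    {"orders-c1": [40, 55, 62], "orders-c2": [48, 51]},
    "inventory": {"inv-c1": [210, 260], "inv-c2": [190, 240]},
}
LATENCY_SLO_MS = 150   # hypothetical service-level objective

def service_alerts(samples, slo_ms):
    """Return the services whose aggregate mean latency breaches the SLO."""
    alerts = []
    for service, containers in samples.items():
        all_points = [p for pts in containers.values() for p in pts]
        if mean(all_points) > slo_ms:
            alerts.append(service)
    return alerts

print(service_alerts(samples, LATENCY_SLO_MS))  # ['inventory']
```

Note that no single container in "inventory" is treated as the problem; the alert fires because the service as a whole misses its objective, which stays meaningful even as containers scale up, down, or move.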
Leveraging these five principles will help address both the technological and organizational changes associated with microservices.
Testing in microservices architecture can be more challenging than in a traditional, monolithic architecture. When combined with continuous integration and deployment, it grows even more complex. It’s important to understand the layers of tests and how they differ from each other. Only a combination of various test approaches will result in product quality confidence.
Main Author: Renu Rajani, Vice President, Capgemini Technology Service I P Ltd, firstname.lastname@example.org
Contributing Author: Ranganath Gomatham, Solutioning Program Manager, Capgemini I P Ltd, email@example.com