Planting a DevOps garden

More and more companies are concluding that DevOps is the way to deliver software faster without compromising quality. And the grass is always greener on the DevOps side, right?

Any DevOps transition involves implementing new tools: a build server here, a code analysis tool there, not to mention an automated end-to-end test set. Preferably all of this is shown on a monitor where the graphs are as colorful as a spring garden. But once this DevOps garden has been planted, you can’t just sit back and watch it grow.

Trouble in paradise

I’ve seen the same test automation challenges in multiple DevOps gardens. A lot of effort goes into automating an end-to-end test set using a UI interaction tool like Selenium WebDriver or Protractor. Dependencies on test data and browsers, for example, make these tests slow and brittle. As a result, they often fail because of instability or defects in the tests themselves rather than the real defects they are designed to find.

Within my current team we had a test set of 3,500 tests (500 tests on 7 browsers), of which 500 failed regularly. Nobody bothered to look at the failing tests, and I have unfortunately seen real defects go unnoticed because of it. When this happens the test set is useless, yielding no return on the (often large) investment in test automation. A useless automated test set can ruin your DevOps garden and compromise the quality of the applications it grows.

In a DevOps environment there is always something new to implement, which leaves little time for routine maintenance like fixing tests. Besides, maintenance takes a lot of work, and more often than not it falls to the tester alone. But if you want your automated test set to deliver actual value, you had better reach for your shovel, because you have some DevOps gardening to do.

Care for your tests

Fix your tests with the whole team to create a sense of shared ownership. Getting the whole team engaged in quality assurance can be challenging, but it is potentially the best way to ensure quality.

If you have numerous unstable tests, you may want to consider the following options:

  • Split up your test set (by test type or by risk). This improves focus and can also decrease the risk of instability, since each test run is shorter.
  • If the same tests are failing repeatedly, fix them. First, take a critical look at each test; maybe it is no longer relevant, or the functionality can be tested more efficiently. Preferably, test as much logic as possible at the unit and integration levels (see also Mike Cohn’s test automation pyramid). If the test does belong at the end-to-end level, fix any glitches in it or find another way to test the functionality.
  • If different tests are failing each time, analyze the causes. You may have connection issues with your test server or with the servers your tools use (Sauce Labs or BrowserStack, for example). Building in a retry is one way of reducing this kind of instability (see the sketch after this list).
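To make the retry idea concrete, here is a minimal sketch in TypeScript. The retryFlaky helper, the attempt count and the delay are hypothetical and not tied to any particular framework; it simply reruns an async test step a few times before letting the final failure propagate.

    // Minimal retry sketch (hypothetical helper, not part of any framework).
    // It reruns an async step a few times so that one-off connection glitches
    // do not fail the whole run; the last error is thrown if every attempt fails.
    async function retryFlaky<T>(
      step: () => Promise<T>,
      attempts = 3,
      delayMs = 1000
    ): Promise<T> {
      let lastError: unknown;
      for (let attempt = 1; attempt <= attempts; attempt++) {
        try {
          return await step();
        } catch (error) {
          lastError = error;
          if (attempt < attempts) {
            console.warn(`Attempt ${attempt} of ${attempts} failed, retrying...`);
            await new Promise((resolve) => setTimeout(resolve, delayMs));
          }
        }
      }
      throw lastError;
    }

    // Example usage, assuming a Protractor-style global `browser` object:
    // await retryFlaky(() => browser.get('https://example.com/login'));

Wrap only the genuinely flaky steps, such as opening a page on a remote browser grid; retrying everything would just hide real defects.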

Whatever you do, fight for your tests and don’t give up; there is always a solution. Within my current team the garden was quite a battlefield, but our test set is now delivering value again. We split up our 3,500 tests into functional tests running on one browser and visual tests running on all browsers, resulting in a total of around 700 tests. We then systematically picked up the failing tests and removed the irrelevant ones, but found that some tests still failed on every run. Building in a retry proved to be a great solution for that.
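As an illustration of such a split, here is roughly how it could look in a Protractor setup. The file name, spec paths, Selenium address and browser choices below are assumptions for the sketch, not our actual configuration: the smaller visual suite runs against every supported browser via multiCapabilities, while the functional suite would live in a separate config with a single browser.

    // visual.conf.ts (hypothetical): the small visual suite runs on every supported browser.
    // The functional suite would sit in a separate config that uses a single
    // `capabilities: { browserName: 'chrome' }` entry instead of multiCapabilities.
    import { Config } from 'protractor';

    export const config: Config = {
      framework: 'jasmine',
      seleniumAddress: 'http://localhost:4444/wd/hub',  // assumed Selenium Grid address
      specs: ['specs/visual/**/*.spec.js'],             // assumed spec location
      multiCapabilities: [
        { browserName: 'chrome' },
        { browserName: 'firefox' },
        { browserName: 'safari' },
      ],
    };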

Once your test set is healthy, develop the discipline to keep it that way. Use the colorful spring garden we call monitoring to watch for failing tests and to create transparency in your environment. This will allow you to identify real defects, so that the automated test set delivers actual value, resulting in a flourishing DevOps garden!