As testers we have long wrestled with the tension between the agile approach to projects and our use of exploratory testing in a commercial project environment. In that environment we need to provide test scripts as a deliverable to our customer and to show executed test runs, giving an audit trail that demonstrates to our customers and their auditors that test coverage has been achieved.
Measurements from projects that I’ve worked on over the last two or three years show that we spend approximately 40%-50% of our test effort on designing and writing test scripts. As our developers make use of more sophisticated development tools and reduce their development times, testing takes a bigger and bigger proportion of the overall effort – or, in practice, we are asked to reduce our testing effort by the same degree. To make test execution faster we would need fewer defects in the applications, leading to less retesting – and there is little evidence that this is the case. Even where the solution is a package implementation (and there is therefore little custom code to contain coding errors) we still see similar levels of defects in translating the requirements into the package configuration. Automation can, of course, reduce execution time in the longer term, but when we look at the cost of a project from start to implementation the benefits of automation have not yet been realised – and indeed the investment costs of automation would inflate test development and execution within this time window.
So the obvious area to look for improvement becomes test design and scripting, and I’ve tried a number of techniques over the years which have had some small success:

  • Test design templates to give more consistency and a more rigorous approach
  • Introducing a two-stage design process, with a review after the high-level design, to minimise wasted effort on incorrect test designs
  • Designing high-level test cases which document the overall purpose of the test and its expected results, then documenting the detailed instructions for each test step when the test is run for the first time

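The third technique can be sketched as a simple data structure: the high-level case carries only its purpose and expected result at design time, and the detailed steps are attached the first time the test is executed. This is a hypothetical illustration of the idea, not any particular tool’s or template’s format – all the names here are invented:

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class TestCase:
    """High-level test case: purpose and expected result are written at
    design time; detailed steps are filled in during the first run."""
    name: str
    purpose: str
    expected_result: str
    steps: List[str] = field(default_factory=list)  # empty until first execution

    def record_step(self, instruction: str) -> None:
        """Capture one detailed instruction while executing the test."""
        self.steps.append(instruction)

    def is_rerunnable(self) -> bool:
        """After the first run the case carries a step-by-step script."""
        return bool(self.steps)

# Design time: only the high-level description exists.
case = TestCase(
    name="Create customer order",
    purpose="Verify an order can be raised for an existing customer",
    expected_result="Order saved and confirmation number displayed",
)

# First execution: the tester documents each step as they go.
case.record_step("Open the Orders form and select customer C1001")
case.record_step("Enter quantity 5 for product P-200 and press Save")
```

The point of the split is that the expensive step-level detail is only written once, and only for tests that actually get run.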
The last of these is the key area that the latest generation of manual test tools is beginning to address. Some of the more recent tools can record the tester’s activity against the GUI in a format that resembles a manual test script rather than a piece of code. These scripts clearly show which form the user has entered, which fields they have navigated to, what data values they entered and which buttons they pressed – capturing an image of the form at the same time. This allows the tester to work from a high-level description of the test and perform exploratory testing while the tool captures a script of the tests performed; the tester can then follow that script to rerun the same test in a second test cycle. If these tools live up to their promise then we may be able to reduce our scripting time significantly and begin to work in a more agile way without losing repeatability and auditability in our testing. I’m planning to do some evaluation of tools from Original Software soon – so watch this space!
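As a rough sketch of the kind of record such a tool might capture for each action – this is my guess at the shape, not Original Software’s actual format, and every field name is invented – each entry pairs the form, field, action and value with a screenshot, and the whole recording can be rendered as a readable manual script:

```python
from dataclasses import dataclass
from typing import List, Optional

@dataclass
class RecordedAction:
    """One captured tester action: the form, the field navigated to,
    the value entered or button pressed, plus a screenshot reference."""
    form: str
    field_name: Optional[str]
    action: str            # e.g. "enter" or "click"
    value: Optional[str]
    screenshot: str        # path to the captured image of the form

def to_manual_script(actions: List[RecordedAction]) -> List[str]:
    """Render the recorded actions as human-readable script lines that
    a tester can follow to rerun the test in a later cycle."""
    lines = []
    for a in actions:
        if a.action == "enter":
            lines.append(f'On "{a.form}", enter "{a.value}" in "{a.field_name}"')
        elif a.action == "click":
            lines.append(f'On "{a.form}", press the "{a.field_name}" button')
    return lines

# An exploratory session produces the recording as a by-product.
recording = [
    RecordedAction("Order Entry", "Customer", "enter", "C1001", "img/step1.png"),
    RecordedAction("Order Entry", "Save", "click", None, "img/step2.png"),
]
script = to_manual_script(recording)
```

The rendered lines are the repeatable, auditable artefact: the exploratory session produces them for free, and a second test cycle simply follows them.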