Testing is a funny topic to discuss. On one hand it's essential, and professional testers are passionate about methods, good practices and so on; on the other, it's a topic many would prefer to ignore. I find I get very interested when I meet a real expert who raises points I simply hadn't thought about before. This has just happened twice: first I got a preview copy of the World Quality Report, more of which later, and second I heard a real expert on testing give her views.
The tester in question is Google's Chief Test Engineer, Goranka Bjedov, who made the case at the recent STAREAST 2010 testing conference that we are 'heading towards developing software without testing for quality and this practice may not be a bad thing'. That certainly got my attention.

Her point is of course somewhat deeper than the sound bite, but what makes it really interesting is that Google has as much experience as anyone, probably more, of building and operating web-based delivery, with its own form of virtualisation as the underlying platform. Goranka breaks testing down into two main categories: productivity and quality. Her view is that productivity is about machines and their ability to process the code; as such it can be highly automated, with the resulting failures clearly defined and easy to fix. In the same way, performance can be refined, and in web services that's a critical dimension. The outcome is that productivity testing should ensure systems don't fail when new code is processed, and that's important.
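To make that distinction concrete, here is a minimal sketch, my own illustration rather than anything from Google's tooling, of what machine-driven productivity checks look like: the build_response function and the latency budget are hypothetical, but the point is that a failure is precisely defined and points straight at the offending code.

```python
import time
import unittest


# Hypothetical function standing in for "new code" under test.
def build_response(payload: dict) -> dict:
    return {"status": "ok", "echo": payload}


class ProductivityChecks(unittest.TestCase):
    """Machine-checkable tests: failures are unambiguous and easy to localise."""

    def test_response_contract(self):
        # A broken build fails here with a clear, reproducible assertion error.
        result = build_response({"id": 42})
        self.assertEqual(result["status"], "ok")
        self.assertEqual(result["echo"]["id"], 42)

    def test_response_latency_budget(self):
        # A crude performance gate: flag regressions beyond an agreed budget.
        start = time.perf_counter()
        build_response({"id": 42})
        elapsed = time.perf_counter() - start
        self.assertLess(elapsed, 0.05, "latency budget exceeded")


if __name__ == "__main__":
    unittest.main()
```

Nothing here asks whether the function is actually useful or pleasant to use; that is exactly the gap quality testing is meant to fill.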
By contrast, Goranka's view is that quality testing is about how human users use and perceive the functionality, and that this is complex and hard to test, let alone automate, or even to fix once problems are found. The challenge is not just the cost, but the time it takes to test to the same level of quality expected of custom software developed for a traditional enterprise application. The question Goranka poses is whether that is acceptable in the new world of quick-to-deploy services, where the time and cost of development is very low, but the complexity of all the ways users could combine the resulting services is very high.
There is one other point that comes into this around 'virtualisation', or at least the Google way of obtaining the same effect from its massed ranks of base-level processor boxes. Part of testing normally deals with system failure and confirms that graceful recovery is possible. Google skips this, relying instead on its virtualised hardware to ensure there is always a live system. Taken all together, it's a fascinating argument for rethinking testing for cloud-based services, a point that has concerned me in the past as I watch the sheer volume of quick-build software rising in these new environments. If you want to know more about the 'Google way', or its thinking on the topic, the next Google Test Automation Conference, GTAC, is being held later this year in India with the theme 'Test to Testability'.
For now, conventional software testing is going to remain the more important concern for most IT departments, and in particular CIOs need to know how good the quality, and yes I do mean quality in this case, of their software is against the norm. This brings me to my second topic: the World Quality Report, 2010 edition. As the report itself comments, it comes at a moment when many pressures have converged, ranging from recessionary budget cuts to new development techniques (Agile, for example, is reported as playing a larger role). The differences between industry sectors show up clearly as well. It's well worth a read and, together with Google's views on testing the next generation of software, worth spending some thinking time on how testing should be addressed in your own plans.