On September 8th I had the good fortune to watch a World Cup rugby match in Lens, France, between England and the United States. England, the former World Champions, were expected to treat the US Eagles with due respect but still beat them comfortably. This didn’t happen.
At least, not entirely. England did win 28-3. But they played badly. England were lambasted for being flat and pedestrian, for lacking adventure or ambition, and for plenty of other things besides.
What is curious is that the coach, Brian Ashton, didn’t seem to know what went wrong. “I can’t put my finger on it”, he said. Worrying times for England, then. So what’s the real cause of this malaise? With all the mind-games going on post-match, it’s hard to tell. The pundits don’t agree on anything, other than that it was a poor performance that didn’t live up to expectations. Most agree that England probably couldn’t turn it around in time for their next match, against South Africa…indeed, South Africa won 36-0. Ouch.
OK, but what’s all this got to do with testing? Well, as I was watching the game and talking about it afterwards, I kept seeing parallels with our Testing discipline. I kept hearing the same language being used around me that I use daily in my profession, but we were talking about something entirely different.
Naturally I spoke to fans of both sides, and to plenty of French fans, but I was also lucky enough to talk to a journalist who was at the post-match debriefing. He gave me some interesting insights into what goes on there. Allow me to share a few of his, and a few of mine, with you.
Breaking testing down into its most basic component parts, you have planning, preparation, specification, execution and evaluation. The amount of planning, preparation and specification is more or less proportional to the perceived risks (business, financial, technical, PR, etc.). The amount of execution is in practice almost always down to how much time is available, how much you’re able to defend against other parties before you in the SLC ‘taking’ time from you, and how much you’re able to influence decisions on deadlines after you in the SLC – notably acceptance and go-live. The evaluation is where you look at your test team’s performance and learn from it for the next round of testing or the next project. This phase is often skipped or done insufficiently, as the delivery of the final (operational) test report signifies to most people the end of the need to spend any more time or money on testing.
Now the rugby perspective (note they use almost the same terminology): breaking rugby down into its most basic component parts, you have planning, preparation, specification, execution and evaluation. Yes, also specification! These days it’s all about the game plan: sitting down with the team in a room, discussing (designing) the way you want to play against a particular opponent, then going out onto the field and practising it. After which you hope that you can execute the game plan well on match day. The amount of planning, preparation and specification is more or less proportional to the perceived risks (the experience of the other team, whether the opponents’ style is attacking or defensive, etc.). The quality of execution – how well you play – is in practice almost always down to how much time you have the ball in your hands, how much you’re able to defend against the opponent ‘taking’ the ball from you, and how much you’re able to influence decisions – notably the referee’s. The evaluation is where you look at your team’s performance and learn from it for the next encounters.
This is something that is never economised away or skipped over. The hours of video footage that are analysed afterwards are mind-boggling. The rate of professionalisation within the game of rugby – a process that started 12 years ago – is frightening. This is because the evaluation activity is one of the few weapons a rugby team now has to allow it to develop. Coaches in many sports, including hockey and swimming, also follow this philosophy.
Perhaps we should take a leaf out of their book: actually take the final phase in testing seriously and allow ourselves to better industrialise the test process, for example?
As in all sports, expectations are rife. Either high expectations, because you’re a devoted fan and your side has been doing well lately; or, on the other hand, low expectations, if you are an England fan. Of anything. (Having said that, England did also win at rugby, football and cricket on the same day.)
Expectations are incredibly important in testing too. Software testing is all about the acceptance of a software product, based on the expectations of the stakeholders…who hopefully already are, or will become, fans of the delivered solution.
Testing is an activity geared to providing the basis for acceptance. Indeed, one of the two magic formulas we use is derived from the consultancy world (Maier’s law): E = Q x A [Effect = Quality x Acceptance]. The quality of a solution might be outstanding, but if it doesn’t meet the expectations of stakeholders, it isn’t accepted (easily or quickly), which can become costly. So managing expectations is the flip side of testing: we don’t just find bugs, we provide the intelligence that proves the degree to which a solution meets the various expectations.
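To make the multiplicative nature of the formula concrete, here is a minimal, purely illustrative sketch; the function name and the 0-to-1 scales are my own assumptions for illustration, not part of Maier’s law as quoted above:

```python
def effect(quality: float, acceptance: float) -> float:
    """Maier's law, E = Q x A, with both factors on an illustrative 0.0-1.0 scale."""
    return quality * acceptance

# A technically excellent solution that stakeholders barely accept delivers little effect...
print(effect(quality=0.9, acceptance=0.1))  # 0.09
# ...while a merely decent solution that stakeholders embrace delivers far more.
print(effect(quality=0.6, acceptance=0.8))  # 0.48
```

The point of the multiplication is that neither factor can compensate for the other: if acceptance is close to zero, the effect is close to zero no matter how high the quality.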
England neither managed expectations well before the match against the US Eagles, nor delivered what we all expected of them during the game. So, despite a comfortable ‘numerical’ win (28-3), we’re all dissatisfied.
Each error in the game (handling error, unforced error or foul) gets immediately reported on by the commentator at the time and then analysed afterwards by pundits, journalists and, of course, the fans. Then there are the many hours of video footage examined by the various coaches, looking for anything that could gain them that vital extra 2% improvement. The match commissioner also examines video footage for any incident the referee should have seen that would potentially have warranted a red card; the result is a citing and a hearing. So even the referee gets to learn and improve after the fact.
When we register findings as testers, they are assigned levels of severity and urgency to enable the development team to repair the most serious and urgent ones first. We ‘drive to resolution’ the bugs we find, test again and are generally satisfied that we’ve put things right.
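As a purely illustrative aside (none of this comes from a real defect-tracking tool; the field names and scales are my own assumptions), that triage of findings might look something like this:

```python
from dataclasses import dataclass

@dataclass
class Finding:
    summary: str
    severity: int  # impact on the solution (1 = critical, 4 = cosmetic) - assumed scale
    urgency: int   # how soon it blocks progress (1 = immediate, 4 = can wait) - assumed scale

def triage(findings: list[Finding]) -> list[Finding]:
    """Order findings so the most severe and most urgent are driven to resolution first."""
    return sorted(findings, key=lambda f: (f.severity, f.urgency))

backlog = [
    Finding("Typo on help page", severity=4, urgency=4),
    Finding("Payment fails for new accounts", severity=1, urgency=1),
    Finding("Report totals off by a rounding error", severity=2, urgency=3),
]
for finding in triage(backlog):
    print(finding.summary)
```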
With rugby it’s not that simple. England got it all wrong against the US Eagles, but straight after the match they couldn’t see where they had gone wrong. Would they ‘drive to resolution’ all the findings from their own post-match analysis? Whatever the case may be, England couldn’t afford to do the same against their following opponents. They needed to find ‘that extra 2%’. In more than one area. Fortunately they did.
“That extra 2%” is a concept as yet unknown in testing, or for that matter in software engineering as far as I’m aware. Perhaps it is something else we can learn from the rugby world? I believe this sort of approach may well lead to the next generation of testing: “Testing 2.0”. I expect it will…I mean, I hope it will.