It must have been about twenty-five years ago. I was working for a large international consultancy firm. One of the reliable ones. The ones you would think had everything worked out. But I guess that was merely a product of my imagination.
At one point, two colleagues and I were working on an estimate for a bid on a software development project. Even though this was a long time ago, the three of us together had quite a few years of experience in the field. So you would reckon we could come up with a decent estimate. And we did. In fact, we created two estimates. One by counting screens, based on a point scale and a real-life velocity. A very popular and suitable technique, as we were building desktop applications. Alongside that, we created a work breakdown structure. Using both techniques we estimated ALL the work in the project, not just coding, or coding and testing. Not much to our surprise, the results of both estimates were comparable. Happy and confident with the result, we presented our estimates to the account manager in charge.
It was at this particular point in time, at that very instant, that my faith and trust in our industry vanished into thin air, never to return. Without more than glancing at the work we had invested in our estimates, the account manager simply looked at the end result and added 20% to the overall price for the project. For contingency, he claimed.
Up to this defining moment in my career I had never even heard the word. Contingency. And to this day I shudder to think what the impact of that little word is on this industry, and probably others too. To sum it up: every confident estimate can be killed by (account) managers adding seemingly random percentages. Just because they don't trust your estimates. Or because they still remember the court cases from previous projects. Frightening altogether.
Needless to say we lost the bid. Our competitors were less expensive.
Now I wouldn't write about this unfortunate event if I hadn't encountered a similar situation only a few weeks ago. You might imagine we've learned a thing or two in the twenty-five years since. Alas, we didn't. And it's industry-wide, I assume.
With a small team we had worked on an estimate for re-engineering a client/server desktop application into a .NET web application. We estimated based on smart use case points, a technique that we have used and refined in agile projects over the years. A reliable estimation technique. The estimate came to just under 500 smart use case points, a measure of the size and complexity of the system. Given that we have executed multiple similar projects successfully, we were able to establish that it takes about 8 hours per smart use case point to realize the software. And yes, we have actually gathered metrics! These 8 hours include ALL work on the project: project management, documentation, testing, setting up the development environment and so on. A simple calculation tells us that we would need 4,000 hours to do the project. So this is what we handed to the responsible account manager.
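The arithmetic is simple enough to sketch. The figures below are the ones from the text (just under 500 smart use case points, 8 hours per point); the helper function itself is a hypothetical illustration, not an actual tool we used:

```python
# Sketch of the estimate calculation described above. The numbers are the
# ones quoted in the text; the function is purely illustrative.

def estimate_hours(smart_use_case_points: int, hours_per_point: int) -> int:
    """Total project hours: size in points times the measured hours per point."""
    return smart_use_case_points * hours_per_point

total = estimate_hours(500, 8)  # 500 points at 8 hours per point
print(total)  # 4000 hours, covering ALL work on the project
```

The key point is that the 8 hours per point is not a guess but a rate measured on comparable, completed projects.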
Without so much as the blink of an eye, an email came back from the account manager stating that he had added 20% to our estimate, and that he would communicate this new number to the client. Leaving us, of course, with a lot of questions.
So the interesting question is: where does that 20% come from? You would expect it to originate from years and years of evaluating software development projects, comparing the original estimates in those projects with the final outcomes, given that requirements change, teams change and technology changes. But to be quite honest, I suspect this is unfortunately not the case. There are only a few organizations that actually measure like this, I assume. And even if he had done that, would it be exactly 20%? Why not 17.6%? Or 23.17%? Or maybe the account manager knows his statistics. Statistics claim that developers estimate 20% too optimistically, because we developers are optimists. But this was a team estimate, covering all work on the smart use cases on the backlog.
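If an organization actually kept the metrics this paragraph asks about, a contingency percentage could at least be derived from data rather than gut feeling. A minimal sketch, with entirely invented historical (estimated, actual) hour pairs:

```python
# Hypothetical sketch: derive a contingency factor from historical projects
# instead of picking 20% out of thin air. The hour pairs below are invented
# purely for illustration.

historical = [
    (4000, 4400),  # (estimated hours, actual hours)
    (2500, 2450),
    (6000, 6900),
]

# Overrun ratio per project: actual divided by estimated.
ratios = [actual / estimated for estimated, actual in historical]

# A simple calibration: the mean overrun across past projects.
mean_ratio = sum(ratios) / len(ratios)
contingency_pct = (mean_ratio - 1) * 100

print(f"mean overrun ratio: {mean_ratio:.3f}")
print(f"data-based contingency: {contingency_pct:.1f}%")
```

With numbers like these the contingency comes out near 8%, not a round 20%, which is exactly the point: a measured figure rarely lands on a tidy rule of thumb.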
To cut to the chase: if a team estimates the amount of work in a project, especially on a relative scale, then as an account manager this is what you should believe, as it is the best estimate available. This is the yesterday's weather principle. The best way to predict today's weather is to look at the weather from the day before. No more, no less.
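The yesterday's weather principle translates almost literally into code: the forecast for the next iteration is simply the most recent measured outcome, nothing more. A toy sketch, with invented velocity numbers:

```python
# Toy illustration of the yesterday's weather principle: forecast the next
# iteration from the last real measurement. The velocities are invented.

velocities = [21, 19, 24]  # points actually completed in past iterations

def yesterdays_weather(history: list) -> int:
    """The forecast equals the most recent measured velocity."""
    return history[-1]

print(yesterdays_weather(velocities))  # forecasts 24 points
```

No adjustment factor appears anywhere; the last measurement is the prediction.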
Adding 20% – or 10%, or 15% – just as a rule of thumb is not based on any real-life experience. In my opinion, such actions show that the account manager puts no trust in his own people, in his own organization. It is like saying that although it was 10 degrees yesterday, we'll just claim it will be 7 degrees today, without having looked at any measurement at all.
So what's the message here? Please, dear account managers, put some trust in the people who try to create an estimate as accurate as they can. Don't add 20%, or 10% for that matter, based on nothing but gut feeling. And more importantly, dear software development organizations, start measuring your projects! Gather metrics, so future estimates become much more accurate and realistic, and future proposals and projects end up less off target than so many projects I witness. And along the way, restore some of the trust that so many people have lost in our industry. Let's learn.
This post is published in SDN Magazine.