All Estimates Are Still Lies
Hofstadter’s Law: It always takes longer than you expect, even when you take into account Hofstadter’s Law.
- Douglas Hofstadter, Gödel, Escher, Bach: An Eternal Golden Braid
If you want to watch someone lie to you, ask a developer how long it’s going to take to build a feature or fix a bug.
Ask the question, “How long will it take you to ship the widget?” The response you get will vary with the maturity of the developer you’re asking.
Junior developer:
I’ll have the widget finished before the end of the day.
Mid-level developer:
I’ll have that widget done in three days at most.
Senior developer:
It should take me between two days and a week to ship the widget, provided we can nail down requirements first and I don’t run into any complications.
When you start a career as a programmer, you’re eager to please. You want to make guarantees about your work and follow through on them. You want to be seen as productive, reliable and worthy of your job.
You quickly have that beaten out of you. Software, like much creative knowledge-work, is impossible to make scheduling predictions about.
No one wants to break the promises they make, so they stop making them. If pushed, developers will pad estimates, give you a vague range, or lie. They’ll tell you what you want to hear and the work will still take as long as it takes.
They do this because the person asking doesn’t accept a fundamental truth about creating software: it’s extremely difficult to accurately estimate how long it will take.
Why estimates are lies
Humans are bad at estimating any kind of work. When we make predictions about the time a future task will take, we display an optimism bias known as the planning fallacy.
The planning fallacy shows up in everything from writing an article for your company website to building the Sydney Opera House. I planned one hour for this article, but finishing a first draft took about a week. The Sydney Opera House was expected to be completed in 1963; a scaled-down version was finally finished a decade later.
There are many proposed reasons for the planning fallacy. Maybe we weight the most optimistic “happy path” - where everything progresses exactly as planned - more heavily than outcomes with complications. Maybe we have an overly positive view of our past performance, selectively remembering only the work that went well and using it as a reference for future estimates. In some cases we may be misrepresenting the facts because, subconsciously, we feel a lower estimate is more likely to win a sale or get approval.
Unfortunately, simply knowing about the planning fallacy doesn’t help you overcome it. Like any bias, concrete steps have to be taken to counteract it.
We all have different definitions of “done”. No matter how fine grained your requirements gathering process is, there will always be a gap between what you wanted and what a developer builds for you. This is the nature of software development. The only specification concrete enough to accurately describe software is software itself.
Let’s say Jane is a developer who can estimate all software tasks perfectly. When she makes an estimate, ships something to production within the estimated time and declares it “done,” what happens next? You’ll go to production, click around and come up with a set of differences between what you wanted and what she delivered.
From your point of view, the feature isn’t “done” until your requested changes are made (because this is what you wanted in the first place). The work required to make these changes should have been factored into the initial estimate.
From Jane’s point of view, the estimate she made was for the work as she understood it, which she shipped to production within the time she estimated. The changes you’re making are additional to that work and should be estimated for separately.
This is an inherent mismatch, and there’s not much that can be done to address it directly. Even if estimates are perfectly accurate in the developer’s mind, they’re still lies as far as you’re concerned.
Software implementation tasks are inherently unpredictable. Even if you understood exactly what a client wanted, the time it takes to build it is extremely difficult to predict. This is in part because software systems are the most complex systems that human minds have ever had to understand and manipulate.
From Dizzying but invisible depth (you should read the whole post, but this excerpt illustrates the point I’m making):
Today’s computers are so complex that they can only be designed and manufactured with slightly less complex computers. In turn the computers used for the design and manufacture are so complex that they themselves can only be designed and manufactured with slightly less complex computers. You’d have to go through many such loops to get back to a level that could possibly be re-built from scratch.
Once you start to understand how our modern devices work and how they’re created, it’s impossible to not be dizzy about the depth of everything that’s involved, and to not be in awe about the fact that they work at all, when Murphy’s law says that they simply should not work.
The interactions between modern software systems arguably have more in common with biological processes than with synthetic systems. They are so eye-wateringly complex that no one person can understand the entirety of their functioning.
What this means for you and your project is that even for developers, all intuitions about how long a task should take are wild guesses at best. Very occasionally, you get lucky and a complex feature takes less time than you thought. Most of the time you’re unlucky, and changes you thought were simple require you to unpack a spiralling web of complexity before you can complete them.
Software isn’t getting any simpler, so this problem is unlikely to go away any time soon.
The estimation risk is always yours
When your goal is to deliver software, you have a few options. You might roll up your sleeves and build it yourself. You might decide to hire a freelancer or permanent employee to build it on your behalf. You might work with a development consultancy to walk you through the process and implement it for you.
In all of these situations, no matter how you structure the engagement, the risk of the project going over budget is yours to bear. No matter what an agency promises you, and no matter what your developers tell you, the overrun lands on you.
This is because no matter what agreement you have in place, software takes as long as it takes to build, and you’re the person paying for it. When you work with a developer for hire, be that full-time, freelance or through an agency, you pay for their time and materials.[†] There’s no guarantee about what software a given amount of time and materials will produce, and you only know what you’ve got after it ships.
There is no solution
You’ve read this far, and you’ve read my description of why software estimation is so difficult to reason about.
Unfortunately, I don’t have a good solution to these problems because I don’t think there is one. We make up rituals[π] to try to dance around the issues, but in truth we’re all trying to cope with the reality that software estimation is intractable. When non-developers tasked with software delivery fail to accept this, we respond to their dissatisfaction with process, vagueness and eventually lies.
So far I’ve said that software is difficult to estimate because:
- We’re all susceptible to the planning fallacy.
- There’s a gap between what clients want and what developers believe the client wants.
- Software is monstrously complex.
I’ve also said if you’re the person who’s responsible for delivering software within a given budget, then the risk that the budget won’t result in software that meets your requirements is yours alone to bear.
Even if we don’t spell this out for our clients, experienced software developers know these things to be true. We’ve had our professional reputations put to the test based on inaccurate estimates we’ve made and we know not to make them again.
At the same time, we want the people who hire us to build software to succeed. This is true both from a desire to be effective professionals who do a good job, and from pure greed: we don’t get paid if no one wants to build software. Those people could be our employers, our freelance clients, or even the users of open source software we contribute to.
We can’t eliminate budget and schedule risk entirely, so instead we take steps to reduce and manage it. Over time we develop a number of habits for doing this.
Gratuitously reduce scope. We work with clients to shave scope down to the absolute minimum. The less complex the feature, the fewer requirements there are to misunderstand and the faster we can ship.
Break requirements down into the smallest possible pieces that can be delivered and tested individually. Smaller pieces of functionality are quicker to implement, quicker to test and quicker to iterate on.
Deliver early and often. Instead of features going out at large milestones, we ship works in progress as quickly as possible. This addresses the gap between what we build and what the clients want earlier in the process. We can catch discrepancies in our understanding of requirements earlier, and more quickly converge on software that meets them.
Give clients simpler alternatives to proposed solutions. When clients ask for functionality, if we take the time to find out what requirement they’re trying to meet, we can often suggest a solution that gets them 80% of the way there while drastically lowering scheduling risk.
Plan in as much detail as possible. Once requirements for a set of features are fixed, planning them in as much detail as possible can help to identify complications that can prolong implementation.
Pad estimates. Experienced developers know they’re bad at estimating, so they build plenty of leeway into estimates when estimates are demanded. You should double the estimates you’re given to allow for feedback and iteration (a rough sketch of this follows after these habits).
Work with developers that have shipped software like yours before. Part of the scheduling risk with software comes from the possibility that developers might come across a problem they haven’t had to solve before. Developers with experience in your problem domain have probably seen most of the problems you’re likely to have, so working with them will reduce this risk.
Use established tools that are commonly put to use in your problem domain. Your problem probably doesn’t require cutting edge technology. By picking proven technologies you take advantage of existing libraries and expertise that wouldn’t be available to you with the shiniest new tools.
Elicit feedback as quickly as possible after delivery. Clients should test out newly delivered software and give developers feedback as early as possible. This gives developers more time to iterate on it and converge on what the client wants.
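To make the padding habit above concrete, here is a minimal sketch. The function name, the factor of two and the example numbers are illustrative assumptions, not a formula this article prescribes; the point is simply to hand back a range rather than a single optimistic number.

```python
# Hypothetical sketch of padding an estimate: take the developer's
# "happy path" number and stretch it to leave room for feedback,
# iteration and the complications nobody predicted.
def padded_estimate(happy_path_days: float, multiplier: float = 2.0) -> tuple[float, float]:
    """Return a (low, high) range instead of a single number."""
    return happy_path_days, happy_path_days * multiplier

# A developer says "three days at most"; plan for three to six.
low, high = padded_estimate(3)
print(f"Quote the range {low:.0f}-{high:.0f} days, not a single date.")
```

The exact multiplier matters less than the habit it encodes: never pass a single optimistic number up the chain as if it were a commitment.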
These habits will help reduce the uncertainty in planning your software project, but nothing will eliminate it entirely. No matter how good your team, how well you plan, how quickly you ship or how quick you are with feedback, the only certainty is that your project will not proceed exactly as planned.
Links
- List of failed and over budget custom software projects
- Lies, Damned Lies and Estimates
- Planning fallacy
- Dizzying but invisible depth
†: If they’re working to a fixed fee, then you’re trading budget risk for scope risk. They will have a fixed, inflexible idea of what the deliverable scope is, and it will differ from yours (see above about how we all have different definitions of “done”).
π: Planning Poker, for example, is an exercise in ritualised group self-deception, and is a depressingly common practice on software teams.