Project managers generally like plans and estimates because they let us forecast when things should be done and how much they may cost. Estimates help manage client expectations and answer the questions clients typically ask, such as "When will it be done?" and "How much will it cost?"
So, when project managers hear ideas such as "let's stop estimating," it can trigger a knee-jerk reaction. It sounds lazy, like avoiding the hard work of estimating. It can seem like people want to shirk their responsibility and accountability. First, those lazy agilists wanted to stop writing documentation; now they want to stop estimating too!
There has been a debate raging since 2012 about the use and value of estimates on agile projects. It has spawned the #NoEstimates hashtag, a website, a book and countless blog posts and conference presentations.
Like many radical ideas, when we dig into “no estimates” thinking, there are some good ideas, sound logic—and a whole heap of misunderstanding around it. This article sets out to unravel some of it.
My Exposure to NoEstimates
I should start by declaring which side of the debate I am on. Initially, I thought I was firmly on the side of estimates—but not the wasteful kind of detailed estimates that other people make when there is only limited information available. Instead, I aim to create reasonable predictions of the likely effort bounded by the uncertainty of the input data. Then communicate the estimates as a range to reflect the uncertainty and refine them as more information becomes available.
Then I discovered this is closely aligned with what the more reasonable NoEstimates advocates suggest doing anyway. It is just that they call them forecasts, not estimates. I can live with that; names are only labels that matter less than the meaning behind them. So then I felt conflicted.
NoEstimates Gets Started
In 2012, Woody Zuill wrote a blog post asking if getting better at estimating was the only way forward. He wondered if creating forecasts based on observations of completed work rates could be an alternative. The movement started, then more radical people got involved. They triggered more extreme counter-arguments against NoEstimates. However, let's de-escalate and look at some of the ideas that started the discussion. Then people can make up their minds based on their unique environment.
For software projects following an agile approach, the team is often asked to estimate the development effort for stories in the backlog. As the team starts work on these stories, they may discover some are more complex than initially anticipated and need splitting into multiple stories. Others go as planned, and others get replaced by higher-priority changes and never get developed at all.
Throughput-Based Forecasts as an Alternative to Estimates
The effort and rigor of the estimation approach should be proportional to the quality of the input data. It would not be a good use of funds or people to apply time-consuming Monte Carlo analysis to a collection of guessed durations. So some NoEstimates enthusiasts classify over-analysis, and any more structure than the input data deserves, as a form of waste.
We can argue that even estimating with imperfect input data is valuable because it gets the team discussing the work, which can uncover insights. Also, when spending other people's money, it is the responsible thing to do. However, let's follow the NoEstimates logic before making a judgment.
As an alternative to estimating all the stories for an upcoming iteration, NoEstimates supporters suggest a throughput-based forecasting approach. So, if the team completed 20 user stories last iteration, let's assume they will complete 20 this iteration also—and use the time that would have been spent on estimation work toward building new functionality.
It is easy to spot the flaw in this logic. What happens if these next 20 stories are much bigger or more complex than the 20 just completed? The answer is the team will not get through 20 of them, and the forecast will be wrong. However, before we dismiss this approach as unworkable, let's examine some of the things that can go wrong with traditional estimates on a software project.
We can spend a bunch of time estimating tasks, and they still turn out bigger than we expected or more complicated when we get into them. We can also spend time estimating work that gets deprioritized or swapped out with new work.
"The Same Poor Results, But with Much Less Effort"
Around 2006, at one of the agile conferences, I recall a presentation by Motorola on the effectiveness of planning poker estimation compared to its previous, high-rigor approach. Motorola had some CMMI Level 5 departments that conducted very detailed analysis and structured estimation sessions.
Part of being CMMI Level 5 is continuous improvement. So, in addition to rigorous estimation, teams investigated the root cause of any estimation errors to improve the process for the future. As an experiment, three teams substituted planning poker for their in-house estimation process. An additional three teams continued estimating as usual, producing detailed estimates with reviews, root cause analysis and ever more detailed estimating guidelines afterward.
The results showed that even the teams that followed the detailed in-house process were quite weak at estimation. Some items were overestimated, and many more were underestimated. Root cause analysis served to create ever longer checklists but did little to remove the variability of estimating work with human variation, uncertainty, complexity and risk.
The planning poker teams fared no better; they too were—at best—mediocre at estimating, often missing issues that created delays. The significant difference was that planning poker was much quicker than the existing estimation approach, and the team members enjoyed it much more. I still recall the presenter's summary that comparatively, planning poker provided "the same poor results, but with much less effort." Motorola was sold, and that department switched to planning poker.
The NoEstimates camp takes this lesson one step further. They assert measuring throughput is as good (or more accurately, just as poor) a way of forecasting as estimating via hours or points—but crucially requires much less effort. When spending other people's money, should we focus on building valuable deliverables or creating estimates that, even with a costly rigorous process, are—at best—mediocre?
Of course, there are shades of grey. Some software is knowable and can be estimated reliably. Perhaps we should use estimation there. Some clients expect due diligence and evidence of planning rigor, and may want to know work is being estimated and not just started without due consideration.
I have worked with both types of forecasting. In each case where throughput-based forecasting was used, the team had started out estimating with points. Then, as the team became stable and trust developed with the client, we found throughput metrics were reliable indicators of delivery dates and capacity.
Modern agile project management tools track when stories change state from, say, Ready for Development through Coding, Testing and Done. If this process is averaging five days per story and the product owner asks how long it would take to add some new high-priority feature, then, provided it is not too complex, the answer is probably about five days.
Likewise, if it took two months to complete the last 50 stories and there are 100 left in the backlog, then it will probably take four months to complete the project. No developers were interrupted to create those estimates.
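As a minimal sketch, the arithmetic behind this kind of throughput forecast (using the figures from the example above: 50 stories completed in two months, 100 stories remaining) might look like:

```python
def forecast_time(completed: int, periods_elapsed: float, remaining: int) -> float:
    """Forecast how many periods are needed to finish the backlog,
    assuming future throughput matches observed throughput."""
    throughput = completed / periods_elapsed  # stories per period
    return remaining / throughput

# 50 stories done in the last 2 months, 100 left in the backlog
# -> forecast of 4 more months
months_left = forecast_time(completed=50, periods_elapsed=2, remaining=100)
```

The whole forecast is one division and one multiplication; the point is that the inputs come free from the tracking tool rather than from an estimation meeting.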
Where I have used cycle time and throughput analysis to create forecasts, we have also tried to be smart about it. The team knows the process will only work if stories are sized appropriately. So they spend some time looking for anything that may be architecturally significant to investigate early via a spike. They also spend time splitting large stories—and sometimes tag stories with t-shirt sizes such as small, medium and large.
It can be argued that this is a form of estimation, and I would agree. But rapidly assigning small, medium or large to a story is quick and can be done by a senior developer, without requiring (or gaining the wisdom of) the whole team.
Now we need to do some basic math with our forecasts. Small, medium and large should be relative, so a medium is two smalls, and a large is three smalls. Now the team can deliver 60 small stories or 30 medium stories or 20 large stories per iteration—or more likely, some combination such as five large, 10 medium and 25 small.
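That basic math can be sketched as a small-equivalents conversion. The 60-small-equivalent capacity is the figure from the example above; the mix shown (five large, 10 medium, 25 small) is one combination that exactly fills it:

```python
# Relative sizing: a medium is two smalls, a large is three smalls
SIZE_IN_SMALLS = {"small": 1, "medium": 2, "large": 3}

def small_equivalents(mix: dict) -> int:
    """Convert a mix of t-shirt-sized stories into small-story equivalents."""
    return sum(SIZE_IN_SMALLS[size] * count for size, count in mix.items())

CAPACITY = 60  # the team's observed capacity, in small-story equivalents

mix = {"large": 5, "medium": 10, "small": 25}
fits = small_equivalents(mix) <= CAPACITY  # 15 + 20 + 25 = 60 -> True
```

Swapping in next iteration's planned mix gives a quick feasibility check without anyone estimating individual stories.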
We have freed the team from estimating work (which even the best are not very good at), so it can focus on building products, which it is hopefully better at. We can use throughput data, provided free by our tracking software, to forecast completion dates. Multiplying those forecasts by the team's burn rate (plus other costs), we can also generate cost estimates.
I have only experienced throughput-driven forecasting working well with stable teams in high-trust environments. These teams had previously estimated their work and were very aware of their burn rate compared to budget and remaining work.
I fear teams not so aware of the need to deliver might work more slowly than if they had been through that experience first. Some environments and cultures consider not estimating irresponsible. Also, some types of development can be estimated quite accurately, so individual estimates could provide better forecasts than tracking average throughput.
This article has described throughput-based alternatives to traditional task-based estimates. Some NoEstimates zealots deem that to be waste also, and believe teams should just work until done. However, I have yet to meet a sponsor not interested in evidence-based progress and completion forecasts.
Reducing or eliminating estimation can save considerable effort. When the last team I worked with made the switch, we saw the number of stories complete each week increase by 8% as more time was dedicated to development.
I teased the team members that this increase was because they were now making the stories smaller, and asked that, to determine whether productivity had really gone up, they please estimate the points of the recently completed stories so I could compare them to their old velocity figures. They saw the irony but did not comply.
The NoEstimates debate could be recast as a discussion between bottom-up parametric estimating (team estimation via counting points) and the heuristic estimation of throughput per iteration. Both approaches have their benefits and shortcomings. I realize this kind of simplification will anger members of both camps, but probably not the majority, who have no stake in the fight.
Some organizations value the process and information brought about by bottom-up estimating. Others would rather redirect that effort toward development and use a different (even if inferior) heuristic. Sponsors will always ask, "How long will it take?" and "How much will it cost?" We have options for answering those questions; ultimately, it should be the sponsoring group's decision how those answers are produced.