Thursday, September 5, 2013

Should you re-estimate work?

Hi, folks.  This week, I'm potentially making a "land war in Asia" level mistake by weighing in on one of the "holy wars" around Agile projects.  Specifically, should you re-estimate stories once you've estimated them the first time?

This might strike some as a crazy thing to have a discussion about.  Why wouldn't you update your estimates as you learn more?   Our estimates should be as "good" as they can possibly be.  If we've learned something new that influences "how big" a given chunk of work will be, we should update our estimates.  Shouldn't we?  

Actually, maybe we shouldn't.  There are some reasonable arguments that we SHOULDN'T strive to keep our estimates as "good" as possible.  Estimation effort is potentially costly.  Estimates will never be perfectly accurate.  And there are some reasonable questions around whether updating estimates actually makes your project predictions more accurate or not.

This week, I want to take a look at why we have estimates, what the schools of thought on re-estimation are, and how I'd recommend approaching re-estimation.

Why have estimates in the first place?

That's actually not a rhetorical question.  While "you have to estimate everything!" is deeply ingrained in most software professionals by now, it's not completely obvious that we HAVE to have estimates to  build software.

As folks in the "lean development" camp will point out, work can flow just fine independent of a "schedule."  (I recommend reading some of Mary and Tom Poppendieck's work on lean software if you want to explore this further).  Just let the team work out the most valuable stuff, have them work on it, and release to production.  If you're able to deliver work continuously, you don't necessarily need a great master schedule of "when will you be done?" to steer your team.  (I mentioned this from a different angle last week).

"Should you estimate at all?" is a topic for another time.  What I want to point out is that estimates are not per se valuable.  They are not an end in themselves.  They are a tool, used to answer a question.  "Having good estimates" is not the goal.  "Being able to understand the project" is.

Also, estimates are costly.  Every minute your team spends estimating software is time spent by your team on activities that do NOT result in any valuable working tested software being built.  Estimation time takes the team away from delivering value to do something else.  If the estimates provide valuable insight, they might be worth the investment, but they're not free.  All else equal, we should try to minimize the time spent doing tasks that don't result in building valuable software.

On most Agile teams, a common practice is to do fairly lightweight "relative size" estimation of work.  We then use velocity (the team's speed at delivering over time) to project how fast the team can deliver work.  Mike Cohn's book Agile Estimating and Planning is a great reference on this.  He also has a video of a presentation on the topic that's a good intro if you're unfamiliar.

For the rest of this blog post, I'm going to assume you're familiar with relative size estimation and velocity planning.

Getting back to my not-quite-rhetorical question at the top of this section, I'm going to assert that we estimate on Agile projects to allow us to answer two related-but-not-identical questions:
  • How much time do we think it will take to accomplish a given large scope of work? (the MACRO question)
  • Which specific pieces of work is it reasonable for us to take on in the near future? (the MICRO question).
The macro question is the "big picture" question - when is the new website likely to be ready?  It's also effectively the "how much does it cost" question - cost can be projected as the team's "run rate" ($X per week) multiplied by time.  This is the "looking forward six months, where will we be?" question.  (As I talked about last week, there's a question of whether we should be thinking about our work in such long chunks, but that's a different discussion).
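
To make that arithmetic concrete, here's a minimal sketch with entirely made-up numbers (the backlog size, velocity, iteration length, and run rate below are illustrative assumptions, not figures from any real project):

```python
# Rough macro projection - all numbers below are hypothetical.
from datetime import date, timedelta

remaining_points = 240       # estimated points left in the release
velocity = 20                # points the team finishes per iteration
iteration_weeks = 2          # length of one iteration
run_rate_per_week = 15_000   # team "run rate" in dollars per week

iterations_left = remaining_points / velocity      # 12 iterations
weeks_left = iterations_left * iteration_weeks     # 24 weeks
projected_finish = date.today() + timedelta(weeks=weeks_left)
projected_cost = weeks_left * run_rate_per_week    # $360,000

print(f"~{weeks_left:.0f} weeks, finishing around {projected_finish}, "
      f"at a cost of roughly ${projected_cost:,.0f}")
```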

The micro question is the "what can we do right now" question.  It often boils down to "which stories off our backlog do we think can fit in the next n-week iteration?"  If we know our velocity is 20 points, we can in theory count the points on the work already "in" the iteration, and decide whether "one more story" will fit.
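
The micro version of the same arithmetic, again with hypothetical numbers, is just a fit check against the velocity:

```python
# Micro check (hypothetical numbers): will one more story fit this iteration?
velocity = 20
committed_points = [5, 4, 4, 3, 2]   # stories already "in" the iteration
candidate_story = 4

room_left = velocity - sum(committed_points)   # 20 - 18 = 2 points of room
fits = candidate_story <= room_left
print(f"Room left: {room_left} points; the candidate story "
      f"{'fits' if fits else 'does not fit'}.")
```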

Why would we potentially want to re-estimate?

Again, for purposes of this blog post, I will posit a project that has a relatively large scope of work that needs to be released together.  The project team did a discovery/inception process, and found a number of user stories.  They did a relative-size estimation on those stories, projected a velocity, and produced a burn-up chart to project a likely delivery date.  At the start of the project, this was the best information the team had.

As the project wears on, however, the team learns new things.  Assumptions made when the original estimates were put together might no longer be true.  The team might have learned that integrating with an outside system they thought would be easy is actually a nightmare, and 20 stories need to touch that system.  They might have changed their architectural approach in a way that makes some stories easier, and some stories harder.

When the team learns these things, they are faced with a question - do we update our estimates based on our new knowledge?  Or do we leave our estimates "as-is?"

The case for re-estimation

Some teams think the answer is obvious - our estimates should be the best they can be, to give us the most "accurate" picture of the project possible.  If we learned something new that impacts how long a story will take, we should re-estimate the story.

Doing frequent re-estimation will tend to give this team a smoother velocity over time (because stories are always as "right sized" as the team can make them before playing them, so we don't get "bumps" due to a story being bigger or smaller than its estimate).  This team will be better able to answer the "Micro" question - this team can use their velocity much more accurately as a check on "can these stories fit into a 2 week iteration?" 

A team that re-estimates frequently believes their long-term macro estimates are more trustworthy because they've "baked in" their best knowledge.  However, their long-term estimate is more likely to fluctuate iteration-to-iteration, even if velocity is steady, because the number of points in the release will fluctuate as estimates change (hopefully around a "steady" middle point, but there will be some variation).

The net belief is that re-estimating makes our micro planning better, and makes our macro estimates no worse and in some ways better.  So while re-estimating involves investing more effort in our estimation process, it's worthwhile.  

Philosophically, re-estimators will argue that the "don't re-estimate" crowd is tolerating bad information.  We know that software estimation and planning is an inherently imprecise exercise.  When we are presented with an opportunity to improve the information we can give others about the project, we should do so. 

The case against re-estimation

The opposite school of thought is that you should not change your estimates from what you thought initially, even if you've learned more.  Teams that follow this approach would likely bring up the following points:

First, estimates are ESTIMATES.  They're not intended to be perfect.  As long as they are ON AVERAGE correct (roughly the same number above as below), from a macro perspective those inaccuracies will even out over the course of the release.  Re-estimating (they argue) creates an illusion of accuracy in an inherently inaccurate exercise.  We know there will be unexpected bumps no matter what we do, so let's not worry too much about attempting to smooth them out.

Second, they will point out that, while IN THEORY teams that re-estimate will improve estimates in both directions, in practice people tend to re-estimate UP more than they re-estimate down. 

This risk of "net estimation up" usually (in my experience) comes from an asymmetric application of risk in estimates.  Let's say we have a story that was estimated as a 2-point story.  From past experience, some stories similar to this one were "real" 2's, but some were more like 4's.  It might be a 2, it might be a 4.  Let's make it a 4 to be safe.  Now consider a story estimated as an 8-point story.  We know some similar stories were "real" 8's, some were really only 4's.  Let's leave it as an 8 to be safe.  Even without ill will, the natural inclination in both cases is for uncertainty to ratchet more stories up than it ratchets down.

In practice, this means a team that re-estimates frequently will have its total estimate of the same backlog ratchet up over time.  They will also have their velocity ratchet up (since over time they'll be doing more and more stories that have had "slightly larger" estimates applied.)  The end result may be the same in terms of time taken, but the metrics will be harder to read.  The "don't re-estimate" school will argue that re-estimating actually hinders our understanding of the "macro" question.  Our "scale" for "what does a 2-point story mean?" will change over time, so simple linear assumptions around velocity and scope won't work properly.  
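
To see the ratchet in numbers, here's a toy simulation (the stories and the "bump up when in doubt, never down" rule are my own illustrative assumptions, not data from a real team): the re-estimated backlog and the velocity measured against it both inflate, even though the underlying work doesn't change.

```python
# Toy illustration of asymmetric re-estimation - stories and rules are made up.
# Each story is (original_estimate, what_it_really_takes) in points.
stories = [(2, 4), (2, 2), (4, 4), (8, 4), (8, 8)] * 4

original_total = sum(est for est, _ in stories)       # the original backlog size
real_work = sum(actual for _, actual in stories)      # what the work really is

# "Bump it up to be safe, leave it alone otherwise" - estimates never come down.
ratcheted_total = sum(max(est, actual) for est, actual in stories)

print(f"Original backlog: {original_total} points")   # 96
print(f"After ratcheting: {ratcheted_total} points")  # 104
print(f"Real work:        {real_work} points")        # 88
# Calendar time is driven by the real work, but both the backlog and the
# velocity measured against it now read higher - the "point" scale has drifted.
```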

A generalization of this argument is a belief that re-estimation inherently causes "drift" in our estimation scale.  We began the project with a number of stories all estimated together with a consistently low level of information.  As we go, we have some stories retaining the "relatively little information" estimates and other stories with "more information" estimates.  Does a 2-point story estimated with more information really contain the same amount of work as a 2-point story estimated with less information?  The suspicion is that a "mixed" backlog, containing some stories that are re-estimated and some that aren't, has an "apples and oranges" problem that makes it hard to apply a single "velocity" number consistently.

The "don't re-estimate" folks will agree that if we don't change our estimates, deciding what stories we take into an iteration will be a less mathematically consistent exercise - if there are three stories available that are all estimated as 2 points, the team might feel comfortable picking up one of them (which is currently believed to be a "real" 2), but not feel comfortable with a different story (which is a 2, but has a lot of "known issues" that likely make it bigger).  The "don't re-estimate" school sometimes argues this is a benefit and not a problem - it means choosing stories for an iteration has to be a conversation with the team, and not a math problem for the project manager.  If nothing else, they'd argue that the time it takes to talk about "this 2-point story, not that 2-point story" is probably less than the time we'd spend re-estimating stories (which we do on stories that might turn out not make a difference in the story selection exercise).

The net belief is that re-estimating isn't a high-value investment for the amount of micro predictability it potentially brings, and may actually make our macro predictability worse.

Philosophically, the "don't re-estimate" school believes estimates are inherently imperfect, and that trying to tinker with them might be well intentioned but actually introduces more uncertainty than it removes.

To re-estimate or not to re-estimate?

I don't think either extreme position here is completely right.  However, my sympathies are closer to the "don't re-estimate" crowd.  I think changes in scale and "ratcheting up" are a real (not a hypothetical) risk.  I also believe the "macro" question of "when is the release done?" is of considerably higher value to the project team than the "micro" question of what fits in the next iteration.

Also, in my experience, quite a lot of the "always re-estimate!" teams I encounter are teams that are either new to Agile, or teams whose management doesn't completely trust them.  In both cases, re-estimating is done not for predictability, but for "accounting" reasons.  The team re-estimates stories because they are afraid that if they pull in a story that's "bigger than it looks" from its estimate, it will cause their metrics to show a drop in "productivity" and someone will overreact to a perceived problem.  "If we don't get 20 points done this iteration, the development manager is going to yell at us for 'missing our velocity target.'"  This is solving the wrong problem - the issue is really an EXPECTATION problem about what estimates should mean.  Investing significant "no value add" time to try and make your estimates look "accurate" isn't going to solve that underlying expectation issue.

That said, I think the "never re-estimate ever" position is too extreme.  There are times when we've genuinely discovered that the way we're going to solve a problem is nothing like what we'd assumed initially and will require a radically different amount of work.  Our estimate is for what's effectively a different story than the one we're actually going to do.  Never accounting for that work in our project plan hurts us at both the micro and macro scales - if the project will genuinely take longer (or shorter!), let's say so.

I have two rules I'd recommend for "when do we re-estimate?"

First, I recommend that when we do our initial estimates of a story, we record any estimating assumptions that the team agrees were key in driving us to choose which "bucket" to put the story in, e.g. "This report can be built entirely on data that's already in RDB.  No additional data sources need to be built for this story."  My rule of thumb is we should only really consider a story for re-estimation if at least one key estimation assumption is violated - if the story differs in a SIGNIFICANT way from what we thought at the time of the initial estimation.
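
One lightweight way to capture this (a sketch only; the record format and helper below are my own invention, not a prescribed tool) is to keep the key assumptions right next to the estimate, so the trigger for re-estimation is "an assumption was violated" rather than "someone feels differently today":

```python
# Sketch only - the story record and helper are illustrative assumptions.
story = {
    "title": "Monthly sales report",
    "estimate": 2,
    "key_assumptions": [
        "Report can be built entirely on data that's already in RDB",
        "No additional data sources need to be built for this story",
    ],
}

def should_discuss_reestimation(story, violated_assumptions):
    """Only put the story up for re-estimation if a key assumption was violated."""
    return any(a in story["key_assumptions"] for a in violated_assumptions)

# A violated key assumption triggers the conversation; anything else doesn't.
print(should_discuss_reestimation(
    story, ["No additional data sources need to be built for this story"]))  # True
```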

Second, I recommend a "two-bucket" rule.  If we're using "Powers of 2" for story points, don't spend time re-estimating a 2-point story that we think MIGHT be a 4-point story.  Only talk about it if we think it's at least POTENTIALLY an 8.  Don't spend time on the 4 that might be a 2 - only talk about the 4's where we could make a case for them being a 1.  This doesn't mean we can't decide to move the story only one bucket in the end.  Rather, our filter on "does this story really need to be re-estimated?" should be "it's so far out of whack that it MIGHT be MORE THAN ONE bucket off."

The purpose of the two-bucket rule is to keep us from arguing around the edges - "Is this a large 2 or a small 4?" isn't a high-value thing to get right (and is the situation most likely to lead to "ratcheting up" for risk).  We only want to talk about the ones that we think are BADLY mis-categorized.  Those are both the ones that are likely to have a major issue, and the ones that are potentially the biggest issues for our "macro" predictability.
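
Here's a sketch of the two-bucket filter on a powers-of-2 scale (the scale and the helper function are illustrative assumptions, not a standard tool): a story only goes up for re-estimation if it might be more than one bucket away from its current estimate.

```python
# Two-bucket filter on a powers-of-2 scale - scale and helper are illustrative.
BUCKETS = [1, 2, 4, 8, 16]

def worth_reestimating(current_points, suspected_points):
    """True only if the story might be MORE than one bucket off its current estimate."""
    distance = abs(BUCKETS.index(current_points) - BUCKETS.index(suspected_points))
    return distance > 1

print(worth_reestimating(2, 4))   # False - "large 2 or small 4" isn't worth the debate
print(worth_reestimating(2, 8))   # True  - two buckets off, worth a conversation
```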

Here's how I see this working.  We do our initial estimates.  As we're pulling together details for the "up soon" user stories, the BA is regularly reviewing progress with the devs, QA, and product owner.  As part of the conversation about the story, we should at least look at the estimate and assumptions.

"...So that's the story.  It's a estimated at a 2.  Anyone want to holler about the estimate?"
"Hmm...I know it's a 2, but I'm wondering if it's maybe a 4?  There's a few tasks that might be big, and that assumption about data sources is totally wrong."
"OK.  Do you think it's 'maybe a 2, maybe a 4,' or 'definitely a 4, maybe an 8'?"
"There's no way it's an 8."
"OK, then let's leave it at a 2 and move on." 
"Fair enough."

By focusing your re-estimation effort on the clear outliers, you can hopefully avoid getting mired in a lot of debate about things that don't significantly improve your predictability. 

Thanks to many colleagues and now-ex-colleagues at ThoughtWorks who've provided feedback on earlier versions of this rant...

3 comments:

  1. I like this post and it very succinctly verbalises lots of the valid aspects of the re-estimate-or-not question that I've experienced in my time as an agile practitioner.

    Personally I've found from experience on long running projects (greater than a year) that re-estimation has more value to a team than just greater accuracy of estimates. Often in these cases the story backlog is put together and estimates are created by people other than those doing the work. While this is not ideal, I've seen it frequently happen because of team turn-over or companies restricting pure agile practice - such as estimates owned by "the estimation team", believe it or not.

    When this occurs I've seen re-estimation by the actual team delivering the work, as part of an iteration kick-off meeting, be a good way for the team to both take collective ownership of the estimates and find efficiencies within the team - that is, for team member "A" the story is an 8, but for team member "B" it is only a 2. By the end of the re-estimation all team members feel the estimates are theirs, and not handed down from on high by someone before their time. When the estimation meeting is baked into the iteration schedule at the start of the iteration it has the added benefit of giving team members an opportunity to ask questions about the stories and understand the context of stories they perhaps haven't had the chance to familiarise themselves with.

    The question in following this approach then becomes how to manage the difference between the original and the re-estimated numbers. For this purpose, I've tended to fall back on the practice of keeping a separate "bucket" story that gives out points when an estimate goes up, and recollects points when an estimate goes down, keeping the overall number of points for the project the same - assuming your trend in going over the original is equal to the trend in going under the original. From experience this has been mostly true, with the understanding that major leaps of scope actually become new stories added to the backlog and a change in the overall number (and a corresponding shift in the macro view).

    In taking this approach I found I can get the best parts of re-estimation (collective team ownership of estimates, work being picked up by the most effective/efficient team member, and good prior visibility of what the iteration holds) as well as a stable view of the end goal that doesn't shift, outside of major changes to scope that are covered by new stories and easily reflected for consideration to the business. For me this is a good balance between the micro and the macro views of estimation and re-estimation, with some added team benefits thrown in. I understand this approach won't be for everyone though.

  2. Thanks for the comment. I was approaching the problem from more the perspective of a stable running team, but you're right that there's a whole nother set of factors to consider when we have significant turnover.

    I definitely agree that it's not good for a team to try and run with "someone else's" estimates - it can really deflate the sense of ownership.

    I can see why your "bucket of points" is useful - it keeps the team from starting completely over and re-scaling all our estimates (we went from 150 points to 300 points when we re-estimated because the new team decided the meaning of a "2" was different).

    The one thing I'd wonder about is that, even if the team is different, we're sort of forcing them to the original team's idea of the "total scope." If the team has changed substantially, should we expect that agreement? Related question - do you expect the new team to have substantially the same velocity as the old team, or expect them to have a new one? And if the velocity changes, is keeping the total scope the same important?

    I like having continuity on the "scale" between the old and new teams. I just wonder if keeping the scope line in the exact same place is setting the (possibly wrong) expectation that the time to complete the project will be unchanged even with major turnover.

  3. As a project manager I trust Scrum more than PMP. However, it also depends on the kind of project, the kind of people you are working with, etc.
