
Batch Size and Velocity Fluctuations

[Figure: Batch sizes]

I recently wrote a post on Velocity Signature Analysis and have been looking at how undertaking large chunks of work as a complete team impacts velocity. We are currently three quarters of the way through a major (4-month-long) piece of functionality and velocity is finally rising. This seems to be a pattern: for the early portion of a new area of work we spend a lot of time understanding the business domain and checking our interpretation using mock-ups and discussions. Velocity, in terms of functionality built and approved by the business, is down during this time since many of the team members are involved in understanding the new business area rather than cranking out code.

As project manager I can get jittery: did we estimate this section of work correctly? Our average velocity for the last module was 60 points per month and now we are only getting 20! Weeks and weeks go by as whiteboards get filled and designs get changed, but the tested story count hardly moves. Compounding this Discovery Drain phenomenon is the Clean-up Drain pattern. During the early portions of a new phase, fixing the niggling issues hanging over from the last phase seems to take a long time. This makes perfect sense: if they were easy, they would probably have been done earlier. It is always the difficult-to-reproduce bug, or the change request that necessitates reworking an established workflow or collaborating with multiple stakeholders, that seems to bleed into the next development phase. While there may only be 3 or 4 bugs or change requests hanging over, they take a disproportionate amount of time to resolve.

[Figure: Unaligned team]

I sometimes use a booster rocket analogy to illustrate team cohesion and vision. When team members are not aligned with a common project goal, their individual motivations can result in a suboptimal team vector. By aligning team members' efforts through common goals, and by giving people a way to grow and gain something valuable for themselves from making the project successful, we align the individual vectors and produce a much greater project vector.


[Figure: Aligned team]

There is a parallel with project velocity too. If 30% of the team's capacity is consumed by better understanding a complex business domain, and another 30% is spent fixing bugs and change requests for which we may earn little velocity credit, then only 40% is left for raw velocity-earning development.
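To make that arithmetic concrete, here is a minimal sketch. The 60-points-per-month full-capacity velocity and the 30%/30% drains are the figures from this post; the function itself is just my illustration:

```python
# Sketch: how capacity drains translate into observed velocity.
# Figures from the post: ~60 points/month at full development capacity,
# with ~30% of capacity on domain discovery and ~30% on bug/change clean-up.

def effective_velocity(full_velocity, discovery_share, cleanup_share):
    """Velocity left over once non-story work consumes part of the team."""
    development_share = 1.0 - discovery_share - cleanup_share
    return full_velocity * development_share

v = effective_velocity(full_velocity=60, discovery_share=0.3, cleanup_share=0.3)
print(f"Expected velocity: {v:.0f} points/month")  # 24, close to the observed 20
```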

[Figure: Split team]

When everyone is focused on iteration features, velocity increases.
[Figure: Focused team]

As these tasks are completed, effort can be returned to development and velocity increases. The process leads to lumpy throughput, but seems preferable to the alternatives. We could let our BAs run ahead with analysis, filling the hopper with story outlines ready for consumption by the development team. We do this a little, but are conscious of not letting it get too far, since we would lose whole-team focus on tasks and experience Pipelining Problems.

[Figure: Pipeline]

If the QA staff and developers are not present for at least the major analysis conversations, we lose valuable insights and time-saving suggestions, and create the need to reiterate points later. If business users and BAs get too far ahead, then when development questions or bugs arise that need their input, there is a task-switching overhead as they "park" their current work, reorient themselves in the task in question, and help solve the problem. So instead, work is undertaken in vertical slices, conducted by the majority of the team.

[Figure: Team focus]

Like everything, it is a balancing act: we want to exploit role specialization when it brings advantages, but we also see the benefit of a multi-disciplined team tackling discrete units of work and driving them through to user acceptance. So, rather than a smooth flow of stories through the production process, we get some slow-downs and speed-ups as the team collectively takes on chunks of learning and then delivery.
 
[Figure: Velocity oscillations]

Lean production systems teach us that smaller batches can be a way to smooth throughput. If we could find a way to structure the project into smaller chunks, rather than 3- or 4-month-long modules, then these peaks and troughs would be smoothed out and velocity as a whole increased. Either this is not possible in our project domain or, more likely, I have not been able to find a way to do it yet. Our business domain is complex and naturally divides into chunks. We are replacing a suite of legacy applications, and as we finish replacing one application, disconnect its interfaces, and move our focus to the next one, we experience the learning cycle and tidy-up issues described earlier.
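A toy simulation can illustrate the lean point. The 20 and 60 points-per-month velocities come from earlier in this post; the weekly resolution and the 25% drain period at the start of each module are my own illustrative assumptions:

```python
# Toy model: each module starts with a low-velocity learning/clean-up stretch,
# then runs at full speed. With smaller modules the slow stretches are shorter
# and more frequent, so velocity measured month by month looks much smoother,
# even though the long-run average is identical.

def monthly_velocity(module_weeks, months=12, drain_fraction=0.25,
                     drain_rate=20 / 4, full_rate=60 / 4):
    """Points completed per month, simulated week by week (4 weeks/month)."""
    weekly = []
    for week in range(months * 4):
        progress = (week % module_weeks) / module_weeks  # position within module
        weekly.append(drain_rate if progress < drain_fraction else full_rate)
    return [sum(weekly[m * 4:(m + 1) * 4]) for m in range(months)]

for weeks in (16, 4):  # 4-month modules vs 1-month modules
    v = monthly_velocity(module_weeks=weeks)
    print(f"{weeks:2d}-week modules: {[round(x) for x in v]}")
# 16-week modules oscillate 20, 60, 60, 60, ...; 4-week modules hold steady at 50.
```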

I suspect this is a function of our project, which is really a program of application replacements. So, rather than get overly concerned with the oscillations in velocity, we can just zoom out some more and say that overall our velocity averages 45 points per month. Yet given this is a 4-year program, there are millions of dollars of difference in forecasted end date and spend between the best, average, and worst velocities experienced per module.
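A rough back-of-the-envelope sketch of that spread: the worst/average/best velocities (20/45/60 points per month) are from this post, but the remaining scope and monthly burn rate below are hypothetical placeholders, purely for illustration:

```python
# Rough forecast spread: how much the end date and spend move depending on
# which observed velocity you extrapolate from.

remaining_points = 1200       # hypothetical remaining backlog
monthly_burn_rate = 150_000   # hypothetical team cost per month, in dollars

for label, velocity in [("worst", 20), ("average", 45), ("best", 60)]:
    months = remaining_points / velocity
    cost = months * monthly_burn_rate
    print(f"{label:7s}: {months:5.1f} months, ${cost:,.0f}")
# Under these invented figures, the worst-to-best gap is $6,000,000 and 40 months.
```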

Yesterday’s Weather

So is the XP term “yesterday’s weather” really a good indicator? Can we use recent velocity to predict future velocity? I believe so; we have to allow for explainable variations, but estimation based on what has been proven achievable seems fairer than speculation on what we expect or would like to happen (traditional planning). It is just that sometimes the weather is a little changeable. Like here in Calgary at the moment, where last Tuesday we were able to go running in shorts on a sunny +12C day, and by Thursday we were wrapped up, running in the snow with a -25C wind-chill. However, on average, I predict the weather for February to be about -5C to -10C, probably.
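In code terms, “yesterday’s weather” is just a trailing average over recent actuals. A minimal sketch; the velocity history here is made up:

```python
# "Yesterday's weather": forecast the next period from recent actuals rather
# than from what we hope will happen. A trailing average smooths out the
# explainable dips and spikes.

def forecast(history, window=3):
    """Predict next month's velocity as the mean of the last `window` months."""
    recent = history[-window:]
    return sum(recent) / len(recent)

velocities = [60, 55, 20, 25, 60, 65]  # hypothetical monthly velocities
print(f"Forecast for next month: {forecast(velocities):.0f} points")  # 50
```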

Comments

Luke

Interesting post Mike. I've grappled with this problem as well: how do you maintain some sense of velocity rhythm given work like bugs, change requests, and new learning activities? I think the approach to maintaining a velocity rhythm lies in assigning the appropriate priority and estimates to all of the work to be undertaken: modeling, bugs, change requests (if treated separately from "stories", as is often the case in a trad env), with increased estimates for the work (models, bugs, change requests, stories, etc.) that will likely require significant new learning (you could view work with significant up-front learning as spikes with points for increased complexity). So, the number of stories or features done per iteration will vary, but the amount of work per iteration (points or whatever the preferred measure) is somewhat consistent.

Mike Griffiths

Hi Luke,

Thanks for your feedback. I agree that you want to track (via points or whatever) all kinds of work to better monitor progress internally, even if external, business-feature-based completeness does not seem to be moving that quickly.

I did not go into this in my post because it is a little complex, but we actually have two sets of points: developer points and vendor points. Our vendor points are based on business functionality; these are what I report on externally and what frequently slow down as changes, bug fixes, and learnings occur. They are largely fixed: if a new high-priority change request comes through, we trade off business priority within our limited total capacity for the project.

However, our developer points are for internal consumption and are created for every bug and change we undertake. Tracking developer points, we can see what we are busy on and estimate the work for an iteration, even when we do not get many vendor points done.

This is not my preferred approach; I inherited the project partway through and think the switch to a new consolidated metric would not be worth the disruption right now. I would prefer to see a single, transparent, estimated backlog of features, with bugs and change requests prioritized amongst the functionality.

Anyway, thanks again for your comment. I believe you are right that creating estimates for the additional work really helps illustrate a more consistent velocity. As for whether it allows you to more accurately predict the final completion time, I think that is a different matter. It would assume a consistent percentage of work dedicated to changes and the like throughout the project, which (in our case) is hard to predict.

Best regards
Mike
