Agile for Fragmented, Part-time Teams

I have a client who uses lean and agile-like processes outside of IT on research and development projects. They have been doing this for a number of years to help optimize constrained tools (drilling platforms) and resources (specialist inspectors). They like the agile concepts of prioritizing based on business value, working in short cycles, expediting rush jobs, and frequently validating results and adapting.

Recently they asked for help with some improvement initiatives that use multi-disciplinary teams to investigate and improve cross-department processes. These groups are staffed by senior engineers who volunteer to help make improvements, but the work is low priority and their time is extremely limited. They are also geographically dispersed. Obviously that creates problems for agile practices like daily standups when team members average only two to four hours per week to contribute to an initiative.

At first I saw lots of challenges: agile promotes dedicated teams (co-located where possible), daily conversations with business stakeholders, and so on. These groups had none of those things, yet three months later they were pleased with the successes they had achieved. It seems that when you are trying to coordinate the work effort of distributed, low-availability resources, the structure and visibility of tasks that agile brings is a great strength.

This somewhat counter-intuitive application makes more sense when you consider how such improvement committees traditionally function. Historically, similar work groups had faltered and failed to deliver benefits. The company was mature enough to look for inter-departmental improvement opportunities, but because the work was no one’s full-time job (and the initiatives spanned departmental jurisdictions), efforts started but then stalled and failed.

Continue reading "Agile for Fragmented, Part-time Teams" »


Learning Analytics

Professional athletes watch slow-motion video of their performances to find areas for improvement. Armed with this information they can then work on these weaknesses and improve their performance. When studying for an exam, how do you objectively measure your skill acquisition and the areas of weakness that need work? Practice tests can help, especially if the questions are categorized into knowledge areas so we can tell which topics candidates understand and which they need more work on.

As a trainer I am also trying to get feedback from the group on whether people understand what I am talking about. I ask them, of course, using questions like “Does this make sense?” and “Are there any questions on this?”, but I never really know. Cultural norms vary considerably: do polite nods and no questions mean I am preaching to the choir and they already know this material, or that they just don’t want to ask questions?

I recently started incorporating audience response systems (clickers) into my training courses, and while they are no silver bullet, they do provide useful objective feedback. I introduced them so that participants on my PMI-ACP Exam Prep course could answer end-of-module practice exam questions and get personal reports of how they did to guide their study plans.

However, the benefits go further. As a trainer I can poll the group with a quick question and, if everyone gets it right, move right along. Like Fist of Five voting, a quick confirmation allows us to move on efficiently, but if there is confusion or a division of opinion then we can investigate and go deeper into the topic. No longer do I have to decide whether blank stares mean consent or incomprehension of my accent; now I have some hard data.

The clickers allow for some fun games too, like prizes for the most right answers, fastest responders, fastest correct responders, and so on. Obviously the leaderboards only show the top three or so people; it is counterproductive to show the lower part of a ranked list.
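As a rough sketch of that design choice (the response format and tie-break rule here are my own assumptions, not taken from any particular clicker product), a leaderboard might rank by number of correct answers, break ties by fastest correct response, and simply truncate to the top three:

```python
from dataclasses import dataclass

@dataclass
class Response:
    participant: str
    correct: bool
    seconds_to_answer: float

def top_three_leaderboard(responses: list[Response]) -> list[tuple[str, int]]:
    """Rank by number of correct answers, breaking ties with the fastest
    correct response, and show only the top three entries."""
    correct_times: dict[str, list[float]] = {}
    for r in responses:
        if r.correct:
            correct_times.setdefault(r.participant, []).append(r.seconds_to_answer)
    ranked = sorted(
        correct_times.items(),
        key=lambda item: (-len(item[1]), min(item[1])),  # most correct first, then fastest
    )
    return [(name, len(times)) for name, times in ranked[:3]]

# Example: three participants answering two questions each.
print(top_three_leaderboard([
    Response("Ana", True, 12.0), Response("Ana", True, 20.0),
    Response("Bob", True, 9.0),  Response("Bob", False, 30.0),
    Response("Cam", False, 15.0), Response("Cam", True, 8.0),
]))  # [('Ana', 2), ('Cam', 1), ('Bob', 1)]
```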

Using these tools we can provide detailed individual analysis of question responses that would otherwise require invasive supervision: not only which categories you scored highest and lowest on, but also which questions you took the longest to answer or changed your mind on before selecting an answer. This metadata helps participants target their follow-up studying and also provides me with useful feedback as I teach.
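To make the kind of analysis I am describing concrete, here is a minimal sketch assuming the clicker export records a knowledge area, correctness, answer time, and a changed-answer flag for each question a participant answered (the field names and threshold are hypothetical, not from any specific system):

```python
from collections import defaultdict

# Hypothetical per-participant export: one record per answered question.
responses = [
    {"category": "Value-driven delivery", "correct": True,  "seconds": 34, "changed_answer": False},
    {"category": "Adaptive planning",     "correct": False, "seconds": 95, "changed_answer": True},
    {"category": "Adaptive planning",     "correct": True,  "seconds": 41, "changed_answer": False},
]

def category_scores(responses):
    """Percentage correct per knowledge area, highlighting strongest and weakest topics."""
    asked, correct = defaultdict(int), defaultdict(int)
    for r in responses:
        asked[r["category"]] += 1
        correct[r["category"]] += int(r["correct"])
    return {cat: round(100 * correct[cat] / asked[cat]) for cat in asked}

def questions_to_review(responses, slow_threshold_seconds=60):
    """Flag questions the participant laboured over or changed their mind on."""
    return [r for r in responses
            if r["seconds"] > slow_threshold_seconds or r["changed_answer"]]

print(category_scores(responses))           # {'Value-driven delivery': 100, 'Adaptive planning': 50}
print(len(questions_to_review(responses)))  # 1
```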

I used the system live for the first time last week in Bucharest, Romania, and will be using it again for my Calgary course next week.

(Screenshots: ACP Results 1–3)


An Antidote to Velocity Obsession

Getting things done is great; getting things done is why we start things in the first place and why we follow through even when presented with obstacles and setbacks. We do things because they will (hopefully) bring us to some better state, so getting them done quickly is good because we arrive at that better state sooner. We track our rate of development (velocity) as a useful measure of progress and also as a leading indicator of when we should be done. However, focussing too much on velocity is dangerous; it leads to myopic mindsets and even moronic behaviour.
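As a simple illustration of velocity used as a leading indicator (a back-of-the-envelope sketch with made-up numbers, not something from the post):

```python
def iterations_remaining(points_completed_per_iteration, backlog_points_remaining):
    """Average recent velocity, then estimate how many more iterations the
    remaining backlog might take. A rough forecast, not a commitment."""
    velocity = sum(points_completed_per_iteration) / len(points_completed_per_iteration)
    return backlog_points_remaining / velocity

# e.g. three iterations completing 18, 22 and 20 points, with 120 points of backlog left:
print(iterations_remaining([18, 22, 20], 120))  # 6.0
```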

Yes, velocity is good, but not at the expense of quality, goodwill, or noticing subtle changes in direction. At the Agile 2012 Conference Jim Highsmith and Pat Reed hosted a session called “Velocity is Killing Agility” which examined how velocity (which should be as much a measure of team capacity as a measure of output) is being misused. When organizations overly publicize and analyze velocity, misguided attempts to “Go Faster” lead to the gaming of velocity scores rather than genuine project team improvements.

A Measurement Parallel

For the last six months I have been using Strava.com to track my running and biking exercise. It is a social web site for tracking and sharing workout performance data that creates maps, leaderboards of hills climbed, point-to-point fastest times, and so on. If you carry your phone or a GPS device while out running or riding, your performance is automatically recorded, uploaded, and compared with everyone else who has ever covered the same route. Individual rides and runs become virtual races against people you have never met. If you have posted the fastest time for a segment and someone beats it, Strava will send you emails such as “Uh Oh, <fast guy’s name> just beat your record on Heartbreak Hill, go out there and get it back!” It can all get pretty competitive and silly if taken too far.

I have found Strava to be a fun, addictive workout analysis tool that has led to a few special outings just to take some records back and, in general, to push harder to beat my own previous times. I have also met a few new people who run and bike locally, and found some new trails by looking at the maps of where people train. The trouble with obsessing over the fastest times for segments is that it can drive stupid, myopic behaviour. Stories of people barrelling down trails on mountain bikes at crazy speeds yelling “Strava, get out the way!” at people are getting more common. Similarly, if you can’t ride the last technical descent on “Coal Chutes Drop”, just throw your phone over the finish line and you should get a better time!

Continue reading "An Antidote to Velocity Obsession" »