Recent recommendations

I’ve recently attended a couple of thought-expanding conferences and met many inspiring people (including a few back at the Asynchrony St. Louis office), so I’m sharing some of the recommendations that I’ve picked up from keynotes, sessions, hallway chats and pub discussions. Here are a few things that people have been talking about, some of which have been around for a few years:


Make risks visible with “RAID bingo”

To help teams mitigate risks and other concerns — and make them explicit and visible — I like to do a RAID brainstorm, either at a kickoff/inception, at points after a project has started, or both. (RAID is an acronym for Risks, Assumptions, Issues and Dependencies.) As with any sort of group brainstorming, the key is to facilitate cognitive diversity by allowing individuals to come up with as many ideas as possible without being biased or inhibited by groupthink.
One technique I use to draw out as many unique ideas as possible — and avoid the boredom and tedium that usually come with talking about risks — is RAID Bingo. Here’s how you do it:
  1. Divide the group into multiple small teams of 3-5 people.
  2. Have each team draw on a wall/large-poster sheet a 4×4 grid* with column headings of R, A, I and D.
  3. Instruct the teams to write risks, assumptions, issues and dependencies on post-it notes and place them in the appropriate columns as fast as possible. The goal is to be the first team to have three items in any column or four in any row. The first team to do so should shout “Bingo!”
  4. When a team shouts Bingo, tell all teams to pause. Invite the winning team to announce and briefly describe each of their items.
  5. Resume play! Have the teams keep their post-it notes on the boards and continue to try to get a(nother) full column or row.
  6. The next team to shout Bingo must have all unique items (they cannot have the same items that the first winning team used).
  7. Repeat until the teams have covered most of their boards (usually by the third “Bingo”). Then do the final Bingo “coverall” — first team to cover all squares wins.
Unified, final RAID list

After you’ve generated many unique project concerns, unify all teams’ contributions into one board. At this point, you can discuss them in more depth or simply defer the discussion and mitigation strategy until later (but hopefully not too much later!).

*Depending on the time available and the number of people in the group (and therefore the number of small Bingo teams), you may choose to make the columns taller (four blank squares for a smaller group) or shorter (three blank squares for three or more teams or less time).

The New “Three Questions”

Scrum gave us the Three Questions to help structure discussion at daily standup. These questions provide a measure of micro-goal setting and accountability for each team member and can be a healthy practice:

  • What have you completed since the last meeting?
  • What do you plan to complete by the next meeting?
  • What is getting in your way?

For teams who are increasingly focusing on optimizing flow or teams who have simply fallen into a pattern of rote repetition and are in need of a fresh approach, I offer what you might call “the new three questions,” inspired by Mike Burrows in his book Kanban from the Inside:

  • How can we improve flow today?
  • What is blocked and why?
  • Where are bottlenecks forming?

A colleague observed that those questions sound like a mini retrospective, which is not a bad analogy insofar as they are about improvement, though perhaps not as backward facing; they focus on the present and near-future reality. They’re about making a plan to improve flow, with the scope being merely a day. I like the questions because they orient the team toward the work, rather than the worker. For teams that already follow the practice of making work visible, the new three questions are a natural complement to “walking the wall.” Furthermore, the answers to these questions over time can inform the conversation at operations review and risk review, helping the team analyze their work-in-progress limits and blocker clusters.

Like any practice, without attention to the “why” and context, they can lead to mindless repetition. But if flow is important to you — and it should be — “the new three questions” can help you improve it with a simple twist on an old reliable pattern.


Book Review: Actionable Agile Metrics for Predictability

Daniel Vacanti’s new book, Actionable Agile Metrics for Predictability, is a welcome addition to the growing canon of thoughtful, experience-based writing on how to improve service delivery. It joins David Anderson’s Kanban: Successful Evolutionary Change for Your Technology Business and Mike Burrows’s Kanban from the Inside in my list of must-reads on the kanban method, complementing those works with deeper insight into how to use metrics to improve flow.

Daniel’s message about orienting metrics to promote predictable delivery and flow — which he defines as “the movement and delivery of customer value through a process” — is primarily grounded in his experience helping Siemens HS. He includes the case study (which has been published previously and is valuable reading in itself) at the end of the book, keeping the rest of the text free of too many customer references even though he is clearly drawing on that pragmatic experience.

As someone who has spent several years helping teams and organizations improve using the metrics Daniel talks about, I learned a tremendous amount. One reason is that Daniel is particularly keen to clarify language, which I appreciate not only as a former English major (and not merely as a pedant!) but because it helps us communicate these ideas carefully to teams and management, some of whom may be using these metrics in suboptimal ways or, worse, perverting them so as to give them a bad name and undermine their value. Some examples: the nuanced difference between control charts and scatterplots, and clear definitions of Little’s Law (and violations thereof), especially as related to projections and cumulative flow diagrams. I certainly gained a lot of new ideas, and Daniel’s explanations are so thorough that I suspect even novice coaches, managers, team leaders and team members won’t be overwhelmed.
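To give a flavor of what that clarity buys you, here is a minimal sketch of Little’s Law applied to flow metrics. This is my own illustration with made-up numbers, not an excerpt from the book:

```python
# Little's Law for a stable process:
#   average cycle time = average WIP / average throughput
# "Stable" means items enter and leave the system at roughly the same
# rate over the period measured; the law doesn't hold otherwise.

def average_cycle_time(avg_wip: float, avg_throughput_per_day: float) -> float:
    """Average cycle time in days, per Little's Law."""
    return avg_wip / avg_throughput_per_day

# A team averaging 12 items in progress and finishing 2 items per day
# averages a 6-day cycle time.
print(average_cycle_time(12, 2))
```

The key caveat (and one of the violations the book covers) is that the law describes long-run averages of a stable system; using it to project a single item’s delivery date is exactly the kind of misuse that gives these metrics a bad name.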

As for weaknesses, I felt that the chapter on the Monte Carlo method lacked the depth of the other chapters. And I came away wishing that Daniel had included some diagrams showing projections using percentiles from scatterplot data. But those are minor complaints for a book that constantly had me jotting notes in my “things to try” list.
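To sketch what such percentile-based projections involve, here is a rough example with hypothetical delivery times (my illustration, not a figure from the book; the nearest-rank method used here is one simple percentile definition among several):

```python
import math

def percentile(sorted_times, p):
    """Nearest-rank percentile: the delivery time that at least p% of
    completed items met or beat."""
    k = math.ceil(p / 100 * len(sorted_times))
    return sorted_times[max(k, 1) - 1]

# Hypothetical delivery times (in days) read off a scatterplot's data points
delivery_times_days = sorted([2, 3, 3, 4, 5, 5, 6, 7, 8, 9, 11, 14])

for p in (50, 85, 95):
    days = percentile(delivery_times_days, p)
    print(f"{p}% of items finished within {days} days")
```

A projection then reads something like: based on this history, a new item of similar type has roughly an 85% chance of finishing within the 85th-percentile time.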

Overall, I loved how Daniel pulled together (no pun intended), for the purpose of flow, several metrics and tools that have often been independently implemented and used and whose purpose — in my experience — was not completely understood. The book unifies these and helps the reader see the bigger picture of why to use them in a way I had not seen before. If you’re interested in putting concepts and tools like Little’s Law, cumulative flow diagrams, delivery-time scatterplots and pull policies into action, this book is for you.

Other observations:

  • The book has a very helpful and clarifying discussion of classes of service, namely the difference between using CoS to commit to work (useful) and using it to prioritize committed work (hazardous for predictability).
  • It also has a particularly strong treatment of cumulative flow diagrams.
  • Daniel does a lot of myth debunking, which I appreciate. Examples: that work items need to be of the same size, or that kanban doesn’t have commitments.
  • The tone is firm and confident — you definitely know where Daniel stands on any issue — without being strident.

How We’re Using Blocker Clustering to Improve

I’ve been helping a team at Asynchrony improve using blocker clustering, a technique popularized by Klaus Leopold and Troy Magennis (presentation, blog post) that leverages a kanban system to identify and quantify the things that block work from flowing. It’s premised on the idea that blockers are not isolated events but have systemic causes, and that by clustering them by cause (and quantifying their cost), we can improve our work and make delivery more predictable.

The team recently concluded a four-week period in which they collected blocker data. At the outset of the experiment, here’s what I asked a couple of the team leaders to do:

  • Talk with your teammates about the experiment
  • Define “block” for your team
  • Minimally instrument your kanban system to gather data, including the block reason and duration

The first two were relatively simple: The team was up for it, and they defined “blocker” as anything that prevented someone from doing work they otherwise would have done. “Instrumenting the system” wasn’t as easy as it could have been, because the team uses a poorly implemented Jira instance, so they went outside the system and used post-it notes on a physical wall. They then kept a spreadsheet with additional data (duration, reason) to tie the blockers back to their Jira cards.

Over the next four weeks, the team collected 19 blockers, placing each post-it note on the wall in either an “internal” (caused by them) or “external” (caused by something outside the team, including the customer and dependent systems) column. We then gathered in a conference room to convene a blocker-analysis session to:

  • further cluster the blockers into more discrete categories
  • calculate how many days of delay (the enemy of flow!) the blockers caused
  • root-cause the internal categories
  • find out where to focus improvement efforts

The analysis session was eye-opening. We started with the two main columns (internal, external), and as we quickly discussed each blocker, sub-categories such as “dependency” and “waiting on info” emerged. Within minutes, we were able to see — and quantify the delay cost of — the team’s most egregious blockers.
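If you prefer a script or spreadsheet to sticky-note math, the tally amounts to grouping blockers by cluster and summing the delay. A sketch with hypothetical clusters and numbers (not the team’s actual data):

```python
from collections import defaultdict

# (cluster, days of delay) pairs, as read off the blocker post-its.
# Clusters and numbers here are hypothetical, for illustration only.
blockers = [
    ("external/dependency", 30),
    ("external/dependency", 25),
    ("external/waiting on info", 10),
    ("internal/environment", 4),
    ("external/dependency", 31),
    ("internal/environment", 6),
]

delay_by_cluster = defaultdict(int)
for cluster, days in blockers:
    delay_by_cluster[cluster] += days

# Biggest clusters first: that's where improvement effort pays off most
for cluster, days in sorted(delay_by_cluster.items(), key=lambda kv: -kv[1]):
    print(f"{cluster}: {days} days of delay")
```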

Some insights that the team came away with:

  • internal blockers caused 20 days’ worth of delay
  • external blockers caused 147 days’ worth of delay
  • the biggest blocker cluster accounted for 86 days of delay

That biggest blocker cluster now allows the team to have a conversation with the customer that goes something like this: “Over a four-week period, we had three blockers in this area. If this continues, that means you have a 75% chance each week of creating a blocker that costs you an average of 29 days. Is this acceptable to you?”

Ultimately, it may indeed be acceptable. But the customer is now aware of the approximate cost of problems and can manage risk in an informed way.
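For the curious, the arithmetic behind that conversation is straightforward (using the figures above: three blockers in that cluster over four weeks, 86 days of total delay):

```python
# Three blockers observed in a four-week window, 86 days of delay in total
observed_blockers = 3
observation_weeks = 4
total_delay_days = 86

blockers_per_week = observed_blockers / observation_weeks  # 0.75
avg_cost_days = total_delay_days / observed_blockers       # about 28.7, quoted as 29

print(f"{blockers_per_week:.0%} chance of a new blocker each week")
print(f"average cost per blocker: {avg_cost_days:.0f} days")
```

Strictly speaking, 0.75 blockers per week is a rate rather than a probability, but as a rough planning number for the customer conversation it serves the same purpose.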

For the internal blockers, we conducted a root-cause analysis (using the fishbone technique, though I’ll admit my “fish” leaves something to be desired!). The team can now go forward to address both the external blockers (through a conversation with the customer) and the internal blockers (through their own decisions).

Other lessons learned:

  • Some blockers turned out to be simply time spent chasing information about backlog items rather than true blockers of committed work, so the team added “… for committed work” to their blocker definition. (It’s important to understand your commitment point.)
  • Depending on how you want to address blockers, you might choose to sort them differently. For example, the team considered sorting its external blockers not by source but by which customer contact was responsible.

Kanban Iceberg Poster

Kanban Iceberg poster


What is Autonomy Support?

I wrote a few weeks ago about the advocacy program, our distributed peer-to-peer continuous-improvement program. One of the important components of the program is autonomy support. But what is that? As Daniel Pink notes in his book Drive:

Researchers found greater job satisfaction among employees whose bosses offered “autonomy support.” These bosses saw issues from the employee’s point of view, gave meaningful feedback and information, provided ample choice over what to do and how to do it, and encouraged employees to take on new projects.

In the advocacy program, autonomy-support meetings are an optional opportunity for employees to meet with executive management to give feedback on how the executive leaders can help the employee realize career goals in the organization. The meeting can be scheduled by the employee’s advocate, who also can be part of the meeting, acting as an intermediary or ambassador for the employee to the manager(s). Multiple managers may be part of the meeting, depending on which ones the advocate and employee feel are vital and able to help.

The dynamic should be one in which the traditional organizational structure is flipped upside-down:

Autonomy-support meetings

Therefore, rather than the traditional dynamic of the employee “working for” the manager, in the autonomy-support meeting the servant-leader — in this case, the role of executive leader — should have the mindset of “working for” the employee.

A good starting point for the discussion is the “autonomy-support feedback for executive leaders” section of the employee’s review. Basically, it covers whatever the employee needs executive leaders to do so that he or she can do the job better or reach goals. This might be a request for a different project or a role switch, more time to explore a particular skill or technology, or simply a clearer vision or clearer expectations. Premised on the executive leader’s commitments to the employee, the employee has the right to ask the executive leader for support toward various career-development goals, including timelines for when those things would occur.

Questions that the executive leader might want to ask:

  • How can I help you realize your goals in the next year?
  • By when would you like me to achieve these things for you?
  • In what areas have I failed to help you in the past, and how can I improve?
  • What kind of things would help you feel more engaged?
  • How can I help smooth your path toward mastery of certain skills?
  • What does success look like for you, and how can I help you succeed?