- Implement in the Learning Team the good practices you hear about in Agile Overview, then
- Implement in your future customer-facing team the good practices you experience in the Learning Team.
For new hires and veteran Asynchronites who need to learn the Asynchrony delivery "way," brush up on skills or learn new ones, the Learning Team is an internal delivery team that provides a safe place to learn proven practices and experiment with new ones. Unlike being thrown straight into a team without knowing what great delivery practices look like, the Learning Team is an exemplar team where people can learn healthy practices that they can take to other teams.
I’ve recently attended a couple of thought-expanding conferences and met many inspiring people (including a few back at the Asynchrony St. Louis office), so I’m sharing some of the recommendations that I’ve picked up from keynotes, sessions, hallway chats and pub discussions. Here are a few things that people have been talking about, some of which have been around for a few years:
- Antifragile (book), ALEConf
- Dixit (game), ALEConf
- Hanabi (game), ALEConf
- Thinkertoys (book), ALEConf
- Indxd.ink (tool), Alison Hawke, Asynchrony
- IDEO Deep Dive (video), Rich Sheridan, Lean Agile Scotland
- Wardley (value chain) Mapping (technique), Will Evans, Lean Agile Scotland
- Barry-Wehmiller (organization), Rich Sheridan, Lean Agile Scotland
- Zara, My Agile Role Model (video), Clarke Ching, Lean Agile Scotland
- Amy Cuddy TED talk on body language (video), Chris Matts, Lean Agile Scotland
- Temple Grandin (movie), Sal Freudenberg, Lean Agile Scotland
- Crucial Conversations (book), Clarke Ching, Lean Agile Scotland
- Made to Stick (book), Clarke Ching, Lean Agile Scotland
- The Goal (book), Clarke Ching, Lean Agile Scotland
- The Phoenix Project (book), Clarke Ching, Lean Agile Scotland
- Strategy Deployment Canvas (technique), Matt Barcomb and Cat Swetel, Lean Agile Scotland
- Pawel Brodzinski’s flow-based road mapping (technique), via Matt Barcomb, Lean Agile Scotland
- Featureban game (activity), Mike Burrows
- Divide the group into multiple small teams of 3-5 people.
- Have each team draw a 4×4 grid on a wall or large poster sheet, with column headings of R, A, I and D.
- Instruct the teams to write risks, assumptions, issues and dependencies on post-it notes and place them in the appropriate columns as fast as possible. The goal is to be the first team to have three items in any column or four in any row. The first team to do so should shout “Bingo!”
- When a team shouts Bingo, tell all teams to pause. Invite the winning team to announce and briefly describe each of their items.
- Resume play! Have the teams keep their post-it notes on the boards and continue to try to get a(nother) full column or row.
- The next team to shout Bingo must have all unique items (they cannot have the same items that the first winning team used).
- Repeat until the teams have covered most of their boards (usually by the third “Bingo”). Then do the final Bingo “coverall” — first team to cover all squares wins.
After you’ve generated many unique project concerns, unify all teams’ contributions into one board. At this point, you can discuss them in more depth or simply defer the discussion and mitigation strategy until later (but hopefully not too much later!).
Scrum gave us the Three Questions to help structure discussion at daily standup. These questions provide some idea of micro-goal setting and accountability for each team member and can be a healthy practice:
- What have you completed since the last meeting?
- What do you plan to complete by the next meeting?
- What is getting in your way?
For teams that are increasingly focused on optimizing flow, or teams that have simply fallen into a pattern of rote repetition and need a fresh approach, I offer what you might call “the new three questions,” inspired by Mike Burrows in his book Kanban from the Inside:
- How can we improve flow today?
- What is blocked and why?
- Where are bottlenecks forming?
A colleague observed that those questions sound like a mini retrospective, which is not a bad analogy insofar as they are about improvement, though perhaps not as backward facing; they focus on the present and near-future reality. They’re about making a plan to improve flow, with the scope being merely a day. I like the questions because they orient the team toward the work, rather than the worker. For teams that already follow the practice of making work visible, the new three questions are a natural complement to “walking the wall.” Furthermore, the answers to these questions over time can inform the conversation at operations review and risk review, helping the team analyze their work-in-progress limits and blocker clusters.
Like any practice, without attention to the “why” and context, they can lead to mindless repetition. But if flow is important to you — and it should be — “the new three questions” can help you improve it with a simple twist on an old reliable pattern.
Daniel Vacanti’s new book, Actionable Agile Metrics for Predictability, is a welcome addition to the growing canon of thoughtful, experience-based writing on how to improve service delivery. It joins David Anderson’s (Kanban: Successful Evolutionary Change for Your Technology Business) and Mike Burrows’s (Kanban from the Inside) books in my list of must-reads on the kanban method, complementing those works with deeper insight into how to use metrics to improve flow.
Daniel’s message about orienting metrics to promote predictable delivery and flow — which he defines as “the movement and delivery of customer value through a process” — is primarily grounded in his experience helping Siemens HS. He includes the case study (which has been published previously and is valuable reading in itself) at the end of the book, keeping the rest of the book free of too many customer references even though he is clearly drawing on that pragmatic experience.
As someone who for several years has been helping teams and organizations improve using the metrics Daniel talks about, I learned a tremendous amount. One reason is that Daniel is particularly keen to clarify language, which I appreciate not only as a former English major (and not just as a pedant!) but because it helps us carefully communicate these ideas to teams and management, some of whom may be using these metrics in suboptimal ways or, worse, perverting them so as to give them a bad name and undermine their value. Some examples: the nuanced difference between control charts and scatterplots, and clear definitions around Little’s Law (and violations thereof), especially as related to projections and cumulative flow diagrams. I certainly gained a lot of new ideas, and Daniel’s explanations are so thorough that I suspect even novice coaches, managers, team leaders and team members won’t be overwhelmed.
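For readers new to it, Little's Law says that, for a reasonably stable system, average WIP equals average throughput times average cycle time. A tiny sketch of that relationship, using invented numbers (not figures from the book):

```python
# Little's Law (for a stable system): avg WIP = avg throughput * avg cycle time.
# The numbers below are invented for illustration only.

avg_wip = 12.0         # average items in progress at any time
avg_throughput = 3.0   # items finished per week

# Rearranged: average cycle time = WIP / throughput
avg_cycle_time = avg_wip / avg_throughput
print(f"Average cycle time: {avg_cycle_time:.1f} weeks")  # 4.0 weeks
```

The practical payoff is the rearrangement: if you know two of the three quantities, you can estimate the third, which is why violations of the law's assumptions (e.g. wildly unstable WIP) matter so much for projections.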
As for weaknesses, I felt that the chapter on the Monte Carlo method lacked the same kind of depth as the other chapters. And I came away wishing that Daniel had included some diagrams showing projections using percentiles from scatterplot data. But those are minor plaints for a book that constantly had me jotting notes in my “things to try” list.
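For context, Monte Carlo forecasting of delivery typically resamples historical throughput many times to build a distribution of possible completion times. Here is a minimal sketch under that assumption; the throughput history, backlog size and helper function are my own invention, not code or data from the book:

```python
import random

# Minimal Monte Carlo delivery forecast (illustrative sketch, invented data):
# resample historical weekly throughput to estimate weeks to clear a backlog.
random.seed(42)

weekly_throughput_history = [3, 5, 2, 4, 6, 3, 4]  # invented sample data
backlog = 30  # items remaining

def weeks_to_finish(history, items):
    """Simulate one possible future by resampling historical weeks."""
    weeks, done = 0, 0
    while done < items:
        done += random.choice(history)  # pick a past week's throughput at random
        weeks += 1
    return weeks

trials = sorted(weeks_to_finish(weekly_throughput_history, backlog)
                for _ in range(10_000))
p85 = trials[int(0.85 * len(trials))]  # 85th-percentile forecast
print(f"85% of simulated futures finished within {p85} weeks")
```

Reporting a percentile rather than an average is the point: it turns “when will it be done?” into a probabilistic answer you can negotiate around.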
Overall, I loved how Daniel pulled together (no pun intended), for the purpose of flow, several metrics and tools that have often been independently implemented and used and whose purpose — in my experience — was not completely understood. The book unifies these and helps the reader see the bigger picture of why to use them in a way I had not seen before. If you’re interested in putting concepts and tools like Little’s Law, cumulative flow diagrams, delivery-time scatterplots and pull policies into action, this book is for you.
- The book has a very helpful and clarifying discussion of classes of service, namely the difference between using CoS to commit to work (useful) and using it to prioritize committed work (hazardous for predictability).
- It also had a particularly strong treatment of cumulative flow diagrams.
- Daniel does a lot of myth debunking, which I appreciate. Examples: work items need to be of the same size, kanban doesn’t have commitments.
- The tone is firm and confident — you definitely know where Daniel stands on any issue — without being strident.
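On cumulative flow diagrams: a CFD plots cumulative arrivals into and departures out of the workflow over time, and at any moment the vertical distance between the two lines is the WIP. A minimal sketch of that relationship, using invented daily counts (not data from the book):

```python
# Cumulative flow basics (invented data, just to show the relationships):
# track cumulative arrivals and departures; the vertical gap is WIP.
arrivals   = [2, 3, 1, 4, 2, 3]   # items entering the workflow each day
departures = [0, 1, 2, 2, 3, 3]   # items finishing each day

cum_arrivals, cum_departures = [], []
total_a = total_d = 0
for a, d in zip(arrivals, departures):
    total_a += a
    total_d += d
    cum_arrivals.append(total_a)
    cum_departures.append(total_d)

# WIP on each day = cumulative arrivals minus cumulative departures
wip = [a - d for a, d in zip(cum_arrivals, cum_departures)]
print("WIP per day:", wip)
```

The horizontal distance between the same two lines gives an approximate cycle time, which is why the CFD rewards careful reading rather than a quick glance.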
I’ve been helping a team at Asynchrony improve using blocker clustering, a technique popularized by Klaus Leopold and Troy Magennis (presentation, blog post) that leverages a kanban system to identify and quantify the things that block work from flowing. It’s premised on the idea that blockers are not isolated events but have systematic causes, and that by clustering them by cause (and quantifying their cost), we can improve our work and make delivery more predictable.
The team recently concluded a four-week period in which they collected blocker data. At the outset of the experiment, here’s what I asked a couple of the team leaders to do:
- Talk with your teammates about the experiment
- Define “block” for your team
- Minimally instrument your kanban system to gather data, including the block reason and duration
The first two were relatively simple: The team was up for it, and they defined “blocker” as anything that prevented someone from doing work if they had wanted to. “Instrumenting the system” wasn’t as easy as it could’ve been, because the team uses a poorly implemented Jira instance, so they went outside the system and used post-it notes on a physical wall. They then kept a spreadsheet with additional data (duration, reason) to tie the blockers back to their Jira cards.
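If a team preferred a lightweight script to a spreadsheet, the same data could be captured with a simple record like the one below. The field names, card IDs and numbers are hypothetical illustrations, not the team's actual data:

```python
from dataclasses import dataclass

# Hypothetical blocker record mirroring the spreadsheet's columns
# (field names, card IDs and values are invented for illustration).
@dataclass
class Blocker:
    card_id: str       # ties the post-it back to its Jira card
    reason: str        # why the work was blocked
    days_blocked: int  # duration of the block, in days
    external: bool     # caused by something outside the team?

blockers = [
    Blocker("PROJ-101", "waiting on info", 4, external=True),
    Blocker("PROJ-107", "dependency", 9, external=True),
    Blocker("PROJ-112", "environment down", 2, external=False),
]

total_delay = sum(b.days_blocked for b in blockers)
print(f"Total delay: {total_delay} days")  # 15 days
```

Keeping reason and duration on every record is what makes the later clustering and cost analysis possible.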
Over the next four weeks, the team collected 19 blockers, placing each post-it note on the wall in either an “internal” (caused by them) or “external” (caused by something outside the team, including customer and dependent systems) column. We then gathered in a conference room to convene a blocker-analysis session to:
- further cluster the blockers into more discrete categories
- calculate how many days of delay (the enemy of flow!) the blockers caused
- root-cause the internal categories
- find out where to focus improvement efforts
The analysis session was eye-opening. We started with the two main columns (internal, external), and as we quickly discussed each blocker, sub-categories such as “dependency” and “waiting on info” emerged. Within minutes, we were able to see — and quantify the delay cost of — the team’s most egregious blockers:
- internal blockers caused 20 days’ worth of delay
- external blockers caused 147 days’ worth of delay
- the biggest blocker cluster accounted for 86 days of delay
That biggest blocker cluster now allows the team to have a conversation with the customer that goes something like this: “Over a four-week period, we had three blockers in this area. If this continues, that means you have a 75% chance each week of creating a blocker that costs you an average of 29 days. Is this acceptable to you?”
Ultimately, it may indeed be acceptable. But the customer is now aware of the approximate cost of problems and can manage risk in an informed way.
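The figures in that conversation follow from simple arithmetic on the cluster data: three blockers in four weeks is a 75% chance per week, and 86 days of delay spread across three blockers averages about 29 days each. A quick check:

```python
# Reproducing the arithmetic behind the customer conversation above.
blockers_in_cluster = 3
weeks_observed = 4
cluster_delay_days = 86

weekly_chance = blockers_in_cluster / weeks_observed         # 0.75
avg_delay = round(cluster_delay_days / blockers_in_cluster)  # ~29 days

print(f"{weekly_chance:.0%} chance per week, ~{avg_delay} days per blocker")
```

It is back-of-the-envelope math, but that is exactly what makes the conversation with the customer quick and concrete.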
For the internal blockers, we conducted a root-cause analysis (using the fishbone technique, though I’ll admit my “fish” leaves something to be desired!). The team can now go forward to address both the external blockers (through a conversation with the customer) and the internal ones (through their own decisions).
Other lessons learned:
- Some blockers turned out to be simply time spent chasing information about backlog items rather than true blockers of committed work, so the team added “… for committed work” to their blocker definition. (It’s important to understand your commitment point.)
- Depending on how you want to address blockers, you might choose to sort them differently. For example, the team considered sorting its external blockers not by source but by which customer contact was responsible.