Service-Delivery Review: The Missing Agile Feedback Loop?

I’ve been working for many years with software-delivery teams and organizations, most of which use the standard agile feedback loops. Though the product demo, team retrospective and automated tests provide valuable awareness of health and fitness, I have seen teams and their stakeholders struggle to find a reliable construct for an important area of feedback: the fitness of their service delivery. I’m increasingly seeing that the service-delivery review provides the forum for this feedback.

What’s the problem?

Software delivery (and knowledge work in general) consists of two components, one obvious — product — and one not so obvious — service delivery.  I’ve often used the restaurant metaphor to describe this: When you dine out, you as the customer care about the food and drink (product) but also about how the meal is delivered to you (service delivery). That “customer” standpoint is one dimension of the quality of these components — we might call it an external view. The other is the internal view — that of the restaurant staff. They, too, care about the product and service delivery, but from a different view: Is the food fresh, kept in proper containers, cooked at the right temperatures? Do the staff work well together, complement each other’s skills, treat each other respectfully (allowing, perhaps, for the occasional angry outburst from the chef, excusable on account of “artist’s temperament”!)? So we have essentially two dimensions, each with two values: Component (Product and Service Delivery) and Viewpoint (External and Internal).
In software delivery, we have feedback loops that answer three of these four questions, and we have more-colloquial terminology for the internal-external dimension (“build the thing right” and “build the right thing”).
The problem is that we typically don’t have a dedicated feedback loop for properly understanding how fit for purpose our service delivery is. Yet that is often just as vital a concern for our customers as the fitness of the product — sometimes even more so. (One executive sponsor I worked with noted that he would rather attend a service-delivery review than a demo.) We may touch on things like the team’s velocity in the course of a demo, but we lack a lightweight structure for having a constructive conversation about this concern with the customer. (The team may discuss ways to go faster in a retrospective, but without the customer present, they can’t have a collaborative discussion about speed and tradeoffs, nor about the customer’s true expectations and needs.)

A Possible Solution

The kanban cadences include something called a Service-Delivery Review. I’ve been incorporating it to help address teams’ inability to have a conversation about their service-delivery fitness, and in some contexts it appears to provide exactly what they need.
David Anderson, writing in 2014, described the review as:
  • Usually a weekly (but not always) focused discussion between a superior and a subordinate about demand, observed system capability and fitness for purpose
  • Comparison of capability against fitness criteria metrics and target conditions, such as lead time SLA with 60 day, 85% on-time target
  • Discussion & agreement on actions to be taken to improve capability
The way that I define it is based on that definition with minor tweaks:
A regular (usually weekly) quantitatively-oriented discussion between a customer and delivery team about the fitness for purpose of its service delivery.
In the review, teams discuss any and all of the following (sometimes using a service-delivery review canvas):
  • Delivery times (aka Cycle/Lead/Time-In-Process) of recently completed work and tail length in delivery-time distribution
  • Blocker-clustering results and possible remediations
  • Risks and mitigations
  • Aging of work-in-progress
  • Work-type mix/distribution (e.g., % allocation to work types)
  • Service-level expectations of each work item type
  • Value demand ratio (ratio of value-added work to failure-demand work)
  • Flow efficiency trend
These are not performance areas that teams typically discuss in existing feedback loops like retrospectives and demos, but they’re quite powerful: they capture what’s important to most customers and, in my experience, are the source of some of the most unnecessarily painful misunderstandings. Moreover, because they are both quantitative and generally fitness-oriented, they help teams and customers build trust together and proactively manage toward greater fitness.
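Several of these measures are simple to compute once you have completion data for work items. Here is a minimal sketch; the item fields, dates and nearest-rank percentile choice are my own illustrative assumptions, not a standard format:

```python
import math
from datetime import date

# Illustrative completed work items; the fields ("started", "finished",
# "active_days") are an assumed shape, not a standard.
completed = [
    {"type": "feature", "started": date(2017, 5, 1), "finished": date(2017, 5, 9),  "active_days": 3},
    {"type": "feature", "started": date(2017, 5, 2), "finished": date(2017, 5, 6),  "active_days": 2},
    {"type": "defect",  "started": date(2017, 5, 3), "finished": date(2017, 5, 20), "active_days": 4},
    {"type": "feature", "started": date(2017, 5, 8), "finished": date(2017, 5, 12), "active_days": 3},
]

# Delivery time (a.k.a. cycle time / time-in-process) per item, in days.
delivery_times = sorted((i["finished"] - i["started"]).days for i in completed)

def percentile(sorted_values, p):
    """Nearest-rank percentile; coarse, but fine for a review conversation."""
    k = math.ceil(p / 100 * len(sorted_values)) - 1
    return sorted_values[max(0, k)]

# Supports statements like "85% of items finish within N days"; the tail
# beyond this point (here, the 17-day defect) is worth discussing too.
p85 = percentile(delivery_times, 85)

# Flow efficiency: share of elapsed time each item was actively worked on.
flow_efficiency = [i["active_days"] / (i["finished"] - i["started"]).days
                   for i in completed]
```

Plotting the full delivery-time distribution, rather than reporting a single average, is what makes the tail-length conversation possible.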

Service-delivery reviews are relatively easy to do, and in my experience provide a high return on time invested. The prerequisites to having them are to:

  1. Know your services
  2. Discover or establish service-delivery expectations
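The second prerequisite can be made concrete by phrasing expectations the way they tend to come up in a review ("85% of defects within 5 days") and checking recent results against them. A minimal sketch, with hypothetical data structures and numbers:

```python
# Hypothetical service-level expectations per work item type.
sles = {
    "feature": {"target_days": 20, "percent_on_time": 85},
    "defect":  {"target_days": 5,  "percent_on_time": 85},
}

def on_time_rate(delivery_days, target_days):
    """Fraction of completed items that met the target delivery time."""
    return sum(1 for d in delivery_days if d <= target_days) / len(delivery_days)

# Recent defect delivery times in days (illustrative numbers).
recent_defects = [2, 4, 3, 9, 5, 4, 6, 3]

rate = on_time_rate(recent_defects, sles["defect"]["target_days"])
fit = rate * 100 >= sles["defect"]["percent_on_time"]
# Here rate is 0.75, below the 85% expectation: defect delivery is not
# currently fit for purpose, which is exactly the kind of finding to
# bring to the review for a collaborative conversation.
```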

Janice Linden-Reed very helpfully outlined in her Kanban Cadences presentation the practical aspects of the meeting, including participants, questions to ask and inputs and outputs, which is a fine place to start with the practice.

Afterword #1: In some places I’ve been, so-called “metrics-based retrospectives” have been a sort of precursor to the service-delivery review, as they include a more data-driven approach to team management. Those are a good start but ultimately don’t provide the same benefit as a service-delivery review because they typically don’t include the stakeholder who can properly close the feedback loop — the customer.

Afterword #2: Andy Carmichael encourages organizations to measure agility by fitness for purpose, among other things, rather than practice adoption. The service-delivery review is a feedback loop that explicitly looks at this, and one that I’ve found is filling a gap in what teams and their customers need.

Afterword #3: I should note that you don’t have to be in the business of software delivery to use a service-delivery review. If you, your team, your group or your organization provides a service of any kind (see Kanban Lens and Service-Orientation), you probably want a way to learn about how well you’re delivering that service. I find that the Service-Delivery Review is a useful feedback loop for that purpose.

[Edited June 12, 2017] Afterword #4 (!): Mike Burrows helpfully and kindly shared his take on the service-delivery review, which he details in his new book, Agendashift: clean conversations, coherent collaboration, continuous transformation:

Service Delivery Review: This meeting provides regular opportunities to step back from the delivery process and evaluate it thoroughly from multiple perspectives, typically:
• The customer – directly, via user research, customer support, and so on
• The organisation – via a departmental manager, say
• The product – from the product manager, for example
• The technical platform – eg from technical support
• The delivery process – eg from the technical lead and/or delivery manager
• The delivery pipeline – eg from the product manager and/or delivery manager

I include more qualitative stuff than you seem to do, reporting on conversations with the helpdesk, summarising user research, etc

Introducing: The NoEstimates Game

I’ve been play-testing a new simulation game that I developed, which I’m calling the NoEstimates Game. Thanks to my friends and colleagues at Universal Music Group, Asynchrony and the Lean-Kanban community (Kanban Leadership Retreat, FTW!), I’ve gotten it to a state in which I feel comfortable releasing it for others to play and hopefully improve.

The objective is to learn, through experimentation, which factors influence delivery time and by how much.

[Jan. 3, 2017 update: Game materials are now available on GitHub]

Download these materials in order to play:

If you’d like to modify the original game elements, here they are:

I’m releasing it under a Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License, so please feel free to share and modify it, and if possible, let me know how I can improve it.

Kanban protips

Here are a few things I have observed about kanban that I didn’t know when I started:

  1. Although kanban roughly translates to “signal card,” the cards on a team wall are not the signals. The empty spaces are the signals (that you have capacity). The fixed capacity is represented by the spaces/slots, and not the cards themselves.
  2. It’s okay to distinguish cards in a queue as being “done” and not done. Only in a perfect cadence will you not need the concept of (and corresponding way to identify) some items in a queue being “done” and some not.
  3. Many value-stream maps show the customer need at the beginning of the cycle, but it’s useful to visualize the customer need at the end (perhaps as a circle) in order to show how demand is pulled through the system and not pushed.
  4. Cumulative-flow diagrams and run charts (showing cycle time for each story) are the most powerful and helpful metrics to use, and you can implement them by hand as big visible charts. They don’t need to be created in electronic tools.
  5. Be sure you visualize the entire value stream (“concept to cash,” including deployment) somewhere or you will simply optimize on your development cycle.
  6. To visualize a minimal marketable feature moving, create an MMF horizontal lane above the normal story cards or simply make a temporary horizontal lane for that feature.
  7. You don’t have to be “advanced” in agile or any other philosophy/methodology to do kanban. I used to believe this, but David Anderson (et al) have been helpful in conveying the idea that you start by modeling and making visible what you already do and improving from there.
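Protip 1 in particular lends itself to a toy model in which the pull signal is the empty slot, not the card. This is only an illustrative sketch; the class and method names are my own, not standard kanban tooling:

```python
# Toy model of protip 1: the signal is the empty slot, not the card.
class Column:
    def __init__(self, name, wip_limit):
        self.name = name
        self.wip_limit = wip_limit  # fixed capacity = number of slots
        self.cards = []

    def open_slots(self):
        """Each empty slot is a pull signal: 'we have capacity here'."""
        return self.wip_limit - len(self.cards)

    def pull(self, upstream):
        """Pull a card only when an empty slot signals capacity."""
        if self.open_slots() > 0 and upstream.cards:
            self.cards.append(upstream.cards.pop(0))
            return True
        return False

backlog = Column("backlog", wip_limit=100)
backlog.cards = ["A", "B", "C"]
dev = Column("dev", wip_limit=2)

dev.pull(backlog)           # an empty slot signals capacity: card A moves
dev.pull(backlog)           # second slot fills: card B moves
pulled = dev.pull(backlog)  # no empty slot, no signal, no pull
```

Notice that nothing about the cards themselves triggers movement; only the appearance of an open slot does.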

Cumulative-flow diagram can double as retrospective timeline

If you’re using kanban in your environment, you probably use a cumulative-flow diagram. It’s a handy tool to track kanban metrics like cycle time and to quickly see bottlenecks. In addition to all the kanban goodness it gives you, it can also double as a timeline that you can use in your retrospectives.

Whether you use a physical version posted on your kanban board (like my current team does) or an electronic one, you can annotate dates with important events, such as when:

  • A team member joins or leaves
  • An unexpected technical problem surfaces, like a major refactoring or bug
  • The team decides to change something about its kanban, like increase a WIP limit
  • The team makes a positive change, like switching pairs more often

It’s pretty easy to do, especially if you have a physical chart that you update during your standup meeting.

Then, when you have a retrospective, bring the diagram along to help you remember what happened during the period you’re retrospecting on. If you’re anything like me and the teams I’ve been on, you have a hard time remembering what happened beyond yesterday, so it’s handy to have a reference. Having this time-based information will help you make more objective decisions about how to improve, since you won’t be guessing about why your cycle time lengthened over the last week, or why you decided to decrease a WIP limit a month ago.
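If you track your CFD electronically, the bands and the timeline annotations can come from the same data. A minimal sketch of the idea; the item and annotation shapes here are my own illustration:

```python
from datetime import date, timedelta

# Day each item entered each state (items never move backward on a CFD).
items = [
    {"started": date(2017, 6, 1), "done": date(2017, 6, 5)},
    {"started": date(2017, 6, 2), "done": date(2017, 6, 8)},
    {"started": date(2017, 6, 4), "done": None},  # still in progress
]

# Timeline annotations: the events you'd mark on the physical chart.
annotations = {
    date(2017, 6, 5): "WIP limit raised from 2 to 3",
    date(2017, 6, 7): "New team member joined",
}

def cfd_row(day):
    """One day's cumulative counts, plus any retrospective note."""
    started = sum(1 for i in items if i["started"] <= day)
    done = sum(1 for i in items if i["done"] and i["done"] <= day)
    return {"date": day, "started": started, "done": done,
            "in_progress": started - done,
            "note": annotations.get(day, "")}

rows = [cfd_row(date(2017, 6, 1) + timedelta(days=d)) for d in range(8)]
```

Reading the rows side by side in a retrospective lets you correlate a change in the in-progress band with the event noted on the same date.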

Kanban and linear work

Alan Shalloway is very helpfully documenting some Myths of Kanban. One myth that caught my eye in particular was “Kanban suggests linear work and requires too many handoffs.” I’ll be talking about this aspect of Kanban in my presentation on whole-team approach at the Agile and Beyond conference, so I liked what Alan wrote — as an aside, what he calls linear, I call sequential (contrasted with simultaneous):

Lean manufacturing may assume linear work, but not Lean software development (nor Kanban). Kanban boards may often appear to be linear but Kanban boards reflect the way people are doing the work. Hence, if people are working linearly, the board will reflect that. However, Kanban boards can also reflect non-linear work.  One must also recognize that a Kanban board does not reflect the people doing the work but rather reflects the flow of the work being done. Hence if a board looks like:

Backlog — Analysis — Design — Code — Test — Done

it is not suggesting that the work be done by different people and hand things off. It just shows the states the work progresses through. In this case, it’s on the backlog, in analysis, being designed, coded, tested or completed. It could very well be the same person doing the work.

The second misconception is that the board tells you to break things down into these steps. Actually, it doesn’t (or shouldn’t). The board should be a reflection of the work being done. So different columns on the board should reflect the different steps the team is doing. If the team swarms on multiple steps, then the Kanban board should only have one column for that. Essentially the Kanban board has one column for each type of work the team is doing. The explicit policy for how it gets to that step and out of that step also needs to be stated, but is not necessarily written on the board.

Bottom line – if you don’t like the process that you see on the board, change it (and then update the board). The board is there merely to reflect on your work so you can better change how you work.

I think Alan is right when he says, essentially, that the board is “identity-neutral” as to who is doing the work. The problem that I have seen is breaking out the queues in such a way as to encourage non-simultaneous work behavior (e.g., separating code and test). This is why I still prefer fewer queues to many.

FLAVE means more than just “visually outstanding”

The software-development world needs another acronym like it needs another methodology. Yet, if you’re anything like me — someone who loathes acronyms, by the way — you find that some acronyms are actually useful for remembering models or concepts (for instance, it’s easy to recall all of the elements of Bill Wake’s INVEST when creating stories).

Lately, it seems that teams could use some help remembering some of the core principles of Kanban, so I’ll offer up my own acronym to help myself as much as anyone else (definitions taken from David Anderson):

  • F | Measure and manage Flow: Track work items to see if they are proceeding at a steady, even pace.

  • L | Limit work-in-progress: Set agreed-upon limits to how many work items are in progress at a time.

  • A | Adapt the process: Adapt the process using ideas from Systems Thinking, W.E. Deming, etc.

  • V | Visualize the workflow: Represent the work items and the workflow on a card wall or electronic board.

  • E | Make process policies Explicit: Agree upon and post policies about how work will be handled.

The concepts aren’t in any particular order, except to create the acronym word. And don’t tell me that FLAVE isn’t a word. This guy just doesn’t know how to spell it.

Regularly releasing potentially shippable software

Brian Marick wrote a couple of years ago that “Teams that don’t produce potentially shippable software at the end of each iteration are likely in trouble.”

With more and more teams using a kanban approach to developing software, it would seem that producing potentially shippable software on a regular basis would be more common. But is it? Does your team produce potentially shippable software at the end of each iteration? Why or why not? What can we do to make it the case?

Kanban requires a rigorous dedication to building software. If your “agile circumstances” are less than ideal — and really, how often do you have an ideal situation? — such as an unengaged customer, nebulous deliverables or uncertain deadlines, you need to be all the more rigorous. Build in practices that keep the team honest, like a regular demo (even if the customer doesn’t attend). I’ve seen too many teams burn themselves by waiting until the last week of the project to create a CI build server or to see if they could cut a release. If the team releases potentially shippable software starting after the first week of the project and continuing regularly, they’ll save themselves a lot of headaches and reduce the risk of a nightmare end to the project. And they’ll focus on giving their customer something of value each week, instead of what amounts to a bunch of work in progress at the end of the project.