Profiles in Cost-of-Delay: “Intangible Fixed-Date”

TL;DR
  • The “Big Four” cost of delay profiles (a.k.a. archetypes) — Expedite, Fixed-Date, Standard Urgency, Intangible — are usually sufficient.
  • In my personal kanban, I have seen a new archetype emerge, one that has aspects of both Intangible and Fixed-Date, which I’m calling “Intangible Fixed-Date.”
  • This profile is less for the purpose of selection and more for scheduling.
  • So far, the main application of “Intangible Fixed-Date” for me is buying airfare, whose cost-of-delay curve fits none of the “Big Four” curves.
  • I use a couple of features in Kanbanize to deal with this new profile slightly differently from the other profiles, namely by creating a separate swim lane and using two dates (rather than one).
  • Takeaway: “Listen” to your data and to the patterns and behavior of your work items, and be flexible enough to adapt and create new profiles when they emerge.

I routinely use cost-of-delay to assist in scheduling, selecting and sequencing work, both in professional settings and in my own personal life. The “Big Four” cost-of-delay profiles (aka archetypes) promoted in the Kanban community — Expedite, Fixed-Date, Standard Urgency, Intangible — are usually sufficient for the work that organizations, teams and I personally need to handle. However, lately in my personal kanban, I have seen a new archetype emerge, one that has aspects of both Intangible and Fixed-Date, which I’m calling “Intangible Fixed-Date.”

Basically, the new profile has arisen from the need to handle airfare purchasing, something that I do somewhat often. Since I’m not independently wealthy and I like to do right by my employer when I’m traveling on business, I care about the cost of airfare. Anyone who travels knows that there is some optimal time to buy airfare (even if he or she doesn’t know exactly when that is). I want to be able to research fares just-in-time to get the best deal. However, if I model the task as a simple fixed-date item, I either have to use the true date by which the cost-of-delay is unacceptable (that is, the day I need to be somewhere) or use a fake deadline. I typically added the task to my Fixed-Date lane with its true deadline and would occasionally check it to see whether it “felt” like I should act, but it was too easy to forget about it until past a responsible moment, even if it wasn’t the last responsible moment. Which brings us to the crux of the problem: this type of item has two last responsible moments, the financially last responsible moment and the absolute last responsible moment.
Let’s review the cost-of-delay curves of the existing profiles:

[Figure: Expedite cost-of-delay curve]

[Figure: Standard Urgency cost-of-delay curve]

[Figure: Intangible cost-of-delay curve]

[Figure: Fixed-Date cost-of-delay curve]

According to the reliable Andy Carmichael and David Anderson in their Essential Kanban Condensed, Fixed-Date items “have high impact but only if you miss the deadline. The scheduling imperative here is to make sure you start before the last responsible moment and deliver before the deadline.” Also, you don’t get any economic benefit from completing the work before the deadline. In software delivery, this usually means understanding how long the work can take you to complete (see your handy delivery-time histogram), then backing up from the deadline based on your risk tolerance. However, booking airfare doesn’t take much time (I can do it in a few minutes) — so that’s not a concern. But the deadline only represents the absolute last responsible moment, so this curve is insufficient.
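For the software case, here is a minimal sketch of that back-up calculation, assuming you have historical delivery times (in calendar days) for similar work items and a chosen confidence level; the numbers are illustrative only.

```python
from datetime import date, timedelta

def start_by(deadline: date, delivery_times_days: list[int], confidence: float = 0.85) -> date:
    """Back up from the deadline by a delivery-time buffer we can hit at `confidence`.

    delivery_times_days: historical delivery times (calendar days) for similar work items.
    confidence: fraction of historical items we'd expect to finish within the buffer.
    """
    times = sorted(delivery_times_days)
    buffer_days = times[min(len(times) - 1, int(confidence * len(times)))]
    return deadline - timedelta(days=buffer_days)

# With these (made-up) historical times, the 85th-percentile delivery time is 12 days,
# so for a March 31 deadline we'd want to start no later than March 19.
print(start_by(date(2018, 3, 31), [3, 5, 5, 7, 8, 9, 12, 15]))
```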

If airfare price increases were linear, I could use a standard urgency profile blended with the fixed-date. But some basic research shows that price increases have an inflection point somewhere around the 30-day mark, although the lowest prices may occur earlier:
[Figure: CheapAir 2013 domestic airfare prices by days before departure]
Other factors matter, too, of course (seasonality, specific locations). But I don’t need to get too deep into modeling that yet — I just need a better solution. That means incorporating the Intangible curve, whose items “have an apparently low urgency,” though “a rise in urgency – possibly a steep rise – will happen in the future.”
My airfare purchases exhibit aspects of both curves, though neither is sufficient. Unlike a typical intangible item, I actually do know when that future date will be, along with a rough idea of when the “rise in urgency” will happen. And unlike a typical fixed-date item, there is some economic benefit from doing it sooner than the deadline.
[Figure: Intangible Fixed-Date cost-of-delay curve]
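To make the shape concrete, here is a toy model of the curve, assuming a simplified world in which the cost of waiting is roughly flat outside the ~30-day inflection point, rises roughly linearly inside it, and spikes once the departure date passes. The dollar figures are invented for illustration, not real fare data.

```python
def intangible_fixed_date_cod(days_until_departure: int,
                              inflection_days: int = 30,
                              daily_increase: float = 10.0,
                              missed_trip_cost: float = 10_000.0) -> float:
    """Illustrative cost of delaying the purchase, by days remaining before departure.

    Far outside the inflection point, waiting costs ~nothing (the intangible part);
    inside it, the cost rises roughly linearly; past the date itself, it spikes
    (the fixed-date part). All dollar figures are made up.
    """
    if days_until_departure < 0:
        return missed_trip_cost                     # missed the trip entirely
    if days_until_departure >= inflection_days:
        return 0.0                                  # intangible region: no urgency yet
    return (inflection_days - days_until_departure) * daily_increase  # rising region

for d in (60, 30, 14, 7, 1, -1):
    print(d, intangible_fixed_date_cod(d))          # 0, 0, 160, 230, 290, 10000
```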

Now that I understand the cost-of-delay curve, I need to be able to handle it in my kanban system. One approach that I’m trialing is to create a separate swim lane — “Intangible Fixed-Date” — and use a second date to signal readiness (the second date being my optimal time to commit).
Here’s how it works: Say I find out on Sep. 6 that I have to be in Chicago on Nov. 6. That’s two months away. Given that airfare cost won’t typically rise until about 30 days out, I don’t want to worry about this yet, which is to say, I don’t want this card in my Requested column yet. So I set up a rule in Kanbanize (my work-visualization tool of choice) to keep it in the backlog until 35 days out (the five-day buffer allows me some flexibility), at which point the rule moves the card from the backlog to Requested.
It has the ultimate deadline — Nov. 6 — displayed on the card. But since it’s in its own swim lane, I have an explicit policy that I begin work on these items as soon as they appear in Requested.
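Kanbanize’s automation specifics aside, the date arithmetic behind the rule is trivial. Here is a hypothetical sketch of it; the 35-day lead constant and the function name are my own, not anything built into the tool.

```python
from datetime import date, timedelta

SURFACE_LEAD_DAYS = 35  # ~30-day fare inflection point plus a five-day buffer

def surface_date(deadline: date, lead_days: int = SURFACE_LEAD_DAYS) -> date:
    """Date on which the card should move from the backlog into Requested."""
    return deadline - timedelta(days=lead_days)

# Found out Sep. 6 that I need to be in Chicago Nov. 6:
print(surface_date(date(2017, 11, 6)))  # 2017-10-02: the card surfaces in Requested then
```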

[Screenshot: the Intangible Fixed-Date swim lane in Kanbanize]

If I really want to be disciplined, I can then set a service-delivery expectation that sets the bar for how well I handle these items (e.g., 90% of Intangible Fixed-Date items will be completed within five days) and analyze my performance at my personal service-delivery review. But now I fear I’m exposing just how geeky I am (if that wasn’t clear already)!
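Checking such an expectation is a small calculation. Here is a rough sketch, assuming you have the completion times (in days) of recent Intangible Fixed-Date items; the sample data is made up.

```python
def meets_expectation(delivery_times_days: list[int],
                      target_days: int = 5,
                      target_fraction: float = 0.90) -> bool:
    """True if at least `target_fraction` of items finished within `target_days`."""
    within = sum(1 for t in delivery_times_days if t <= target_days)
    return within / len(delivery_times_days) >= target_fraction

# e.g., 9 of the last 10 items finished within five days, so the expectation is met:
print(meets_expectation([1, 2, 2, 3, 3, 4, 4, 5, 5, 8]))  # True
```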

So what’s the takeaway? Well, you might find value in this “new” cost-of-delay profile (if you need to book airfare, or to plan birthdays or anniversaries, which follow a similar curve). But abstracting out a bit, the idea is that it’s helpful to pay attention to — “listen” to — your data and to the patterns and behavior of your work items, and to be flexible enough to adapt and create new profiles when they emerge. Pursuing incremental, evolutionary change is one of the underlying principles of the kanban method; improving using models and experiments is one of its core practices.

Special thanks to Prateek Singh, Josh Arnold and Mike Burrows for their early feedback in the Lean Agile and Beyond Slack community.

 


Service-Delivery Review: The Missing Agile Feedback Loop?

I’ve been working for many years with software-delivery teams and organizations, most of which use the standard agile feedback loops. Though the product demo, team retrospective and automated tests provide valuable awareness of health and fitness, I have seen teams and their stakeholders struggle to find a reliable construct for an important area of feedback: the fitness of their service delivery. I’m increasingly seeing that the service-delivery review provides the forum for this feedback.

What’s the problem?

Software delivery (and knowledge work in general) consists of two components, one obvious — product — and one not so obvious — service delivery. I’ve often used the restaurant metaphor to describe this: When you dine out, you as the customer care about the food and drink (product) but also about how the meal is delivered to you (service delivery). That “customer” standpoint is one dimension of the quality of these components — we might call it an external view. The other is the internal view — that of the restaurant staff. They, too, care about the product and service delivery, but from a different view: Is the food fresh, kept in proper containers, cooked at the right temperatures? Do the staff work well together, complement each other’s skills, treat each other respectfully (allowing for perhaps the occasional angry outburst from the chef, excusable on account of “artist’s temperament”)? So we have essentially two dimensions, each with two values: Component (Product and Service Delivery) and Viewpoint (External and Internal).
[Figure: quadrant chart of Component (Product, Service Delivery) vs. Viewpoint (Internal, External)]
In software delivery, we have a few feedback loops that answer three of these four questions, and we have more-colloquial terminology for the internal-external dimension (“build the thing right” and “build the right thing”):
[Figure: the quadrant chart with the usual agile feedback loops filled in]
The problem is that we typically don’t have a dedicated feedback loop for properly understanding how fit for purpose our service delivery is. And that’s often an equally vital concern for our customers — sometimes even more important than the fitness of the product, depending on whether product fitness is the concern of the delivery team or of someone else. (One executive sponsor I worked with noted that he would rather attend a service-delivery review than a demo.) We may touch on things like the team’s velocity in the course of a demo, but we lack a lightweight structure for having a constructive conversation about this customer concern with the customer. (The team may discuss in a retrospective ways to go faster, but without the customer, they can’t have a collaborative discussion about speed and tradeoffs, nor about the customer’s true expectations and needs.)

A Possible Solution

The kanban cadences include something called a Service-Delivery Review. I’ve been incorporating it to help teams have the conversation about their service-delivery fitness that they otherwise struggle to have, and it appears to be providing what they need in some contexts.
[Figure: the quadrant chart with the service-delivery review added]
David Anderson, writing in 2014, described the review as:
  • Usually a weekly (but not always) focused discussion between a superior and a subordinate about demand, observed system capability and fitness for purpose
  • Comparison of capability against fitness criteria metrics and target conditions, such as lead time SLA with 60 day, 85% on-time target
  • Discussion & agreement on actions to be taken to improve capability
The way that I define it is based on that definition with minor tweaks:
A regular (usually weekly) quantitatively-oriented discussion between a customer and delivery team about the fitness for purpose of its service delivery.
In the review, teams discuss any and all of the following (sometimes using a service-delivery review canvas):
  • Delivery times (aka Cycle/Lead/Time-In-Process) of recently completed work and the tail length of the delivery-time distribution
  • Blocker-clustering results and possible remediations
  • Risks and mitigations
  • Aging of work-in-progress
  • Work-type mix/distribution (e.g., % allocation to work types)
  • Service-level expectations of each work item type
  • Value demand ratio (ratio of value-added work to failure-demand work)
  • Flow efficiency trend
These are not performance areas that teams typically discuss in existing feedback loops, like retrospectives and demos, but they’re quite powerful and important to building a common understanding of what matters to most customers — and, in my experience, they’re behind some of the most unnecessarily painful misunderstandings. Moreover, because they are both quantitative and generally fitness-oriented, they help teams and customers build trust together and proactively manage toward greater fitness.
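To give a flavor of the numbers involved, here is a minimal sketch of a few of the measures listed above, assuming you can export completed-card records with start and finish dates, blocked days and a failure-demand flag; the field names and sample data are my own assumptions, not any particular tool’s schema.

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class CompletedCard:
    started: date
    finished: date
    blocked_days: int      # days the card sat blocked (rough proxy for wait time)
    failure_demand: bool   # rework/defect (failure demand) vs. value demand

def review_numbers(cards: list[CompletedCard]) -> dict:
    """A few service-delivery-review measures from completed-card records."""
    delivery_times = sorted((c.finished - c.started).days + 1 for c in cards)
    p85 = delivery_times[min(len(delivery_times) - 1, int(0.85 * len(delivery_times)))]
    total_days = sum(delivery_times)
    touch_days = total_days - sum(c.blocked_days for c in cards)
    value_items = sum(1 for c in cards if not c.failure_demand)
    return {
        "85th-percentile delivery time (days)": p85,
        "flow efficiency (rough)": touch_days / total_days,
        "value demand ratio": value_items / len(cards),
    }

# e.g., two recent cards:
cards = [
    CompletedCard(date(2017, 9, 1), date(2017, 9, 6), blocked_days=2, failure_demand=False),
    CompletedCard(date(2017, 9, 4), date(2017, 9, 7), blocked_days=0, failure_demand=True),
]
print(review_numbers(cards))
```

Reporting a high percentile rather than an average keeps the conversation focused on the tail of the distribution, which is usually what customers actually feel.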
[Figure: the quadrant chart with feedback loops in all four quadrants]

Service-delivery reviews are relatively easy to do, and in my experience provide a high return on time invested. The prerequisites to having them are to:

  1. Know your services
  2. Discover or establish service-delivery expectations

Janice Linden-Reed very helpfully outlined in her Kanban Cadences presentation the practical aspects of the meeting, including participants, questions to ask and inputs and outputs, which is a fine place to start with the practice.


Afterword #1: In some places I’ve been, so-called “metrics-based retrospectives” have been a sort of precursor to the service-delivery review, as they introduce a more data-driven approach to team management. Those are a good start but ultimately don’t provide the same benefit as a service-delivery review, because they typically don’t include the stakeholder who can properly close the feedback loop — the customer.

Afterword #2: Andy Carmichael encourages organizations to measure agility by fitness for purpose, among other things, rather than by practice adoption. The service-delivery review is a feedback loop that explicitly looks at this, and one that I’ve found is filling a gap in what teams and their customers need.


Afterword #3: I should note that you don’t have to be in the business of software delivery to use a service-delivery review. If you, your team, your group or your organization provides a service of any kind (see Kanban Lens and Service-Orientation), you probably want a way to learn how well you’re delivering that service. I find that the Service-Delivery Review is a useful feedback loop for that purpose.


[Edited June 12, 2017] Afterword #4 (!): Mike Burrows helpfully and kindly shared his take on the service-delivery review, which he details in his new book, Agendashift: clean conversations, coherent collaboration, continuous transformation:

Service Delivery Review: This meeting provides regular opportunities to step back from the delivery process and evaluate it thoroughly from multiple perspectives, typically:
• The customer – directly, via user research, customer support, and so on
• The organisation – via a departmental manager, say
• The product – from the product manager, for example
• The technical platform – eg from technical support
• The delivery process – eg from the technical lead and/or delivery manager
• The delivery pipeline – eg from the product manager and/or delivery manager

I include more qualitative stuff than you seem to do, reporting on conversations with the helpdesk, summarising user research, etc


Introducing: The NoEstimates Game

I’ve been play-testing a new simulation game that I developed, which I’m calling the NoEstimates Game. Thanks to my friends and colleagues at Universal Music Group, Asynchrony and the Lean-Kanban community (Kanban Leadership Retreat, FTW!), I’ve gotten it to a state in which I feel comfortable releasing it for others to play and hopefully improve.

The objective is to learn through experimentation which factors influence delivery time, and by how much.

[Jan. 3, 2017 update: Game materials are now available on GitHub]


Download these materials in order to play:


If you’d like to modify the original game elements, here they are:

I’m releasing it under a Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License, so please feel free to share and modify it, and if possible, let me know how I can improve it.


Kanban protips

Here are a few things I have observed about kanban that I didn’t know when I started:

  1. Although kanban roughly translates to “signal card,” the cards on a team wall are not the signals. The empty spaces are the signals (that you have capacity). The fixed capacity is represented by the spaces/slots, and not the cards themselves.
  2. It’s okay to distinguish cards in a queue as “done” versus not done. Only with a perfect cadence can you avoid needing that concept (and a corresponding way to identify which items in a queue are “done” and which aren’t).
  3. Many value-stream maps show the customer need at the beginning of the cycle, but it’s useful to visualize the customer need at the end (perhaps as a circle) in order to show how demand is pulled through the system and not pushed.
  4. Cumulative-flow diagrams and run charts (showing cycle time for each story) are the most powerful and helpful measurements to use, and you can implement them by hand as big visible charts. They don’t need to be created in electronic tools. (A minimal run-chart sketch follows this list.)
  5. Be sure you visualize the entire value stream (“concept to cash,” including deployment) somewhere or you will simply optimize on your development cycle.
  6. To visualize a minimal marketable feature moving, create an MMF horizontal lane above the normal story cards or simply make a temporary horizontal lane for that feature.
  7. You don’t have to be “advanced” in agile or any other philosophy/methodology to do kanban. I used to believe you did, but David Anderson (et al.) have been helpful in conveying the idea that you start by modeling and making visible what you already do and improving from there.
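Regarding tip 4: a run chart is simple enough to draw by hand or in a few lines of code. Here is a hypothetical example using matplotlib and made-up cycle times.

```python
import matplotlib.pyplot as plt

# Hypothetical cycle times in days, one point per completed story, in finish order
cycle_times = [4, 6, 3, 9, 5, 7, 12, 4, 6, 5]

plt.plot(range(1, len(cycle_times) + 1), cycle_times, marker="o")
plt.xlabel("Completed story (in finish order)")
plt.ylabel("Cycle time (days)")
plt.title("Cycle-time run chart")
plt.show()
```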

Cumulative-flow diagram can double as retrospective timeline

If you’re using kanban in your environment, you probably use a cumulative-flow diagram. It’s a handy tool to track kanban metrics like cycle time and to quickly see bottlenecks. In addition to all the kanban goodness it gives you, it can also double as a timeline that you can use in your retrospectives.

Whether you use a physical version posted on your kanban board (like my current team does) or an electronic one, you can annotate dates with important events, such as when:

  • A team member joins or leaves
  • An unexpected technical problem surfaces, like a major refactoring or bug
  • The team decides to change something about its kanban, like increase a WIP limit
  • The team makes a positive change, like switching pairs more often

It’s pretty easy to do, especially if you have a physical chart that you update during your standup meeting.
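If you do keep an electronic version, here is a rough sketch of an annotated CFD, assuming you track a cumulative count of items that have entered each state per day; the data and event notes are made up for illustration.

```python
import matplotlib.pyplot as plt

# Hypothetical cumulative item counts per day for each state (each series counts
# items that have *entered* that state, so Requested >= In Dev >= In Test >= Done)
days      = list(range(1, 11))
requested = [5, 6, 7, 8, 9, 10, 11, 12, 13, 14]
in_dev    = [3, 4, 5, 5, 6, 8, 9, 10, 11, 12]
in_test   = [1, 2, 2, 3, 4, 5, 7, 8, 9, 11]
done      = [0, 1, 2, 2, 3, 5, 6, 7, 9, 10]

for label, series in [("Requested", requested), ("In Dev", in_dev),
                      ("In Test", in_test), ("Done", done)]:
    plt.plot(days, series, label=label)

# Annotate the events the team wants to remember at the retrospective
events = {3: "New team member joined", 6: "WIP limit raised", 8: "Major refactoring"}
for day, note in events.items():
    plt.axvline(day, linestyle="--", linewidth=0.8)
    plt.annotate(note, xy=(day, max(requested)), rotation=90, va="top", fontsize=8)

plt.xlabel("Day")
plt.ylabel("Cumulative items")
plt.legend()
plt.title("Cumulative-flow diagram with retrospective annotations")
plt.show()
```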

Then, when you have a retrospective, bring the diagram along to help you remember what happened during the period over which you’re retrospecting. If you’re anything like the teams I’ve been on and like me, you have a hard time remembering what happened beyond yesterday, so it’s handy to have a reference. Having this time-based information will help you make more objective decisions about how to improve, since you won’t be guessing so much as to why your cycle time lengthened over the last week, or why you decided to decrease a WIP limit a month ago.


Kanban and linear work

Alan Shalloway is very helpfully documenting some Myths of Kanban. One myth that caught my eye in particular was “Kanban suggests linear work and requires too many handoffs.” I’ll be talking about this aspect of Kanban in my presentation on the whole-team approach at the Agile and Beyond conference, so I liked what Alan wrote — as an aside, what he calls linear, I call sequential (contrasted with simultaneous):

Lean manufacturing may assume linear work, but not Lean software development (nor Kanban). Kanban boards may often appear to be linear but Kanban boards reflect the way people are doing the work. Hence, if people are working linearly, the board will reflect that. However, Kanban boards can also reflect non-linear work.  One must also recognize that a Kanban board does not reflect the people doing the work but rather reflects the flow of the work being done. Hence if a board looks like:

Backlog — Analysis — Design — Code — Test — Done

it is not suggesting that the work be done by different people and hand things off. It just shows the states the work progresses through. In this case, it’s on the backlog, in analysis, being designed, coded, tested or completed. It could very well be the same person doing the work.  The second misconception is that the board tells you to break things down into these steps.  Actually, it doesn’t (or shouldn’t). The board should be a reflection of the work being done. So different columns on the board should reflect the different steps the team is doing. If the team swarms on multiple steps, then the Kanban board should only have one column for that.  Essentially the Kanban board has one column for each type of work the team is doing.  The explicit policy for how it gets to that step and out of that step also needs to be stated, but is not necessarily written on the board.  Bottom line – if you don’t like the process that you see on the board, change it (and then update the board).  The board is there merely to reflect on your work so you can better change how you work.

I think Alan is right when he says, essentially, that the board is “identity-neutral” as to who is doing the work. The problem that I have seen is breaking out the queues in such a way as to encourage non-simultaneous work behavior (e.g., separating code and test). This is why I still prefer fewer queues to many.


FLAVE means more than just “visually outstanding”


The software-development world needs another acronym like it needs another methodology. Yet, if you’re anything like me — someone who loathes acronyms, by the way — you find that some acronyms are actually useful for remembering models or concepts (for instance, it’s easy to recall all of the elements of Bill Wake’s INVEST when creating stories).

Lately, it seems that teams could use some help remembering some of the core principles of Kanban, so I’ll offer up my own acronym to help myself as much as anyone else (definitions taken from David Anderson):

  • F | Measure and manage Flow: Track work items to see if they are proceeding at a steady, even pace.

  • L | Limit work-in-progress: Set agreed-upon limits to how many work items are in progress at a time.

  • A | Adapt the process: Adapt the process using ideas from Systems Thinking, W.E. Deming, etc.

  • V | Visualize the workflow: Represent the work items and the workflow on a card wall or electronic board.

  • E | Make process policies Explicit: Agree upon and post policies about how work will be handled.

The concepts aren’t in any particular order, except to create the acronym word. And don’t tell me that FLAVE isn’t a word. This guy just doesn’t know how to spell it.