I can’t tell you how much of a difference having a policy of inbox zero — that is, at some point during my day, having no email in my inbox — has made. Before Merlin Mann’s thorough talk inspired me to set and live by an inbox-zero policy, I used to worry about those unresolved messages: Had I forgotten something? Was someone waiting for a response? Sometimes I had indeed forgotten, or someone was waiting — in which case I caused dissatisfaction around me. Many times, though, no one was waiting, but my brain, not being certain of that, couldn’t let it go, and I had to deal with unresolved threads spinning in my mind, causing unnecessary cognitive load.
Turn Off Email
What percentage of your day is email open? Why is that? Perhaps you have a good reason, but you might try removing desktop and phone notifications and checking to see if you’re more productive.
Email should scale with your role/position/level
Five sentences (or less)
- You’re attaching something: Before you drag that doc into the email body, consider putting it into a shared team space (e.g., Jive, wiki, Google Docs) and simply including the link to it. In addition to simple hygiene of facilitating versioning and collaboration, you’re also helping people to avoid using email as a document repository. Don’t force people to keep your email.
- You’re writing more than five sentences: If you can’t say it in five sentences or less, chances are that you need higher-fidelity communication; try a video or phone call. If you need something to serve as documentation that must live beyond a phone call, write the document and share the link.
- You’re writing only one sentence (or word): Simple yes/no responses can be better handled in other ways, such as instant messaging.
- You’re writing something that will require group discussion and/or decisions: We have other, better ways of holding group discussions, namely video calls or quick in-person meetings. If it’s a simple decision or set of preferences, create a survey and send the link. And please — whatever you do — don’t use email to try to find out “what time works” for each person in the group. It’s not 1999 anymore!
After Manchester City scored seven goals in their Oct. 14 match against Stoke City, my first reaction was: Wow, they’re playing some beautiful, unselfish soccer. Being also a baseball fan, my second reaction was: That’s a load of goals — how many runs would that equate to in baseball?
To find out, I used the same technique that we can use for understanding the performance and predictability of our knowledge-work systems, such as software delivery.
From this we can then start to understand the likelihood of a seven-goal outburst by a single team. For instance, with 246 occurrences in 760 total outcomes, a goal total of one is the most likely, at 32.4%. Seven goals happened only once last year, making it 0.1% likely.
(That 23-run game was when the Washington Nationals beat the Mets by a landslide on Apr. 30.)
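The counting behind these likelihoods is straightforward to sketch in Python. The 246-of-760 figure for one goal and the single seven-goal game come from the data above; the counts for the other goal totals below are illustrative placeholders, not the actual EPL table.

```python
from collections import Counter

def outcome_likelihoods(scores):
    """Empirical probability of each score, given one entry per team per game."""
    counts = Counter(scores)
    total = len(scores)
    return {score: counts[score] / total for score in sorted(counts)}

# 760 team-game outcomes: a score of 1 occurred 246 times and 7 occurred once
# (from the article); the remaining counts are made up for illustration.
scores = ([1] * 246 + [0] * 224 + [2] * 180 + [3] * 79 +
          [4] * 20 + [5] * 7 + [6] * 3 + [7] * 1)
probs = outcome_likelihoods(scores)
print(f"P(1 goal)  = {probs[1]:.1%}")   # 246/760 ≈ 32.4%
print(f"P(7 goals) = {probs[7]:.1%}")   # 1/760 ≈ 0.1%
```

The same function works unchanged for runs in baseball, or for counts of work items finished per week.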
To compare these outliers, we could use something like an average with standard deviations away from that. But the data from both the EPL and MLB are not normally distributed, which renders that approach inappropriate. Instead, we’ll use percentiles. Why? As Dan Vacanti writes in When Will It Be Done?:
Percentiles are not skewed by outliers. One of the great disadvantages of a mean and standard deviation approach (other than the false assumption of normally distributed data) is that both of those statistics are heavily influenced by outliers.
- In 60% of MLB and EPL games, a team scores six or fewer runs and one or fewer goals, respectively.
- Seven or eight runs (or fewer) in baseball occurs at about the same frequency as two (or fewer) goals in soccer.
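Comparisons like the ones above read straight off the empirical cumulative distribution, with no normality assumption needed. A minimal sketch follows; the score lists are illustrative stand-ins, not the actual EPL or MLB data.

```python
def percentile_rank(scores, value):
    """Fraction of observations at or below `value` (the empirical CDF)."""
    return sum(1 for s in scores if s <= value) / len(scores)

# Illustrative stand-ins for a season of per-team scores (not the real data).
epl_goals = ([1] * 246 + [0] * 224 + [2] * 180 + [3] * 79 +
             [4] * 20 + [5] * 7 + [6] * 3 + [7] * 1)
mlb_runs = [r for r, n in enumerate([100, 200, 300, 350, 350, 300, 250,
                                     300, 250, 200, 150, 100, 80, 70])
            for _ in range(n)]

print(f"EPL: P(goals <= 1) = {percentile_rank(epl_goals, 1):.0%}")
print(f"MLB: P(runs  <= 6) = {percentile_rank(mlb_runs, 6):.0%}")
```

Because this uses ranks rather than a mean and standard deviation, a single 23-run blowout barely moves the percentiles, which is exactly the robustness Vacanti describes.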
So the next time someone asks you about the likelihood of your favorite sports team — whatever the sport — scoring a certain number, you’ll know what to do — just as you will in your own team when someone asks when to expect a single piece of work to be finished.
Special thanks to Dan Vacanti for the insights from his recent book, When Will It Be Done?
- The “Big Four” cost of delay profiles (a.k.a. archetypes) — Expedite, Fixed-Date, Standard Urgency, Intangible — are usually sufficient.
- In my personal kanban, I have seen a new archetype emerge, one that has aspects of both Intangible and Fixed-Date, which I’m calling “Intangible-Fixed-Date.”
- This profile is less for the purpose of selection and more for scheduling.
- So far, the main application of “Intangible-Fixed-Date” for me is for buying airfare, whose cost-of-delay curve fits none of the “Big Four” curves.
- I use a couple of features in Kanbanize to deal with this new profile slightly differently from the other profiles, namely by creating a separate swim lane and using two dates (rather than one).
- Takeaway: “Listen” to your data, patterns and behavior of work items and be flexible enough to adapt and create new profiles when they emerge.
I routinely use cost-of-delay to assist in scheduling, selecting and sequencing work, both in professional settings and in my own personal life. The “Big Four” cost-of-delay profiles (a.k.a. archetypes) promoted in the Kanban community — Expedite, Fixed-Date, Standard Urgency, Intangible — are usually sufficient for the work that organizations, teams and I personally need to handle. However, lately in my personal kanban, I have seen a new archetype emerge, one that has aspects of both Intangible and Fixed-Date, which I’m calling “Intangible-Fixed-Date.”
If I really want to be disciplined, I can then set a service-delivery expectation that sets the bar for how well I handle these (e.g., 90% of Intangible-Fixed-Date items will be completed within five days), and analyze my performance at my personal service-delivery review. But now I fear I’m exposing just how geeky I am (if that wasn’t clear already)!
So what’s the takeaway? Well, you might find value in this “new” cost-of-delay profile (if you need to book airfare, or to plan birthdays or anniversaries, which follow a similar curve). But abstracting out a bit, the idea is that it’s helpful to pay attention to — “listen” to — your data and the patterns and behavior of work items, and to be flexible enough to adapt and create new profiles when they emerge. Pursuing incremental, evolutionary change is one of the underlying principles of the kanban method; improving using models and experiments is one of its core practices.
Special thanks to Prateek Singh, Josh Arnold and Mike Burrows for their early feedback in the Lean Agile and Beyond Slack community.
What’s the problem?
A Possible Solution
- A focused discussion, usually weekly (but not always), between a superior and a subordinate about demand, observed system capability and fitness for purpose
- Comparison of capability against fitness-criteria metrics and target conditions, such as a lead-time SLA with a 60-day, 85% on-time target
- Discussion and agreement on actions to be taken to improve capability
A regular (usually weekly) quantitatively-oriented discussion between a customer and delivery team about the fitness for purpose of its service delivery.
- Delivery times (a.k.a. cycle time, lead time or time-in-process) of recently completed work, and tail length of the delivery-time distribution
- Blocker-clustering results and possible remediations
- Risks and mitigations
- Aging of work-in-progress
- Work-type mix/distribution (e.g., % allocation to work types)
- Service-level expectations of each work item type
- Value demand ratio (ratio of value-added work to failure-demand work)
- Flow efficiency trend
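Several of the inputs above — delivery times, aging of work-in-progress, flow efficiency — can be derived from nothing more than per-item timestamps. The article doesn’t prescribe a tool, so this sketch assumes a simple in-memory representation with invented dates.

```python
from dataclasses import dataclass
from datetime import date
from typing import Optional

@dataclass
class WorkItem:
    started: date
    finished: Optional[date] = None   # None means still in progress
    active_days: int = 0              # days of hands-on work (vs. waiting)

def delivery_time(item: WorkItem, today: date) -> int:
    """End-to-end days from start to finish (or to today, for aging WIP)."""
    return ((item.finished or today) - item.started).days

# Invented items for illustration.
items = [
    WorkItem(date(2018, 5, 1), date(2018, 5, 9), active_days=3),
    WorkItem(date(2018, 5, 3), date(2018, 5, 6), active_days=2),
    WorkItem(date(2018, 5, 7)),       # aging work-in-progress
]
today = date(2018, 5, 14)

finished = [i for i in items if i.finished]
print("Delivery times:", [delivery_time(i, today) for i in finished])
print("WIP ages:      ", [delivery_time(i, today) for i in items if not i.finished])
# Flow efficiency = active time / total delivery time, per finished item
print("Flow efficiency:", [round(i.active_days / delivery_time(i, today), 2)
                           for i in finished])
```

Reviewing these numbers weekly, rather than waiting for a retrospective, is what gives the cadence its tight feedback loop.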
Service-delivery reviews are relatively easy to do, and in my experience provide a high return on time invested. The prerequisites to having them are to:
- Know your services
- Discover or establish service-delivery expectations
Janice Linden-Reed very helpfully outlined in her Kanban Cadences presentation the practical aspects of the meeting, including participants, questions to ask and inputs and outputs, which is a fine place to start with the practice.
Afterword #2: Andy Carmichael encourages organizations to measure agility by fitness for purpose, among other things, rather than practice adoption. The service-delivery review is a feedback loop that explicitly looks at this, and one that I’ve found is filling a gap in what teams and their customers need.
Afterword #3: I should note that you don’t have to be in the business of software delivery to use a service-delivery review. If you, your team, your group or your organization provides a service of any kind (see Kanban Lens and Service-Orientation), you probably want a way to learn about how well you’re delivering that service. I find that the service-delivery review is a useful feedback loop for that purpose.
Service Delivery Review: This meeting provides regular opportunities to step back from the delivery process and evaluate it thoroughly from multiple perspectives, typically:
• The customer – directly, via user research, customer support, and so on
• The organisation – via a departmental manager, say
• The product – from the product manager, for example
• The technical platform – eg from technical support
• The delivery process – eg from the technical lead and/or delivery manager
• The delivery pipeline – eg from the product manager and/or delivery manager
I include more qualitative stuff than you seem to do, reporting on conversations with the helpdesk, summarising user research, etc
When a WWT colleague invited me to the Tottenham–CSKA Moscow Champions League match Wednesday, he treated me to more than simply world-class soccer: he unwittingly gave me a chance to see the Theory of Constraints in action after the match, while we waited for a Tube ride.
- Identify the constraint
- Optimize the constraint
- Subordinate everything else to the constraint
- Add supply to the constraint
- Goto 1
- always busy (Londoners are particularly adept at squeezing as many people as can fit into the car, so this is never a problem!), and
- is only ever doing “value-adding” work, which in this case is moving passengers forward (as opposed to failure demand, which doesn’t typically happen in the Tube in the form of redoing work — going backward — but does often take the form of people getting their bags or themselves stuck in the doors).
- Throughput (should go up)
- Operational Expense (should go down)
- Inventory/WIP (should go down)
[Note: Lately, I’ve been talking a lot about fitness for purpose and fitness criteria. Other than David Anderson and a few others, though, not much material exists — at least not applied in the software-delivery space — to point people to for further reading. So I’m jotting down some ideas here in the hopes of furthering the discussion and understanding.]
- The first step in improving is understanding what makes the service you provide fit for its purpose.
- Fitness is always defined externally, typically by the customer
- Fitness for purpose has two components: a product component and a service-delivery component
- Fitness criteria are metrics that enable us to evaluate whether our service delivery and/or product is fit for purpose
- Of the two major categories of metrics, fitness criteria are primary, whereas health or improvement metrics are derivative
- Examples of service delivery fitness criteria are delivery time, throughput and predictability
Fitness for purpose is an evaluation of how well a product or service fulfills a customer’s desires based on the organization’s goals or reason for existence. In short, it is the ability of an organization or team to fulfill its mission. The notion derives from the manufacturing industry, which assesses a product against its stated purpose. The purpose may be one determined by the manufacturer or, via marketing, one determined by the needs of customers. David Anderson emphasizes that
Fitness is always defined externally. It is customers and other stakeholders such as governments or regulatory authorities that define what fitness means.
Fitness criteria then are metrics that enable us to evaluate whether our product, service or service delivery is “fit for purpose” in the eyes of a customer from a given market segment. As Anderson notes, fitness criteria metrics are effectively the Key Performance Indicators (KPIs) for each market segment, and as such are direct metrics.
As Anderson explains,
Every business or every unit of a business should know and understand its purpose … What exactly are they in business to do? And it isn’t simply to make money. If they simply wanted to make money they’d be investors and not business owners. They would spend their time managing investment portfolios and not leading a small tribe of believers who want to make something or serve someone. So why does the firm or business unit exist? If we know that we can start to explore what represents “fitness for purpose.”
For me, fitness is something that, like user stories, can be understood at varying levels of granularity. Organizations have fitness for their purpose — “are we fit to pursue this line of business?” — and teams (in particular, small software-delivery teams) also have fitness for their purpose — “are we fit to deliver this work in the way the customer expects?”
Therefore, the first step in improving is understanding what makes the service you provide fit for its purpose. Fitness for purpose is simply an evaluation of how well an organization or team delivers what it is in the business of (its purpose). Modern knowledge-worker organizations like Asynchrony often focus on concerns like product development or technical practices, sometimes overlooking service-delivery excellence. But service delivery is a major reason why our customers choose us. That’s why we attempt to understand and define each project team’s purpose and fitness for that purpose at the project kickoff in a conversation with our customer representatives.
Two Components of Fitness
Fitness for purpose has two components: a product component and a service-delivery component. That is, the customer for your delivery team considers the product that you are building (the what) — did you build the right thing? — as well as the way in which you deliver it (the how) — how reliable were you when you said you’d deliver it? How long did it take you to deliver it? We have useful feedback mechanisms for learning about the fitness of the products we build (e.g., demos/showcases, usage analytics), but how do we learn about the fitness of our service delivery? That’s the service-delivery review feedback loop, which I will write about later.
Fitness criteria are metrics that enable us to evaluate whether our service delivery is “fit for purpose” in the eyes of a customer from a given market segment. These usually relate to, but are not limited to, delivery time (end-to-end duration), predictability and, for certain domains, safety or regulatory concerns. When we explore and establish expectation levels for each criterion, we discover fitness-criteria thresholds: the point at which performance is “good enough,” or satisfactory. For example, our customer may expect us to deliver user stories within some reasonable time frame, so we could say that for user stories, our delivery-time expectation is that 85% of the time we complete them within 10 days. We might have a different expectation for urgent changes, like production bug fixes.
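Checking an expectation like “85% of user stories complete within 10 days” against observed delivery times takes only a few lines. The sample delivery times below are invented for illustration.

```python
def meets_expectation(delivery_times, threshold_days, target_pct):
    """True if at least target_pct of items finished within threshold_days."""
    on_time = sum(1 for t in delivery_times if t <= threshold_days)
    return on_time / len(delivery_times) >= target_pct

# Invented delivery times (in days) for recently completed user stories.
times = [3, 5, 6, 2, 9, 12, 4, 7, 8, 10, 15, 6, 5, 9, 3, 8, 7, 4, 11, 6]
print(meets_expectation(times, threshold_days=10, target_pct=0.85))
```

Running the same check per work-item type (user story, production bug fix, etc.) is what lets each type carry its own expectation.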
Fitness-criteria categories are often common — nearly everyone cares about delivery time and predictability, for instance — but the actual thresholds for them are not. While some are shared by many customers, the differences in what people want and expect allow us to define market segments and understand different business risks. Fitness criteria should be our Key Performance Indicators (KPIs), and teams should use those thresholds to drive improvements and evolutionary change.
Who Defines Fitness?
As opposed to team-health metrics, like happiness or pair switches, fitness and fitness criteria are always defined externally: Customers and other stakeholders define what fitness means. That means you cannot ask the delivery team to define its fitness. They cannot know because they are not the ones buying their service or product. We should be asking customers “What would make you choose this service? What would make you come back again? What would encourage you to recommend it to others?”
These are a team’s fitness criteria and these are the criteria by which Asynchrony should be measuring the effectiveness of our teams’ service delivery. Then we’ll be improving toward the goal, the greater fitness for our purpose, both as an organization and as individual delivery teams. By integrating fitness-for-purpose thinking into everything we do, we will create an evolutionary capability that will help us sense changes in market needs and wants and what those different market segments value. As a result, Asynchrony will continue to thrive and survive in the midst of our growth and growing market complexity.
Difference Between Fitness Metrics and Health Metrics
| Fitness Metric | Health Metric |
| --- | --- |
| A metric that enables us to evaluate whether our product, service or service delivery is “fit for purpose” in the eyes of a customer from a given market segment. Effectively comprises the Key Performance Indicators (KPIs) for each market segment. | A metric that guides an improvement initiative or indicates the general health of your business, business unit, product unit or service-delivery capability. |
| Examples: delivery time, functional quality, predictability, net fitness score | Examples: flow efficiency, velocity, percent complete and accurate, WIP |
| Customer-oriented and derived | Team-oriented and derived |
A Food Example
I like to use food for examples (also to eat). Is a restaurant in the product or service-delivery business? That’s a trick question, of course: The answer is “both.” As a customer, you care about the meal (product) but also about the way you have it provided (service delivery). And those always vary depending on what you want: If you want cheap and fast, like a burger and fries at McDonald’s, you may have a lower expectation for the product (sorry, Ronald) but a higher one for delivery speed. Conversely, if you’re out for fine dining, you expect the food to be of a higher quality and are willing to tolerate a longer delivery time. However, you have some thresholds of service even for four-star restaurants: For example, if you have a reservation, you expect to be seated within minutes of your arrival. And you expect a server to take your order in a timely way. If you don’t have a reservation, the maitre d’ or hostess will perhaps quote you an expected wait time; if it’s unacceptable, you’ll go elsewhere. If it’s acceptable but they don’t seat you in that time, you are dissatisfied. The service delivery was not fit for its purpose, which is to say the reason why you chose to eat there.
A Software-Delivery Example
The restaurant experience is actually not too dissimilar from software delivery. The customer expects software (product) but also expects it on certain terms or within certain thresholds (service delivery). A team works hard to deliver the right features and demonstrates them at some frequency; at the demo, the team likely will explicitly ask “is this what you wanted?” What’s often missing is the “are these the terms on which you wanted it?” Whether in the demo or a separate meeting, we need to also review service delivery. This is where we look at whether our service meets expectations: Did we deliver enough? Reliably enough? Respond to urgent needs quickly enough? The good news is that we can quantitatively manage the answers to these questions. Using delivery times, we can assess whether the throughput is within a tolerance. One team used a probabilistic forecast and found that their throughput was unlikely to get them to their deadline in time. Conversely, another realized that they were delivering faster than necessary and could stand to reallocate people to other efforts. Likewise, when we set up delivery-time expectations (some people call these SLAs), like delivering standard-urgency work at a 10-day, 85% target, we can then make decisions based on data rather than feelings or intuition (which have their place in some decisions but not others). These expectations needn’t be perfect or “right” to begin with; set them and begin reviewing them to see if they are satisfactory.
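The kind of probabilistic forecast mentioned above can be approximated with a small Monte Carlo simulation over historical weekly throughput. The throughput history and backlog size here are invented; this is a sketch of the technique, not any particular team’s tooling.

```python
import random

def forecast_weeks(weekly_throughput, backlog, trials=10_000, seed=42):
    """Simulate burning down `backlog` items by sampling weekly throughput
    from history; return the 85th-percentile number of weeks needed."""
    rng = random.Random(seed)
    results = []
    for _ in range(trials):
        remaining, weeks = backlog, 0
        while remaining > 0:
            remaining -= rng.choice(weekly_throughput)
            weeks += 1
        results.append(weeks)
    results.sort()
    return results[int(0.85 * trials)]

history = [3, 5, 2, 4, 6, 3, 4, 5]   # invented items completed per week
print("85% likely done within", forecast_weeks(history, backlog=40), "weeks")
```

Comparing that 85th-percentile answer against the deadline is how a team can see, weeks in advance, that its throughput is (or isn’t) likely to get it there in time.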
Having an explicit review of fitness criteria, especially for service-delivery fitness, is a vital feedback loop for improving. Rather than having the customer walk away dissatisfied for some unknown reason, we can proactively ask and manage those expectations and improve upon them. Often these are the unstated criteria that ultimately define the relationship and create (or erode) trust; discover them and quantitatively manage them.
Among the many exciting things happening at Asynchrony this year, one of my favorites is our first-ever internal conference, coming July 15. I’m a big fan of organizations that take time to learn and share their learning. Especially given that Asynchrony is growing and establishing new offices, it’s vital that we share learning across offices and invest in the personal relationships that make the organization what it is. The conference goals are:
- Increase the value of the time invested by targeting information sharing.
- Increase knowledge sharing and interactions between individuals and teams.
- Provide opportunities for our employees to create and present a session for their colleagues.
The conference will be a mix of 50-minute sessions, an exhibit floor with 15-20 booths for delivery teams and functional groups (aka chapters and guilds) and open space. To fill the sessions, we made an open call for proposals in the organization, with a small selection team to decide which ones ultimately made the cut based on:
- Good variety of information presented
- Relevance to our current and future business success
- Interest from the company in the presentation content (popular vote/survey)
- Enough mix of technical and non-technical topics so there will be multiple sessions that non-technical people can attend and get value (this means that non-technical topics are probably more likely to be selected!)
- Highlighting employees who have not already been featured in front of the company (expecting there to be a mix of both)
- Promoting creativity of topic and presentation content/activities
We had around 40 people propose more than 50 sessions. The selected sessions are intriguing — something for everyone, and certainly a conference I’m looking forward to attending!
- Anarchism at Asynchrony: Lessons from the Left in Building Self-Organized Teams (Brian Coalson)
- Asynchrony Culture and You! (Andrew Rauscher and Wes Ehrlichman)
- Battling Unconscious Bias (Neem Serra)
- Building a serverless backend on AWS (Eric Neunaber)
- Denver: Self Management and our Future (Jim Mruzik and Don Peters)
- DevOps Culture (Matt Perry)
- Getting to know Node.js (Josh Hollandsworth)
- Go (Jason Riley)
- Improving Communication Skills with Analogies and Metaphors (Rose Hemlock and J LeBlanc)
- Intro to Unity 3d (Westin Breger)
- Introduction to Functional Programming (Kartik Patel)
- Mobile Monsters – Develop Your Mobile App Test and Quality Strategy (Linda Sorrels and Mary Jo Mueller)
- Password Hashing and Cracking (Micah Hainline)
- Plan Bee – Using The Raspberry Pi to Help Bees (Dave Guidos)
- Risk analysis and RFC 1149 (Alison Hawke)
- Scaling Staffing at Asynchrony (Nate McKie)
- The Meaning of Dub Dub: Where Apple is taking us in 2016 and beyond (Nick McConnell, Mark Sands, James Rantanen, Jon Hall, Henry Glendening)
- UX Process (Lee Essner)
- Who Matters and What Matters To Them (David Lowe)