Kanban’s First Change-Management Principle and Chesterton’s Gate

At a recent Lean Kanban St. Louis meetup, I shared that, while the Manifesto for Agile Software Development has been amazingly enduring, it is silent on the issue of change management, which, in my experience, is the area that most commonly prevents the Manifesto’s values and principles from taking root.

This is why I appreciate that the Kanban Method explicitly addresses change management and, in particular, sets the tone with its first change-management principle:

Start with what you do now, understanding current processes, as actually practiced, and respecting existing roles, responsibilities and job titles.

In their book Essential Kanban Condensed, David Anderson and Andy Carmichael explain it this way:

…the current processes, along with their obvious deficiencies, contain wisdom and resilience that even those working with them may not fully appreciate.

This is the challenge that people brought into organizations as “change agents” or “agile coaches” face. I know, because I’ve been one and, to my and my clients’ disservice, have not always heeded this advice. And in fairness, it’s difficult: the remit (get results, immediately) is often at cross-purposes with this principle.

It reminds me of an earlier bit of wisdom from the writer G.K. Chesterton:

In the matter of reforming things, as distinct from deforming them, there is one plain and simple principle; a principle which will probably be called a paradox. There exists in such a case a certain institution or law; let us say, for the sake of simplicity, a fence or gate erected across a road. The more modern type of reformer goes gaily up to it and says, “I don’t see the use of this; let us clear it away.” To which the more intelligent type of reformer will do well to answer: “If you don’t see the use of it, I certainly won’t let you clear it away. Go away and think. Then, when you can come back and tell me that you do see the use of it, I may allow you to destroy it.”

This metaphor has become known as “Chesterton’s Fence.” To be sure, it takes time to understand the reason for the fence, and, to be sure, in many organizations, time has obviated the need for the fence. But lest we strip away something that provides resilience, or simply create ill will by disrespecting another’s work, we’ll do well first to understand the reason for the fence, as would “the more intelligent type of reformer.”


Service-Delivery Review: The Missing Agile Feedback Loop?

I’ve been working for many years with software-delivery teams and organizations, most of which use the standard agile feedback loops. Though the product demo, team retrospective and automated tests provide valuable awareness of health and fitness, I have seen teams and their stakeholders struggle to find a reliable construct for an important area of feedback: the fitness of their service delivery. I’m increasingly seeing that the service-delivery review provides the forum for this feedback.

What’s the problem?

Software delivery (and knowledge work in general) consists of two components, one obvious — product — and one not so obvious — service delivery. I’ve often used the restaurant metaphor to describe this: When you dine out, you as the customer care about the food and drink (product) but also how the meal is delivered to you (service delivery). That “customer” standpoint is one dimension of the quality of these components — we might call it an external view. The other is the internal view — that of the restaurant staff. They, too, care about the product and service delivery, but from a different view: Is the food fresh, kept in proper containers, cooked at the right temperatures? Do the staff work well together, complement each other’s skills, treat each other respectfully (allowing perhaps for the occasional angry outburst from the chef, excusable on account of “artist’s temperament”)? So we have essentially two pairs of dimensions: Component (Product and Service Delivery) and Viewpoint (External and Internal).
In software delivery, we have feedback loops that answer three of these four quadrants, and we have more colloquial terminology for the internal-external dimension (“build the thing right” and “build the right thing”).
The problem is that we typically don’t have a dedicated feedback loop for properly understanding how fit for purpose our service delivery is. Yet that is often just as vital a concern for our customers as the fitness of the product — and sometimes even more important. (One executive sponsor that I worked with noted that he would rather attend a service-delivery review than a demo.) We may touch on things like the team’s velocity in the course of a demo, but we lack a lightweight structure for having a constructive conversation about this customer concern with the customer. (The team may discuss in a retrospective ways to go faster, but without the customer, they can’t have a collaborative discussion about speed and tradeoffs, nor about the customer’s true expectations and needs.)

A Possible Solution

The kanban cadences include something called a Service-Delivery Review. I’ve been incorporating it to help address teams’ inability to have the conversation around their service-delivery fitness, and in some contexts it appears to be providing what they need.
David Anderson, writing in 2014, described the review as:
  • Usually a weekly (but not always) focused discussion between a superior and a subordinate about demand, observed system capability and fitness for purpose
  • Comparison of capability against fitness criteria metrics and target conditions, such as a lead-time SLA with a 60-day, 85% on-time target
  • Discussion and agreement on actions to be taken to improve capability
The way that I define it is based on that definition with minor tweaks:
A regular (usually weekly) quantitatively-oriented discussion between a customer and delivery team about the fitness for purpose of its service delivery.
In the review, teams discuss any and all of the following (sometimes using a service-delivery review canvas):
  • Delivery times (aka Cycle/Lead/Time-In-Process) of recently completed work and tail length in delivery-time distribution
  • Blocker-clustering results and possible remediations
  • Risks and mitigations
  • Aging of work-in-progress
  • Work-type mix/distribution (e.g., % allocation to work types)
  • Service-level expectations of each work item type
  • Value demand ratio (ratio of value-added work to failure-demand work)
  • Flow efficiency trend
These are not performance areas that teams typically discuss in existing feedback loops, like retrospectives and demos, but they’re quite powerful and important to building a common understanding of what matters to most customers — an area that, in my experience, produces some of the most unnecessarily painful misunderstandings. Moreover, because they are both quantitative and generally fitness-oriented, they help teams and customers build trust together and proactively manage toward greater fitness.
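To make a few of these concrete, here is a minimal sketch (all numbers invented) of how a team might compute delivery-time percentiles, tail length and flow efficiency ahead of a review:

```python
from statistics import quantiles

# Hypothetical delivery times (days) of recently completed work items.
delivery_times = [2, 3, 3, 4, 5, 5, 6, 7, 9, 12, 14, 21, 34]

# Percentiles of the delivery-time distribution (n=20 gives 5% steps).
pct = quantiles(delivery_times, n=20)
p50, p85, p95 = pct[9], pct[16], pct[18]
print(f"50th: {p50:.1f}d, 85th: {p85:.1f}d, 95th: {p95:.1f}d")

# Tail length: how far the slowest item stretches beyond the typical case.
print(f"Tail ratio (max/median): {max(delivery_times) / p50:.1f}x")

# Flow efficiency: share of elapsed time spent actively working.
# Pairs of (active days, total elapsed days) per item -- invented numbers.
work = [(1, 4), (2, 6), (3, 12), (1, 9)]
flow_efficiency = sum(a for a, _ in work) / sum(t for _, t in work)
print(f"Flow efficiency: {flow_efficiency:.0%}")
```

An 85th-percentile delivery time computed this way is also a natural candidate for the service-level expectations mentioned above.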

Service-delivery reviews are relatively easy to do, and in my experience provide a high return on time invested. The prerequisites to having them are to:

  1. Know your services
  2. Discover or establish service-delivery expectations

Janice Linden-Reed very helpfully outlined in her Kanban Cadences presentation the practical aspects of the meeting, including participants, questions to ask and inputs and outputs, which is a fine place to start with the practice.

Afterword #1: In some places I’ve been, so-called “metrics-based retrospectives” have been a sort of precursor to the service-delivery review, as they bring a more data-driven approach to team management. Those are a good start but ultimately don’t provide the same benefit as a service-delivery review because they typically don’t include the stakeholder who can properly close the feedback loop — the customer.

Afterword #2: Andy Carmichael encourages organizations to measure agility by fitness for purpose, among other things, rather than practice adoption. The service-delivery review is a feedback loop that explicitly looks at this, and one that I’ve found is filling a gap in what teams and their customers need.

Afterward #3: I should note that you don’t have to be in the business of software delivery to use a service-delivery review. If you, your team, your group or your organization provides a service of any kind (see Kanban Lens and Service-Orientation), you probably want a way to learn about how well you’re delivering that service. I find that the Service-Delivery Review is a useful feedback loop for that purpose.

[Edited June 12, 2017] Afterword #4 (!): Mike Burrows helpfully and kindly shared his take on the service-delivery review, which he details in his new book, Agendashift: clean conversations, coherent collaboration, continuous transformation:

Service Delivery Review: This meeting provides regular opportunities to step back from the delivery process and evaluate it thoroughly from multiple perspectives, typically:
• The customer – directly, via user research, customer support, and so on
• The organisation – via a departmental manager, say
• The product – from the product manager, for example
• The technical platform – eg from technical support
• The delivery process – eg from the technical lead and/or delivery manager
• The delivery pipeline – eg from the product manager and/or delivery manager

I include more qualitative stuff than you seem to do, reporting on conversations with the helpdesk, summarising user research, etc

Book Review: Actionable Agile Metrics for Predictability

Daniel Vacanti’s new book, Actionable Agile Metrics for Predictability, is a welcome addition to the growing canon of thoughtful, experience-based writing on how to improve service delivery. It joins David Anderson’s (Kanban: Successful Evolutionary Change for Your Technology Business) and Mike Burrows’s (Kanban from the Inside) books in my list of must-reads on the kanban method, complementing those works with deeper insight into how to use metrics to improve flow.

Daniel’s message about orienting metrics to promote predictable delivery and flow — which he defines as “the movement and delivery of customer value through a process” — is primarily grounded in his experience helping Siemens HS. He includes the case study (which has been published previously and is valuable reading in itself) at the end of the book, so he keeps the rest of the book free from too many customer references, though he is clearly drawing on that pragmatic experience.

As someone who for several years has been helping teams and organizations improve using the metrics Daniel talks about, I learned a tremendous amount. One of the reasons is that Daniel is particularly keen to clarify language, which I appreciate not only as a former English major (and not merely as a pedant!), but because it helps us carefully communicate these ideas to teams and management, some of whom may be using these metrics in suboptimal ways or, worse, perverting them so as to give them a bad name and undermine their value. Some examples: the nuanced difference between control charts and scatterplots, and clear definitions around Little’s Law (and violations thereof), especially as related to projections and cumulative flow diagrams. I certainly gained a lot of new ideas, and Daniel’s explanations are so thorough that I suspect even novice coaches, managers, team leaders and team members won’t be overwhelmed.
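For readers new to it, the basic Little’s Law relationship that the book explores in depth can be illustrated with a quick sketch (the numbers are invented, not from the book):

```python
# Little's Law for a stable system, measured over a complete interval:
#   average WIP = average throughput x average cycle time
# Invented numbers: a team finishes 4 items per week with an average
# cycle time of 2.5 weeks.
throughput = 4.0       # items per week
avg_cycle_time = 2.5   # weeks per item
avg_wip = throughput * avg_cycle_time
print(avg_wip)  # 10.0 items in progress, on average

# Rearranged, it gives a quick sanity check on projections: with 10 items
# in progress and 4 finishing per week, a newly started item should take
# roughly 2.5 weeks on average -- assuming the law's preconditions hold,
# which is precisely what the book is careful to spell out.
projected_cycle_time = avg_wip / throughput
```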

As for weaknesses, I felt that the chapter on the Monte Carlo method lacked the depth of the other chapters. And I came away wishing that Daniel had included some diagrams showing projections using percentiles from scatterplot data. But those are minor complaints for a book that constantly had me jotting notes in my “things to try” list.
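For readers unfamiliar with the technique, a Monte Carlo throughput forecast can be sketched as follows; this is my own minimal illustration with invented numbers, not Daniel’s specific method:

```python
import random

random.seed(7)  # fixed seed so the sketch is repeatable

# Hypothetical historical weekly throughput (items finished per week).
weekly_throughput = [3, 5, 2, 6, 4, 3, 7, 1, 4, 5]

def weeks_to_finish(backlog: int) -> int:
    """Simulate one possible future by resampling past weekly throughput."""
    done, weeks = 0, 0
    while done < backlog:
        done += random.choice(weekly_throughput)
        weeks += 1
    return weeks

# Run many trials and read percentiles off the outcome distribution.
trials = sorted(weeks_to_finish(40) for _ in range(10_000))
p50 = trials[len(trials) // 2]
p85 = trials[int(len(trials) * 0.85)]
print(f"50% of trials finished within {p50} weeks, 85% within {p85} weeks")
```

The 85th percentile of the simulated outcomes is the kind of probabilistic commitment (“85% likely within N weeks”) that predictability-oriented metrics support.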

Overall, I loved how Daniel pulled together (no pun intended), for the purpose of flow, several metrics and tools that have often been independently implemented and used and whose purpose, in my experience, was not completely understood. The book unifies these and helps the reader see the bigger picture of why to use them in a way I had not seen before. If you’re interested in putting concepts and tools like Little’s Law, cumulative flow diagrams, delivery-time scatterplots and pull policies into action, this book is for you.

Other observations:

  • The book has a very helpful and clarifying discussion of classes of service, namely the difference between using CoS to commit to work (useful) and using it to prioritize committed work (hazardous for predictability).
  • It also had a particularly strong treatment of cumulative flow diagrams.
  • Daniel does a lot of myth debunking, which I appreciate. Examples: that work items need to be of the same size, or that kanban doesn’t have commitments.
  • The tone is firm and confident — you definitely know where Daniel stands on any issue — without being strident.

What is Autonomy Support?

I wrote a few weeks ago about the advocacy program, our distributed peer-to-peer continuous-improvement program. One of the important components of the program is autonomy support. But what is that? As Daniel Pink notes in his book Drive:

Researchers found greater job satisfaction among employees whose bosses offered “autonomy support.” These bosses saw issues from the employee’s point of view, gave meaningful feedback and information, provided ample choice over what to do and how to do it, and encouraged employees to take on new projects.

In the advocacy program, autonomy-support meetings are an optional opportunity for employees to meet with executive management to give feedback on how the executive leaders can help the employee realize career goals in the organization. The meeting can be scheduled by the employee’s advocate, who also can be part of the meeting, acting as an intermediary or ambassador for the employee to the manager(s). Multiple managers may be part of the meeting, depending on which ones the advocate and employee feel are vital and able to help.

The dynamic should be one in which the traditional organizational structure is flipped upside down.

Therefore, rather than the traditional dynamic of the employee “working for” the manager, in the autonomy-support meeting the servant-leader — in this case, the role of executive leader — should have the mindset of “working for” the employee.

A good starting point for the discussion is the “autonomy-support feedback for executive leaders” section of the employee’s review. Basically, it’s whatever the employee needs executive leaders to do so that he or she can do the job better or reach goals. This might be a request for a different project or a role switch, more time to explore a particular skill or technology, or simply a clearer vision or clearer expectations. Premised on the executive leader’s commitments to the employee, the employee has the right to ask the executive leader for support in various career-development goals, including timelines for when those things would occur.

Questions that the executive leader might want to ask:

  • How can I help you realize your goals in the next year?
  • By when would you like me to achieve these things for you?
  • In what areas have I failed to help you in the past, and how can I improve?
  • What kind of things would help you feel more engaged?
  • How can I help smooth your path toward mastery of certain skills?
  • What does success look like for you, and how can I help you succeed?

The peer-to-peer feedback-improvement cycle in action

I recently posted about the new peer-to-peer continuous-improvement program that we’re doing at Asynchrony. One of the key aspects of the program is its emphasis on feedback. Direct feedback is often difficult to deal with because people aren’t accustomed to it (especially if it’s critical), so we encourage face-to-face discussions in safe environments. Here’s a real conversation (names changed) that occurred over instant messaging between an employee (“Will”) and his advocate (“Jessica”), posted with their permission. Will has just provided his advocate with his self-review (a one-page overview of recent accomplishments, future goals and ways to improve) and has been collecting feedback from some of his personal stakeholders, which he has shared with Jessica.


Will: What did you think of my self-review?

Jessica: I think it was very in depth and really liked the explanation of why you did what you did but Pete raised a good question as to how you like to feel like the smartest person in the room… I think we should maybe address that in goals for 2014… Please advise

Will: That’s something I’d like us to talk through. I think its good feedback but its also foreign to me. It feels like telling a razor to be less sharp so the other razors don’t feel bad. I was brought up and live in an environment where all the razors are helping each other be as sharp as they can.

Jessica: We definitely need to work on being humble then… lol

Will: I’m GREAT at being humble. 🙂

Jessica: There is another saying, if you are the smartest person in the room… Maybe you should exit stage left…

Will: I never claim to be the smartest person in the room.

Jessica: Sometimes we don’t have to say things to make that impression.

Will: Yep. This is something we should talk about over a beer. Are you free after work today or some day this week?

Jessica: Tomorrow night?

Will: Sounds good

The two peers invited the colleague who provided feedback (“Pete”) to chat in person, and they had a very productive discussion to clarify the feedback and collaborate on ways to improve.

This is how we designed the advocacy program to work: People obtain feedback, the advocate holds him or her accountable and the two discuss improvement face-to-face. You’ll notice that the advocate doesn’t possess any specialized skills in career development or domain-specific expertise; she merely acts to reinforce the accountability loop by deftly supporting Will and at the same time not simply coddling him in the name of “advocacy.” Politely asking a colleague — especially one who has given you explicit permission — how he or she plans to address important issues is something everyone can do.

All three people in the scenario above were honest and respectful with each other: the employee honestly wants to improve, the third person has given honest, respectful feedback and the advocate honestly and respectfully responds to the person for whom she is advocating. Contrast this with the less-than-respectful way that anonymous peer reviews can come off and the sometimes-perverse incentives that people have for being interested in feedback in the first place (as a means to the end of making a case for a raise), and you see that this is a very different dynamic from how many organizations operate.

An agile approach to the traditional performance review

How do you fix a broken legacy performance review system that gives people feedback only once a year and is directly tied to compensation? That is a challenge that we at Asynchrony undertook recently with the guidance of modern social science and — as you might expect from an organization infused with agile thinking — with agile principles.
First, the situation: We’ve tried to make the best of a traditional annual review system over the years, but we’ve come to the end of our rope. We realize it’s not working. That’s because of these dysfunctions:
  • Feedback is delayed (it “officially” happens only annually).
  • Feedback is impersonal (given anonymously through an electronic tool) and provided by people who may not have worked with you for months.
  • Reviews are given by managers who are forced to infer from and translate written feedback from others out of context.
  • The compensation adjustment is tied to the review, so the review is more of a leverage tool and not necessarily honest assessment of improvement needs and goals.
In short, it was a fairly traditional performance-review system, common to many organizations.

“It’s not about the review”

Reviews shouldn’t be the end; they should be the means to the end of improvement. Two structural aspects of the traditional, compensation-dependent reviews precluded real improvement: First, the length of the feedback loop, a year, is much too long for useful, actionable feedback. Second, tying compensation to the review almost by definition prioritizes the review over improvement. And it encourages people to view their work as a means to the external reward of compensation rather than the intrinsic reward of mastery, in which improvement plays a central role.
Most critically, as modern social science tells us, the linkage between compensation and performance creates an unhealthy crowding out of people’s intrinsic desire to improve. As Daniel Pink points out in Drive: The Surprising Truth About What Motivates Us, describing an experiment,
Adding a monetary incentive didn’t lead to more of the desired behavior. It led to less. The reason: It tainted an altruistic act and “crowded out” the intrinsic desire to do something good.
So we understood the problem — but how would we solve it?

How we approached it

We decided to approach the problem with a safe-to-fail experiment (as a limited-participation, voluntary pilot) focused on:
  • Continuous, personal feedback (rather than once annually)
  • Improvement as a means toward mastery and for its own sake (rather than a raise)
  • Peer relationships (rather than hierarchical)
We spent some time designing a simple but profound alternative program that we could run alongside our legacy program and then invited volunteers to participate. We called the new thing the Advocacy Program. In order to baseline the current state and validate our widespread anecdotal belief about our colleagues’ dissatisfaction with the review and feedback process, we surveyed the company with a single Net-Promoter-style question: How satisfied are you with our current process? The results? Let’s just say that a company whose product registered such an NPS score would be out of business. The good news was that we had lots of room to improve!

How we designed it

Inspired by agile principles like building projects around motivated individuals, face-to-face communication, simplicity, self-organization and reflective improvement, as well as recent research on motivation, we designed the program on the strength of peer relationships and the decoupling of compensation from review and improvement. Our professional-development team had previously created a kind of manifesto to guide our thinking, valuing*:
  • Intrinsic motivation over extrinsic motivation
  • Autonomous career management over prescriptive management
  • Individual identity over role identity
  • Lattice-shaped career paths over a ladder-shaped path
The result was an ongoing peer-to-peer relationship in which an employee guides a colleague in career options and feedback and advocates for career growth. Rather than Human Resources surveying various teammates of an employee from the past year, the employee would be responsible for gathering and incorporating feedback on an ongoing basis. Rather than HR scheduling a review with the employee’s manager, the employee would have the option to have his or her advocate schedule an autonomy-support meeting with relevant executive leaders.
Some rules:
  • You can advocate for only one person at a time, and you can’t have a reciprocal advocating relationship: This keeps the burden of advocating manageable for any one person and at the same time creates “chains” of people advocating for each other across the company (e.g., Mary advocates for John, who advocates for Luis, who advocates for Sarah…).
  • You still need to submit some kind of review to HR to comply with company policy (for the time being).
Why would people want to participate? We predicted benefits for employees such as:
  • Learning how to obtain and use feedback to really improve
  • Obtaining feedback on progress more frequently than once a year
  • Obtaining autonomy support from executive leaders to achieve his or her goals
And for advocates:
  • Helping a colleague grow and enjoy working at the company (thus experiencing naches and kvell)
  • Getting to know a colleague better
The professional-development team would support advocates and their people with training and tools: initial training, job aids and quarterly support sessions.
Rather than the employee’s manager writing the annual review, the advocate pairs with the employee to write it, so it’s open and transparent. But it also means that the employee owns it more. Working with executive management to find out what they were interested in, we offered a one-page template that included categories like stuff you’re proud of, stuff you want to improve on and how management can help you. The last item on the list is meant to be the basis for the autonomy-support meeting, an optional gathering facilitated by the advocate, the employee and relevant executive leaders. As Pink notes:
Researchers found greater job satisfaction among employees whose bosses offered “autonomy support.” These bosses saw issues from the employee’s point of view, gave meaningful feedback and information, provided ample choice over what to do and how to do it, and encouraged employees to take on new projects.

The results

Of the 210+ employees we had at the time we launched the pilot, 43 participated as employees, and 65 people in total, including executive leaders and advocates (some people participated both as employees and advocates), were involved. Because we ran the program alongside the legacy review program, we’ve had some confusion about the program elements (“Do I still have a performance review?” — no. “Do I still need to fill out peer reviews for people in the legacy program?” — yes.). But we’ve saved hundreds of hours in avoiding the traditional peer reviews, and management and employees have anecdotally reported satisfaction with the new setup. The freedom to obtain feedback on one’s own terms has led to some creative and individual approaches, and I’ve had the privilege of hearing firsthand some of the conversations that the emphasis on personal, face-to-face feedback has inspired. I participated in two autonomy-support meetings — one as an employee, one as an advocate for an employee — and was amazed at the strange new dynamic of true servant leadership that it generates. The program hasn’t been without its challenges, but we’re already improving the program to support as many people as want to do it, with the goal being everyone in the company advocating for each other and helping each other improve.
* while we find value in the items on the right, we value the items on the left more


Update: Lightning talk from London Lean Kanban Days 2016

The Kanban Iceberg [presentation]

Following are the slides from the talk that I recently presented at the Lean Kanban France 2014 conference. (It is nearly identical to the one I gave at the Lean Kanban UK 2014 conference.)