Service-Delivery Review: The Missing Agile Feedback Loop?

I’ve been working for many years with software-delivery teams and organizations, most of which use the standard agile feedback loops. Though the product demo, team retrospective and automated tests provide valuable awareness of health and fitness, I have seen teams and their stakeholders struggle to find a reliable construct for an important area of feedback: the fitness of their service delivery. I’m increasingly seeing that the service-delivery review provides the forum for this feedback.

What’s the problem?

Software delivery (and knowledge work in general) consists of two components, one obvious — product — and one not so obvious — service delivery. I’ve often used the restaurant metaphor to describe this: When you dine out, you as the customer care about the food and drink (product) but also about how the meal is delivered to you (service delivery). That “customer” standpoint is one dimension of the quality of these components — we might call it an external view. The other is the internal view — that of the restaurant staff. They, too, care about the product and service delivery, but from a different angle: Is the food fresh, kept in proper containers and cooked at the right temperatures? Do the staff work well together, complement each other’s skills and treat each other respectfully (allowing perhaps for the occasional angry outburst from the chef, excusable on account of “artist’s temperament”)? So we have essentially two dimensions: Component (Product and Service Delivery) and Viewpoint (External and Internal).
[Figure: feedback-quad-chart.001]
In software delivery, we have a few feedback loops to answer three of these four questions, and we have more-colloquial terminology for the internal-external dimension (“build the thing right” and “build the right thing”):
[Figure: feedback-quad-chart.002]
The problem is that we typically don’t have a dedicated feedback loop for properly understanding how fit for purpose our service delivery is. Yet that is often just as vital a concern for our customers — sometimes even more important than the fitness of the product, depending on whether product fitness is the delivery team’s concern or someone else’s. (One executive sponsor I worked with noted that he would rather attend a service-delivery review than a demo.) We may touch on things like the team’s velocity in the course of a demo, but we lack a lightweight structure for having a constructive conversation about this customer concern with the customer. (The team may discuss ways to go faster in a retrospective, but without the customer, they can’t have a collaborative discussion about speed and tradeoffs, nor about the customer’s true expectations and needs.)

A Possible Solution

The kanban cadences include something called the Service-Delivery Review. I’ve been incorporating it to help teams that struggle to have the conversation around their service-delivery fitness, and it appears to be providing what they need in some contexts.
[Figure: feedback-quad-chart.003]
David Anderson, writing in 2014, described the review as:
  • Usually a weekly (but not always) focused discussion between a superior and a subordinate about demand, observed system capability and fitness for purpose
  • Comparison of capability against fitness criteria metrics and target conditions, such as lead time SLA with 60 day, 85% on-time target
  • Discussion & agreement on actions to be taken to improve capability
The way that I define it is based on that definition with minor tweaks:
A regular (usually weekly) quantitatively-oriented discussion between a customer and delivery team about the fitness for purpose of its service delivery.
In the review, teams discuss any and all of the following (sometimes using a service-delivery review canvas):
  • Delivery times (aka Cycle/Lead/Time-In-Process) of recently completed work and tail length in delivery-time distribution
  • Blocker-clustering results and possible remediations
  • Risks and mitigations
  • Aging of work-in-progress
  • Work-type mix/distribution (e.g., % allocation to work types)
  • Service-level expectations of each work item type
  • Value demand ratio (ratio of value-added work to failure-demand work)
  • Flow efficiency trend
These are not performance areas that teams typically discuss in existing feedback loops, like retrospectives and demos, but they’re quite powerful: they’re important to building a common understanding of what matters to most customers, and in my experience they’re the source of some of the most unnecessarily painful misunderstandings. Moreover, because they are both quantitative and generally fitness-oriented, they help teams and customers build trust together and proactively manage toward greater fitness.
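Several of these measures can be derived from a plain list of per-item delivery times. Here is a minimal sketch in Python; the data and the tail indicator are my own illustration, not a canonical formula from the kanban literature:

```python
from statistics import quantiles

# Delivery times in days for recently completed work items (illustrative data)
delivery_times = sorted([2, 3, 3, 4, 5, 5, 6, 8, 9, 12, 14, 21, 35])

# Percentiles commonly quoted in service-level expectations ("85% within N days")
pctls = quantiles(delivery_times, n=100)
p50, p85, p98 = pctls[49], pctls[84], pctls[97]

# One rough tail-length indicator: how far the 98th percentile sits beyond the median
tail_ratio = p98 / p50

print(f"50th: {p50:.1f}d  85th: {p85:.1f}d  98th: {p98:.1f}d  tail: {tail_ratio:.1f}x")
```

A long tail (a high ratio) is often the first thing a customer notices, because it represents the items they waited longest for.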
[Figure: feedback-quad-chart.004]

Service-delivery reviews are relatively easy to do, and in my experience provide a high return on time invested. The prerequisites to having them are to:

  1. Know your services
  2. Discover or establish service-delivery expectations

Janice Linden-Reed very helpfully outlined in her Kanban Cadences presentation the practical aspects of the meeting, including participants, questions to ask and inputs and outputs, which is a fine place to start with the practice.


Afterword #1: In some places I’ve been, so-called “metrics-based retrospectives” have been a sort of precursor to the service-delivery review, as they include a more data-driven approach to team management. Those are a good start but ultimately don’t provide the same benefit as a service-delivery review because they typically don’t include the stakeholder who can properly close the feedback loop — the customer.

Afterword #2: Andy Carmichael encourages organizations to measure agility by fitness for purpose, among other things, rather than by practice adoption. The service-delivery review is a feedback loop that explicitly looks at this, and one that I’ve found fills a gap in what teams and their customers need.


Afterword #3: I should note that you don’t have to be in the business of software delivery to use a service-delivery review. If you, your team, your group or your organization provides a service of any kind (see Kanban Lens and Service-Orientation), you probably want a way to learn how well you’re delivering that service. I find that the Service-Delivery Review is a useful feedback loop for that purpose.


[Edited June 12, 2017] Afterword #4 (!): Mike Burrows helpfully and kindly shared his take on the service-delivery review, which he details in his new book, Agendashift: clean conversations, coherent collaboration, continuous transformation:

Service Delivery Review: This meeting provides regular opportunities to step back from the delivery process and evaluate it thoroughly from multiple perspectives, typically:
• The customer – directly, via user research, customer support, and so on
• The organisation – via a departmental manager, say
• The product – from the product manager, for example
• The technical platform – eg from technical support
• The delivery process – eg from the technical lead and/or delivery manager
• The delivery pipeline – eg from the product manager and/or delivery manager

I include more qualitative stuff than you seem to do, reporting on conversations with the helpdesk, summarising user research, etc


Book Review: Actionable Agile Metrics for Predictability

Daniel Vacanti’s new book, Actionable Agile Metrics for Predictability, is a welcome addition to the growing canon of thoughtful, experience-based writing on how to improve service delivery. It joins David Anderson’s (Kanban: Successful Evolutionary Change for Your Technology Business) and Mike Burrows’s (Kanban from the Inside) books in my list of must-reads on the kanban method, complementing those works with deeper insight into how to use metrics to improve flow.

Daniel’s message about orienting metrics to promote predictable delivery and flow — which he defines as “the movement and delivery of customer value through a process” — is primarily grounded in his experience helping Siemens HS. He includes the case study (which has been published previously and is valuable reading in itself) at the end of the book, keeping the rest of the book free from too many customer references even though he is clearly drawing on that pragmatic experience throughout.

As someone who for several years has been helping teams and organizations improve using the metrics Daniel talks about, I learned a tremendous amount. One of the reasons is that Daniel is particularly keen to clarify language, which I appreciate not merely as a former English major (or a pedant!) but because it helps us carefully communicate these ideas to teams and management, some of whom may be using these metrics in suboptimal ways or, worse, perverting them so as to give them a bad name and undermine their value. Some examples: the nuanced difference between control charts and scatterplots, and clear definitions of Little’s Law (and violations thereof), especially as related to projections and cumulative flow diagrams. I certainly gained a lot of new ideas, and Daniel’s explanations are so thorough that I suspect even novice coaches, managers, team leaders and team members won’t be overwhelmed.
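The core of Little’s Law can be shown with back-of-the-envelope arithmetic. This is just the simple average form of the law with invented figures, not an example from the book:

```python
# Little's Law (average form): avg_WIP = throughput * avg_cycle_time
# The units must agree: (items/day) * days = items. Figures are invented.
throughput_per_day = 2.5      # average completed items per day
avg_cycle_time_days = 8.0     # average days an item spends in process

avg_wip = throughput_per_day * avg_cycle_time_days
print(f"Expected average WIP: {avg_wip:.0f} items")
```

The “violations” Daniel discusses arise when the system isn’t stable (for example, when items enter or leave the process without flowing all the way through), which is exactly when naive projections go wrong.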

As for weaknesses, I felt that the chapter on the Monte Carlo method lacked the depth of the other chapters. And I came away wishing that Daniel had included some diagrams showing projections using percentiles from scatterplot data. But those are minor complaints for a book that constantly had me jotting notes in my “things to try” list.
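For the curious, the general shape of a Monte Carlo throughput forecast is easy to sketch. The following is my own toy illustration with invented data, not code from the book:

```python
import random
from statistics import quantiles

random.seed(7)  # fixed seed so the sketch is repeatable

# Historical daily throughput samples (items finished per day) - invented data
daily_throughput = [0, 1, 0, 2, 1, 3, 0, 1, 2, 1, 0, 2, 1, 1, 3]

def days_to_finish(backlog_size):
    """Simulate one possible future by resampling historical days."""
    done = days = 0
    while done < backlog_size:
        done += random.choice(daily_throughput)
        days += 1
    return days

# Run many simulated futures for a 30-item backlog, then read off percentiles
trials = [days_to_finish(30) for _ in range(10_000)]
pctls = quantiles(trials, n=100)
p50, p85 = pctls[49], pctls[84]
print(f"50% chance within {p50:.0f} days, 85% chance within {p85:.0f} days")
```

The percentile you report is a risk statement: quoting the 85th rather than the 50th is the difference between a commitment you’ll hit most of the time and a coin flip.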

Overall, I loved how Daniel pulled together (no pun intended), for the purpose of flow, several metrics and tools that have often been independently implemented and used and whose purpose — in my experience — was not completely understood. The book unifies these and helps the reader see the bigger picture of why to use them in a way I had not seen before. If you’re interested in putting concepts and tools like Little’s Law, cumulative flow diagrams, delivery-time scatterplots and pull policies into action, this book is for you.

Other observations:

  • The book has a very helpful and clarifying discussion of classes of service, namely the difference between using CoS to commit to work (useful) and using it to prioritize committed work (hazardous for predictability).
  • It also had a particularly strong treatment of cumulative flow diagrams.
  • Daniel does a lot of myth debunking, which I appreciate. Examples: that work items need to be of the same size, or that kanban doesn’t have commitments.
  • The tone is firm and confident — you definitely know where Daniel stands on any issue — without being strident.

What is Autonomy Support?

I wrote a few weeks ago about the advocacy program, our distributed peer-to-peer continuous-improvement program. One of the important components of the program is autonomy support. But what is that? As Daniel Pink notes in his book Drive:

Researchers found greater job satisfaction among employees whose bosses offered “autonomy support.” These bosses saw issues from the employee’s point of view, gave meaningful feedback and information, provided ample choice over what to do and how to do it, and encouraged employees to take on new projects.

In the advocacy program, autonomy-support meetings are an optional opportunity for employees to meet with executive management to give feedback on how the executive leaders can help the employee realize career goals in the organization. The meeting can be scheduled by the employee’s advocate, who also can be part of the meeting, acting as an intermediary or ambassador for the employee to the manager(s). Multiple managers may be part of the meeting, depending on which ones the advocate and employee feel are vital and able to help.

The dynamic should be one in which the traditional organizational structure is flipped upside-down:

Autonomy-support meetings

Therefore, rather than the traditional dynamic of the employee “working for” the manager, in the autonomy-support meeting the servant-leader — in this case, the role of executive leader — should have the mindset of “working for” the employee.

A good starting point for the discussion is the “autonomy-support feedback for executive leaders” section of the employee’s review. Basically, it’s whatever the employee needs executive leaders to do so that he or she can do the job better or reach goals. This might be a request for a different project or role, more time to explore a particular skill or technology, or simply a clearer vision or clearer expectations. Premised on the executive leader’s commitments to the employee, the employee has the right to ask the executive leader for support in various career-development goals, including timelines for when those things would occur.

Questions that the executive leader might want to ask:

  • How can I help you realize your goals in the next year?
  • By when would you like me to achieve these things for you?
  • In what areas have I failed to help you in the past, and how can I improve?
  • What kind of things would help you feel more engaged?
  • How can I help smooth your path toward mastery of certain skills?
  • What does success look like for you, and how can I help you succeed?

The peer-to-peer feedback-improvement cycle in action

I recently posted about the new peer-to-peer continuous-improvement program that we’re doing at Asynchrony. One of the key aspects of the program is its emphasis on feedback. Direct feedback is often difficult to deal with because people aren’t accustomed to it (especially if it’s critical), so we encourage face-to-face discussions in safe environments. Here’s a real conversation (names changed) that occurred over instant messaging between an employee (“Will”) and his advocate (“Jessica”), posted with their permission. Will has just provided his advocate with his self-review (a one-page overview of recent accomplishments, future goals and ways to improve) and has been collecting feedback from some of his personal stakeholders, which he has shared with Jessica.

Will:

What did you think of my self-review?

Jessica:

I think it was very in depth and really liked the explanation of why you did what you did but Pete raised a good question as to how you like to feel like the smartest person in the room… I think we should maybe address that in goals for 2014… Please advise

Will:

That’s something I’d like us to talk through. I think its good feedback but its also foreign to me. It feels like telling a razor to be less sharp so the other razors don’t feel bad. I was brought up and live in an environment where all the razors are helping each other be as sharp as they can.

Jessica:

We definitely need to work on being humble then… lol

Will:

I’m GREAT at being humble. 🙂

Jessica:

There is another saying, if you are the smartest person in the room… Maybe you should exit stage left…

Will:

I never claim to be the smartest person in the room.

Jessica:

Sometimes we don’t have have to say things to make that impression.

Will:

Yep. This is something we should talk about over a beer. Are you free after work today or some day this week?

Jessica:

Tomorrow night?

Will:

Sounds good

The two peers invited the colleague who provided feedback (“Pete”) to chat in person, and they had a very productive discussion to clarify the feedback and collaborate on ways to improve.

This is how we designed the advocacy program to work: People obtain feedback, the advocate holds him or her accountable and the two discuss improvement face-to-face. You’ll notice that the advocate doesn’t possess any specialized skills in career development or domain-specific expertise; she merely acts to reinforce the accountability loop by deftly supporting Will and at the same time not simply coddling him in the name of “advocacy.” Politely asking a colleague — especially one who has given you explicit permission — how he or she plans to address important issues is something everyone can do.

All three people in the scenario above were honest and respectful with each other: the employee honestly wants to improve, the third person has given honest, respectful feedback and the advocate responds honestly and respectfully to the person for whom she is advocating. Contrast this with the less-than-respectful way that anonymous peer reviews can come off and the sometimes-perverse incentives that people have for being interested in feedback in the first place (as a means to the end of making a case for a raise), and you see that this is a very different dynamic from how many organizations operate.


An agile approach to the traditional performance review

How do you fix a broken legacy performance review system that gives people feedback only once a year and is directly tied to compensation? That is a challenge that we at Asynchrony undertook recently with the guidance of modern social science and — as you might expect from an organization infused with agile thinking — with agile principles.
First, the situation: We’ve tried to make the best of a traditional annual review system over the years, but we’ve come to the end of our rope. We realize it’s not working. That’s because of these dysfunctions:
  • Feedback is delayed (it “officially” happens only annually).
  • Feedback is impersonal (given anonymously through an electronic tool) and provided by people who may not have worked with you for months.
  • Reviews are given by managers who are forced to infer from and translate written feedback from others out of context.
  • The compensation adjustment is tied to the review, so the review is more of a leverage tool and not necessarily an honest assessment of improvement needs and goals.
In short, it was a fairly traditional performance-review system, common to many organizations.

“It’s not about the review”

Reviews shouldn’t be the end; they should be the means to the end of improvement. Two structural aspects of the traditional, compensation-dependent reviews precluded real improvement: First, the length of the feedback loop — a year — is much too long for useful, actionable feedback. Second, tying compensation to the review almost by definition prioritizes the review over improvement. And it encourages people to view their work as a means to the external reward of compensation rather than the intrinsic reward of mastery and the role that improvement plays in it.
Most critically, as modern social science tells us, the linkage between compensation and performance creates an unhealthy crowding out of people’s intrinsic desire to improve. As Daniel Pink points out in Drive: The Surprising Truth About What Motivates Us, describing an experiment,
Adding a monetary incentive didn’t lead to more of the desired behavior. It led to less. The reason: It tainted an altruistic act and “crowded out” the intrinsic desire to do something good.
So we understood the problem — but how would we solve it?

How we approached it

We decided to approach the problem with a safe-to-fail experiment (as a limited-participation, voluntary pilot) focused on:
  • Continuous, personal feedback (rather than once annually)
  • Improvement as a means toward mastery and for its own sake (rather than a raise)
  • Peer relationships (rather than hierarchical)
We spent some time designing a simple but profound  alternative program that we could run alongside our legacy program and then invited volunteers to participate. We called the new thing the Advocacy Program. In order to baseline the current state and validate our widespread anecdotal belief about our colleagues’ dissatisfaction with the review and feedback process, we surveyed the company with a single Net-Promoter-style question: How satisfied are you with our current process? The results? Let’s just say that a company whose product registered such an NPS score would be out of business. The good news was that we had lots of room to improve!

How we designed it

Inspired by agile principles like building projects around motivated individuals, face-to-face communication, simplicity, self-organization and reflective improvement, as well as recent research on motivation, we designed the program on the strength of peer relationships and the decoupling of compensation from review and improvement. Our professional-development team had previously created a kind of manifesto to guide our thinking, valuing*:
  • Intrinsic motivation over extrinsic motivation
  • Autonomous career management over prescriptive management
  • Individual identity over role identity
  • Lattice-shaped career paths over a ladder-shaped path
The result was an ongoing peer-to-peer relationship in which an employee guides a colleague in career options and feedback and advocates for career growth. Rather than Human Resources surveying various teammates of an employee from the past year, the employee would be responsible for gathering and incorporating feedback on an ongoing basis. Rather than HR scheduling a review with the employee’s manager, the employee would have the option to have his or her advocate schedule an autonomy-support meeting with relevant executive leaders.
Some rules:
  • You can advocate for only one person at a time, and you can’t have a reciprocal advocating relationship: This keeps the burden of advocating manageable for any one person and at the same time creates “chains” of people advocating for each other across the company (e.g., Mary advocates for John, who advocates for Luis, who advocates for Sarah…).
  • You still need to submit some kind of review to HR to comply with company policy (for the time being).
Why would people want to participate? We predicted benefits for employees such as:
  • Learning how to obtain and use feedback to really improve
  • Obtaining feedback on progress more frequently than once a year
  • Obtaining autonomy support from executive leaders to achieve his or her goals
And for advocates:
  • Helping a colleague grow and enjoy working at the company (thus experiencing naches and kvell)
  • Getting to know a colleague better
The professional development team would support advocates and their people via training and tools, such as initial training, job aids and quarterly support sessions.
Rather than the employee’s manager writing the annual review, the advocate pairs with the employee to write it, so it’s open and transparent. But it also means that the employee owns it more. Working with executive management to find out what they were interested in, we offered a one-page template that included categories like stuff you’re proud of, stuff you want to improve on and how management can help you. The last item on the list is meant to be the basis for the autonomy-support meeting, an optional gathering facilitated by the advocate and attended by the employee and relevant executive leaders. As Pink notes:
Researchers found greater job satisfaction among employees whose bosses offered “autonomy support.” These bosses saw issues from the employee’s point of view, gave meaningful feedback and information, provided ample choice over what to do and how to do it, and encouraged employees to take on new projects.

The results

Of the 210+ employees we had at the time we launched the pilot, 43 participated in the program as employees, and 65 people in total were involved, including executive leaders and advocates (some participated as both employees and advocates). Because we ran the program alongside the legacy review program, we’ve had some confusion about the program elements (“Do I still have a performance review?” — no. “Do I still need to fill out peer reviews for people in the legacy program?” — yes.). But we’ve saved hundreds of hours in avoiding the traditional peer reviews, and management and employees have anecdotally reported satisfaction with the new setup. The freedom to obtain feedback on one’s own terms has led to some creative and individual approaches, and I’ve had the privilege of hearing firsthand some of the conversations that the emphasis on personal, face-to-face feedback has inspired. I participated in two autonomy-support meetings — one as an employee, one as an advocate for an employee — and was amazed at the strange new dynamic of true servant leadership that it generates. The program hasn’t been without its challenges, but we’re already improving it to support as many people as want to participate, with the goal of everyone in the company advocating for each other and helping each other improve.
* while we find value in the items on the right, we value the items on the left more

The Kanban Iceberg [presentation]

Following are the slides from the talk that I recently presented to the Lean Kanban France 2014 conference. (It is nearly identical to the one I gave at the Lean Kanban UK 2014 conference.)


What is a Flow Manager?

Cover image from David Anderson's Kanban book. The character on the right is the persona for a flow manager, in my conception.


Agree to pursue incremental, evolutionary change.

— one of the foundational principles of the kanban method

Manage flow.*

— one of the six core practices of the kanban method

How do teams continually improve and mature toward greater fitness for purpose? That’s something we at Asynchrony are keen to discover. One recent experiment to catalyze improvement is the trial of what we think is an emerging role in our particular work culture: the flow manager.

We actually got the idea from Christophe Achouiantz and his experience with kanban at Sandvik IT. Earlier this year, he wrote that:

The flow manager’s role does not actually exist in the Kanban method (there are no prescribed roles whatsoever), but – in Sandvik IT’s context – we found out that there is the need for someone from within the team to take charge for the implementation to stick. The role has some similarities to a Scrum Master role, once removed from the project management aspects it often is loaded with. The purpose of the flow manager is to make the team reflect and act: follows the policies it has created, create new ones when needed, discuss and act on exceptions (issues and opportunities), experiments to find creative solutions, etc. The flow manager inspires, challenges and coaches. This role really is an extension of the coach and is meant to take over when the coach phases out.

That was a great starting point for us: Our team model includes team leads, who are often tech leads with project-management responsibilities. But because we are a (very) flat organization, we do not have the middle-management layer of many organizations where kanban is often best catalyzed. As David Anderson writes,

Kanban is a change designed to be led from the middle. Bottom-up initiatives tend to stall with only local improvements and boards that are best described as team (or personal) Kanban. Without middle-management support to improve service delivery to external customers there is generally no momentum to look at the whole workflow and focus on improving flow of work.

This accurately describes many of our teams, who focus on delivery — something we do well — but typically don’t often make time to “look at the whole workflow and focus on improving flow of work.” We have busy executive leaders and busy delivery-team members — but generally speaking we have no one “in the middle” to focus on improving service delivery per se. As David Mann writes in Creating a Lean Culture,

     … this [someone being available to respond right away] means focusing on the process as it operates from beginning to end, not only at the completed component or finished goods end. That’s why lean designs require so many team leaders to maintain the process, to spot problems in upstream intermediate or subprocess areas, and to respond right away to prevent or minimize missing takt [rate of customer demand] at the outlet end of the process. An integral part of the lean management system is having the appropriate number of team leaders on the floor to focus on the process. It requires a leap of faith not to scrimp on this crucial part of the system; having enough leaders available to monitor the process, react to problems, and work toward root cause solutions is an investment that pays off in business results. But at first, and from a conventional perspective, team leaders just look like more overhead.

Our teams are generally reliable at retrospecting: Teams have recurring retrospectives to celebrate successes and target improvements usually on a weekly or biweekly cadence. And we’ve developed a cadre of “volunteer” facilitators so that every team can benefit from an objective, trained facilitator. But even for teams who practice this weekly, it’s not the kind of daily, “on the floor” improvement leadership that Mann is talking about. (Our facilitators spend the vast majority of their time working in their own teams and interact with the team for whom they facilitate one hour a week, at most.)

Enter the flow manager

Our teams understand kanban as a visual-management practice. Nearly every team has a kanban board. But until recently most teams haven’t fully understood the values and benefits of the kanban method. New hires play the Get Kanban game. But the learning sometimes fades away once they start work and they get busy delivering. After conducting a depth-of-Kanban assessment for a few teams, we found that though people are indeed interested in improving, they don’t necessarily know where to start — or how to continue. Earlier, I used “mature toward greater fitness for purpose” rather than “evolve toward” intentionally: In my experience, teams don’t evolve (which implies a kind of passive or natural course) so much as they intentionally mature toward bettering themselves and their delivery. It’s a matter of intentionality. We needed someone to provide that intentional and disciplined approach to improvement, the flow manager. We see the flow manager as a catalyst of change within teams that believe themselves too busy to focus on improving. Mann also writes about how bad habits are less “broken” than they are “extinguished.” We see the flow manager as a bad-habit extinguisher.

So what would you say you … do?

First, a word on the title: “Manage flow” is one of the core practices of the kanban method, so it made sense to emphasize that. With Christophe’s description as inspiration, we enumerated standard work for the role:

Project kickoff
  • Preview the depth-of-kanban assessment with the team and educate on values
  • Work with the team, customer and stakeholders to identify fitness criteria
  • Visibly publish, as explicit policies, shared expectations on work-item selection and quality criteria

Daily
  • At standup, ask: Are you seeing flow?
  • Track where blockages occur and conduct root-cause analysis when they do
  • Help the team follow the policies it has created (e.g., WIP limits and work selection) and discuss and act on exceptions to policies (issues and opportunities) at the after-standup meeting
  • Ensure that all of the team’s work items are organized visually by type, state, parallel work stream and class of service
  • Help the team size and select work items to optimize economic outcomes

Weekly
  • Work with the executive sponsor to communicate and remove system-wide blockers
  • Oversee improvement experiments (clearly state hypothesis, measurements, timings) identified at retrospectives
  • Work with the retrospective facilitator to schedule regular meetings
  • Help the team create new policies or revise existing ones when needed
  • Report and make visible progress, demand and capability externally, both to the customer and the wider organization (cycle times, flow efficiency, percent accurate and complete)

Monthly
  • Facilitate the operations review within the customer “division”
  • Ensure that the team’s metrics have a clear relationship to the system’s purpose

Every 1–6 months
  • Conduct the depth-of-kanban assessment
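Flow efficiency, one of the capability measures reported weekly above, is the ratio of active (touch) time to total elapsed time for an item. A minimal sketch, with invented durations:

```python
# Flow efficiency = active work time / total elapsed time, per work item.
# The items and durations below are invented for illustration.
items = [
    {"name": "A", "active_days": 3, "elapsed_days": 10},
    {"name": "B", "active_days": 2, "elapsed_days": 4},
    {"name": "C", "active_days": 5, "elapsed_days": 25},
]

efficiencies = [i["active_days"] / i["elapsed_days"] for i in items]
avg_efficiency = sum(efficiencies) / len(efficiencies)
print(f"average flow efficiency: {avg_efficiency:.0%}")
```

Low flow efficiency (most elapsed time spent waiting) is common, and it points improvement efforts at queues and blockers rather than at making people work faster.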

As such, flow manager is a servant-leader role. As Mike Burrows writes in Kanban from the Inside, leadership can be as simple as looking at the work board and asking:

  • What is stuck today?
  • Are you seeing flow?
  • Where do blockages repeatedly occur?
  • Why is that?

Our teams generally practice standup meetings and are self-organizing enough to ask the first question. But beyond identifying what is stuck, teams don’t focus on the other, just-as-important questions. Mike asks, “What if this kind of leadership doesn’t come naturally to your organization?” This is where the flow manager helps.

End notes

I like how Christophe stresses the “in our context.” As with any kanban implementation, you need to respect current roles. Flow manager may not be helpful in your context; it may actually be harmful! I mentioned that we’re viewing the pilot of this role as an experiment, so be open to the possibility of it not working. But use some quantitative way to assess whether it does or doesn’t. (We started with the kanban assessments to benchmark.)


*The flow of work items through each state in the workflow should be monitored and reported – often referred to as Measuring Flow. By flow we mean movement. We are interested in the speed of movement and the smoothness of that movement. Ideally we want fast smooth flow. Fast smooth flow means our system is both creating value quickly, which is minimizing risk and avoiding (opportunity) cost of delay, and is also doing so in a predictable fashion.