Value Stream Reflection Mechanisms
We know strong Value Streams have well-defined strategies to improve efficiency and effectiveness. So let’s explore the key Value Stream events that happen on a regular cadence to create alignment, rhythm, and feedback on processes, signals, and tensions inside and outside of a Value Stream.
Reflection Mechanisms to improve effectiveness
During strategy reviews, Value Streams will look at where the product or service as a whole is heading, taking into account customers, stakeholders, the market landscape, risks and opportunities, as well as current demands and capabilities.
Value Stream leaders will ask themselves questions like ‘Are we heading in the right direction?’, ‘Do we need to course-correct?’, and ‘What’s our next big bet?’ What makes this process exhilarating is that digital Value Streams allow convex, asymmetric payoffs: the Value Stream is always on the verge of potentially achieving something big.
At the end of the Strategy Review, a medium to long-term view will be defined or reiterated.
They might also take the opportunity to review their service catalogue, choosing to introduce new services or decommission old ones, and they might make strategic investment decisions to support necessary step changes in service level.
Strategy reviews usually happen quarterly.
The daily course of a Value Stream operation can be a bit messier than we expect. While producing new value for customers and stakeholders, Value Streams need to stop here and there to fix unplanned defects and incidents, refactor existing areas of products to improve maintainability, evolve infrastructure capabilities, improve performance, execute security and compliance requirements, and provide support to functional areas.
At any given point in time, a Value Stream will have several competing priorities fighting for capacity. Among them are loads of urgent and important demands: some strategic, some operational. Finding, among them all, which one is the next right one is highly challenging, and it can also be what separates your product or service from the pack.
Here is something important to note: effectiveness can’t be determined a priori. While strategy meetings help set North Stars, what really ensures effectiveness is daily execution. A strategy’s effectiveness is only a presupposition; actual effectiveness comes from a series of contextual and temporal micro-decisions made daily by teams and individual contributors on the ground. As they say, execution is King.
What is right today might not have been right last week and might not be right next week. For instance, what is right today depends on whether Team Alpha has finished a key dependency on the critical path of the product. What is right today depends on who has just become available: Maria, Jackson, or Diana.
That’s why replenishment meetings are so essential to ensuring effectiveness. They serve the purpose of scanning the pool of options, separating the best options from the merely good ones, and choosing which are the next right ones to start.
To increase the chances of getting it right more often than not, you’ll need as much big picture thinking and situational awareness as you can get. That’s why these decisions are made by teams, leveraging the wisdom of the crowd, not by HiPPOs (highest paid person's opinion).
The frequency of Replenishment Meetings usually varies from once a week to daily, depending on the team's batch size and flow of work.
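The contextual, temporal selection described above can be sketched as a simple filter-and-rank pass over the pool of options. A minimal Python sketch, where the option names, the dependency check, and the notion of "available skills" are all hypothetical illustrations, not a prescribed method:

```python
from dataclasses import dataclass, field

@dataclass
class Option:
    name: str
    cost_of_delay: float  # assumed relative urgency/value score
    blocked_by: set = field(default_factory=set)    # unfinished dependencies
    skills_needed: set = field(default_factory=set)

def next_right_options(pool, finished_dependencies, available_skills, n=2):
    """Filter options that are startable *today*, then rank them.

    'Startable' is contextual and temporal: a dependency finishing or a
    person becoming available changes the answer from one day to the next.
    """
    startable = [
        o for o in pool
        if o.blocked_by <= finished_dependencies      # all dependencies done
        and o.skills_needed <= available_skills       # someone free can do it
    ]
    return sorted(startable, key=lambda o: o.cost_of_delay, reverse=True)[:n]

pool = [
    Option("Checkout fix", 8.0, blocked_by={"alpha-api"}, skills_needed={"backend"}),
    Option("Onboarding flow", 5.0, skills_needed={"frontend"}),
    Option("Compliance report", 6.0, skills_needed={"backend"}),
]
# Team Alpha has not finished the 'alpha-api' dependency yet, so the
# highest-scoring option is not yet startable.
picks = next_right_options(pool, finished_dependencies=set(),
                           available_skills={"backend", "frontend"})
print([o.name for o in picks])
```

The point of the sketch is that the ranking inputs (dependencies, availability) change daily, which is exactly why the decision is revisited at each replenishment rather than fixed in a quarterly plan.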
With Value Streams typically operating under capacity constraints, using daily meeting time for individuals to report status and progress is certainly a misuse of capacity. Agile teams operating in strong Value Streams leverage a range of asynchronous communication tools such as Slack, and real-time information coming from their VSM platforms, for that purpose.
When well executed, daily meetings are a tool to ensure both effectiveness and efficiency. Teams use them to discover sooner when they are operating under different assumptions, when people are going off-track or spinning their wheels, or when someone is blocked by something outside their control.
It’s also an opportunity to handle unplanned work and to discuss the best way of completing the work currently in the system within the customer’s service level expectation.
Reflection Mechanisms to improve efficiency (or Service Level)
Service Delivery Review
Golden rule: Service Delivery Reviews (SDRs) are contextual and temporal. They happen in the context of one service and always look at the last period, whatever it is: weekly, fortnightly, or no later than monthly.
During an SDR, we look at the completed work (past), work-in-process (present) and the profile of upcoming work (future). The purpose is to:
1. Assess to what degree the Value Stream has been meeting customers’ and stakeholders’ Service Level Expectations (SLEs) and debrief. Teams commonly look at:
- Lead time and its predictability
- Productivity and its predictability
- Quality and its predictability
- Flow Efficiency
2. If SLEs were missed for any completed work, debrief on why and what can be done to avoid it happening again. Typical reasons include:
- Uncontrolled WIP
- Large batch size
- A burst of unplanned work
- Unresolved dependency
3. Discuss whether each work-in-process (WIP) item is likely to meet the customer's SLE and what can be done to maximise the chances of fulfilling it.
4. Look at vital leading indicators to understand whether the delivery system as a whole is improving, has plateaued, or is degrading. That includes:
a. Flow Efficiency
- Visualise key sources of delay and ask ‘how might we’ (HMW) questions to eliminate or reduce them
b. Flow Debt
- WIP Count
- WIP Age
- % of stale work
c. Capacity vs Demand (Are we accumulating Inventory?)
- Inventory Age
- Time to commit
- Commitment Rate
- % of work likely to be late
d. InFlow vs OutFlow (Are we accumulating WIP?)
e. Discard rates (before and after starting)
5. Analyse the distribution of sources and types of demands being processed and how that matches strategic governance targets. That should inform replenishment meetings ahead.
6. Analyse the profile of work in Inventory
7. Review explicit policies
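Several of the metrics reviewed above can be computed directly from per-item timestamps. A minimal sketch, assuming a hypothetical list of completed items where each item records its lead time and its actively-worked time (field names and the SLE value are illustrative):

```python
from statistics import quantiles

# Illustrative completed items: (lead_time_days, active_work_days)
completed = [
    (4, 2), (6, 3), (12, 4), (5, 2), (9, 3), (20, 5), (7, 3), (3, 2),
]

sle_days = 10  # assumed Service Level Expectation: "delivered in 10 days or less"

lead_times = [lt for lt, _ in completed]

# Lead time and its predictability: e.g. the 85th percentile vs the SLE.
p85 = quantiles(lead_times, n=100)[84]
within_sle = sum(lt <= sle_days for lt in lead_times) / len(lead_times)

# Flow Efficiency: share of elapsed time spent actively working (vs waiting).
flow_efficiency = sum(active for _, active in completed) / sum(lead_times)

print(f"85th percentile lead time: {p85:.1f} days (SLE: {sle_days})")
print(f"% delivered within SLE:    {within_sle:.0%}")
print(f"Flow efficiency:           {flow_efficiency:.0%}")
```

A low flow efficiency with an acceptable hit rate usually means items spend most of their lead time waiting in queues, which is where the ‘how might we’ questions about sources of delay come in.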
Operations Review
An Operations Review (OR) serves a similar purpose, but at a higher level of analysis. While an SDR looks at individual nodes of a Value Stream, an Operations Review looks at the full customer-facing, end-to-end Value Stream. The difference? A sharp focus on the queues between services, targeting global optimisation.
Operations Reviews happen less often than Service Delivery Reviews. If you’re running a monthly SDR, you might consider a quarterly OR; if a weekly SDR, a monthly OR.
Risk Review
The purpose of Risk Reviews is explicit in the name. They typically follow the cadence of SDRs and ORs. The scope of risk analysis is contextual and might include, for instance, the risks of:
- Compromising customer and stakeholder SLEs
- Missing deadline for demands that have a fixed delivery date
- Compromising security and customer data
- Breaching compliance with norms and regulations
- The accumulation of technical debt
Risks are always measured and recorded in two dimensions:
- The likelihood of the risk actually happening
- The impact should the risk materialise
The impact is usually measured as the time required to remediate the risk should it materialise. The sum, across all recorded risks, of each risk’s impact (in days) multiplied by its likelihood of occurring becomes the risk exposure, which becomes a liability to be managed.
Typically, risks with high impact or high likelihood have mitigation plans associated with them.
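The exposure calculation described above is straightforward to make concrete. A minimal sketch with hypothetical risks, likelihoods, and impact values (the mitigation thresholds are also illustrative, not a standard):

```python
# Each risk: (name, likelihood 0..1, impact in days to remediate)
risks = [
    ("Customer data breach",       0.05, 60),
    ("Missed fixed-date delivery", 0.30, 10),
    ("Tech-debt slowdown",         0.60,  5),
]

# Risk exposure: sum of impact (days) x likelihood, across recorded risks.
exposure_days = sum(likelihood * impact for _, likelihood, impact in risks)
print(f"Risk exposure: {exposure_days:.1f} days")  # the liability to manage

# High-impact or high-likelihood risks get mitigation plans
# (thresholds here are assumed for illustration).
needs_mitigation = [
    name for name, likelihood, impact in risks
    if likelihood >= 0.5 or impact >= 30
]
print("Mitigation plans needed for:", needs_mitigation)
```

Note that a rare-but-severe risk and a frequent-but-mild one can contribute the same exposure, which is why the two dimensions are recorded separately rather than only as their product.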
At every Risk Review session, Value Streams:
- Review existing risks:
  - Have we managed to reduce likelihood?
  - Have we managed to reduce impact?
  - How are we progressing with the mitigation plans?
  - Does the risk need to be escalated?
  - Does it require a capital investment that needs to be approved?
- Remove risks that no longer should be monitored.
- Add newly identified risks, discussing likelihood, impact, and mitigation plan if necessary.