Metrics to Measure your Agile Transformation
We are living in a time where nearly every organisation is investing in transforming their ways of working to enable their shift to a digital paradigm.
Having worked in this industry for a number of years, I have witnessed both our transformation approaches mature and our customers' needs become more sophisticated. Organisations today need more than anecdotal evidence that their transformation efforts are succeeding. They are looking for tangible results that demonstrate a return on investment.
As an agile coach or a transformation consultant engaging with, or within, an organisation to enable an uplift, it is prudent to plan ahead and design a coaching approach that will enable you to demonstrate results.
The approach I use for my ways of working uplifts is represented in the diagram below. It borrows heavily from the Elabor8 and Remote:AF team launch pattern IP available to us for our engagements.
Through my experience working with teams in medium to large organisations in Telco, Higher Education, Banking and Government over the years, I have found it is a common management perception that a particular team, business unit or function is what needs improvement.
However, as coaches, it is essential that we broaden our focus to include not only the team but also the system of work they are part of, to ensure the changes we introduce will be supported and sustained.
On one of my recent engagements, for an organisation operating in the share-trading domain, the scope was to work with their delivery team to uplift their ways of working, with the intention of enabling the team to scale rapidly to cater for a massive planned business expansion.
Let’s explore the approach I followed to support them and demonstrate tangible outcomes.
Step 1: Team Baseline
A solid baselining approach helps you build a strong relationship with the team you are working with and allows you to gain a deep understanding of their system of work, including:
- The types of work done by them and who demands work from them.
- How they receive, make sense of, prioritize, plan and complete work.
- The internal and external constraints that limit their ability to deliver work.
- The different interactions and handoffs involved.
- Whether they are clear on their purpose, roles, responsibilities and accountabilities.
What did I learn by baselining the delivery team?
- Purpose & Culture: The team had a healthy work culture and a very strong sense of purpose focused on providing a best-in-class experience and support to their customers.
- Collaboration & Communication: The team communicated well internally, although there were gaps in external communication. Work was being handed over by business stakeholders with limited collaboration, which was a concern. Business stakeholders also named the lack of transparency on the progress of requested work as a key pain point.
- Sources of demand: The team was receiving work directly from many internal and external sources which had a detrimental impact on prioritisation and focus due to the constant shifts in priority.
- Tech debt: The team struggled with tech debt accrued over a long period of time and was also dealing with a significant support load due to System Administration being done through code.
- External dependencies: The team was dealing with unpredictability and complexity when committing to outcomes, often due to integrations with 3rd party suppliers.
- Events: The team used daily stand-ups to stay in sync, but did not follow a disciplined approach to planning and delivery. There was no focus on the cadence of ceremonies or on the flow of work. Most concerning was the lack of a continuous improvement mechanism, such as retrospectives, to reflect and improve.
- Readiness of Work: A lack of refinement and planning resulted in premature commitment to work, and there was no process to assess the "readiness" of work, so scope was discovered in-flight. This led to longer lead times, frequently missed commitments and, ultimately, customer dissatisfaction.
Step 2: Design System of Work
The next step involves re-designing the system to promote the Flow of work.
The intelligence gathered from the baseline is instrumental, and my preferred approach to this design process is to be collaborative. While it may be easier to use the data and my experience to re-design the system for optimal performance, the teams are much more likely to successfully adopt any change if they are a party to the redesign process.
What were the system design interventions the collaborative process identified?
- Visualising work and surfacing hidden waiting times through an End to End workflow
- Designing feedback loops closely aligned to workflow enabling collaborative decision making to move work forward
- Establishing a single view of priority through Product Ownership
- Refining work continuously, accumulating shared understanding of future work
- Breaking down work, planning work in small batches, and limiting Work in Progress (WIP) to control context switching
- Designing interactions with dependent enabler system components e.g. support and enhancements requests to ensure softer handoffs
Step 3: Tooling setup for measuring results
The next step involves measuring the impact of the change. In days gone by, this was often done manually, counting cards on walls day by day and week by week. We are fortunate to have sophisticated work management tools today that help us record, synthesise and visualise the data with very little manual effort.
In my experience, the right radiators showing system performance improving can have a very powerful reinforcing effect on the teams.
What Tools do you need?
- A work management system (We used Atlassian Jira) with a software project configured to reflect the end to end workflow we designed.
- A value stream management platform (We used Flomatika) - with an appropriate configuration and setup to track portfolio and team levels for service delivery performance and insights.
Step 4: Measure Key Performance Indicators
A key element of any system implementation is the ability to track Service Delivery Performance metrics. This enables you to demonstrate the impact of the changes you are bringing to the ecosystem from a customer perspective.
This continues to pay forward, particularly as it supports teams and organisations in adopting a data-driven approach to continued improvement through constant monitoring of service delivery health.
What did we measure?
- Lead Time: Time taken to complete each work item, measured against the Service Level Expectation (SLE) set, examining how long it takes for the team to deliver customer value
- Work in Progress: Amount of work the team has in-flight at a given time. Visualised the in and out flow of work and amount of context switching. Unhealthy levels of Work in Progress will result in low quality and slower delivery
- Throughput: The amount of work completed in a period of time. This can be used as a proxy for system productivity and the rate at which the team is creating customer value
- Quality Distribution: The composition of work done by the team in terms of new features (value), break-fix (failure), and important work that needed to be done but did not deliver direct customer value, e.g. tech debt (non-value). This helped validate whether the team was working on the right things
- Flow Efficiency: Visualising how efficiently work flows through the system of work, highlighting key congestion points and waste in the system. This provided the ability to examine and unblock constraints in the delivery flow
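Flomatika surfaces these metrics automatically, but it is worth understanding how they derive from raw work-item timestamps. Here is a minimal Python sketch of lead time percentiles and throughput; the dates and values are hypothetical, purely for illustration:

```python
from datetime import date
from statistics import quantiles

# Hypothetical completed work items: (committed_date, completed_date)
items = [
    (date(2022, 7, 4), date(2022, 7, 8)),
    (date(2022, 7, 5), date(2022, 7, 15)),
    (date(2022, 7, 11), date(2022, 7, 14)),
    (date(2022, 7, 12), date(2022, 7, 26)),
    (date(2022, 7, 18), date(2022, 7, 21)),
]

# Lead time: days from commitment to completion for each item
lead_times = [(done - start).days for start, done in items]

# Percentile view: the 50th/85th percentiles underpin a Service
# Level Expectation such as "85% of items finish within N days"
pcts = quantiles(lead_times, n=100, method="inclusive")
p50, p85 = pcts[49], pcts[84]

# Throughput: items completed within the reporting window
window_start, window_end = date(2022, 7, 1), date(2022, 7, 31)
throughput = sum(window_start <= done <= window_end for _, done in items)

print(f"Lead time p50={p50:.0f}d p85={p85:.1f}d, throughput={throughput}")
```

The percentile (rather than average) view matters because lead-time distributions are typically long-tailed; an SLE quoted at the 85th percentile is far more honest than a mean.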
Lead Time
There was a significant improvement in the team's ability to complete work within the Service Level Expectation (SLE). The new system implementation also increased visibility of the profile of work done by using different Jira work types, showing more clearly how the team's capacity was spent.
Less variance in delivery resulted in a shift from a low predictability to a high predictability lead time distribution which enabled the team to forecast future delivery with higher levels of confidence.
At the 50% confidence level, lead time improved by 20% (from 15 days to 12 days), and at the 85% confidence level it improved by 95% (from 280 days to 15 days).
Key drivers for this shift are improved shared understanding of work done and forward planning.
The new system implementation deliberately moved the commitment point further to the right of the workflow by introducing continuous refinement of the team's backlog of work. This ensured the scope of each work item was collectively understood by the team before being committed to delivery.
During Backlog refinement, the team now discusses and agrees on clear acceptance criteria for each work item. The work items that meet “readiness” criteria (aka Definition of Ready) are then prioritised for the next sprint. The team then commits to a batch of “Ready” work, which has been prioritised by the Product Owner during their fortnightly Sprint planning event.
This allowed work to flow more freely and quickly, as there is much less ambiguity about what needs to be achieved. The time lost waiting for clarifications and scope confirmations on in-flight work decreased significantly, resulting in reductions in lead time.
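The "readiness" gate described above can be thought of as a simple predicate applied at sprint planning. A sketch, with entirely hypothetical criteria and item names (a real Definition of Ready is agreed by the team, not hard-coded):

```python
from dataclasses import dataclass, field

@dataclass
class WorkItem:
    title: str
    acceptance_criteria: list = field(default_factory=list)
    estimated: bool = False
    blocked_by: list = field(default_factory=list)  # unresolved dependencies

def is_ready(item: WorkItem) -> bool:
    """Hypothetical Definition of Ready: agreed acceptance criteria,
    an estimate, and no unresolved external dependencies."""
    return bool(item.acceptance_criteria) and item.estimated and not item.blocked_by

backlog = [
    WorkItem("Export trades CSV", ["given/when/then agreed"], estimated=True),
    WorkItem("Broker API v2", blocked_by=["3rd-party spec"]),
]

# Only "Ready" items are candidates for the next sprint commitment
sprint_candidates = [i.title for i in backlog if is_ready(i)]
print(sprint_candidates)
```

The point is not the code but the commitment rule: work that fails the predicate stays in refinement rather than entering the sprint, which is what stops scope being discovered in-flight.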
Work in Progress
The new ways of working did not allow the high levels of WIP that had been tolerated previously. The team actively managed WIP between 40 and 50 items, with a cap of 56 items, even while the team size was expanded.
Prior to the WoW uplift, the number of items in progress had fluctuated between 150 and 200, with a maximum of 236 items in one instance.
Since the WoW transformation, the average number of days a work item spent in process until it was completed (50th percentile) has decreased from 163 days* to 5 days. This is a 97% decrease!
*The pre-uplift figure of 163 days was largely attributable to the high level of incomplete work the team had accumulated by constantly shifting focus from in-flight work to start new work in response to priority changes.
The team decided to leave behind their old backlog of 600+ items, since it was near-impossible to prioritise and contained obsolete items that no longer added any real value but still required energy and attention to maintain. We started a new backlog and brought across items from the old backlog as needed.
By adopting a Scrum delivery process and strongly adhering to it, the team is now committing to much smaller batches of work based on capacity. Product Ownership ensures priorities do not shift mid-sprint unless it is absolutely necessary and business critical. This is highlighted by the lower number of WIP and the decrease of WIP age.
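The connection between capping WIP and shorter lead times is Little's Law: for a stable system, average lead time = average WIP / average throughput. A quick sketch using rough midpoints of the figures in this article (directional only, since the real system was not stationary across the change):

```python
# Little's Law: avg lead time = avg WIP / avg throughput.
# The inputs below are rough midpoints of the article's figures
# (WIP 150-200 before, 40-50 after; 55 and ~100 items/month),
# so treat the result as directional, not exact.

def avg_lead_time_days(avg_wip: float, throughput_per_month: float) -> float:
    """Expected average lead time in days for a stable system (30-day month)."""
    return avg_wip / throughput_per_month * 30

before = avg_lead_time_days(avg_wip=175, throughput_per_month=55)
after = avg_lead_time_days(avg_wip=45, throughput_per_month=100)

print(f"before ≈ {before:.0f} days, after ≈ {after:.0f} days")
```

Even with throughput held constant, cutting WIP by roughly three-quarters would cut average lead time by the same proportion, which is why the WIP cap was the single most powerful lever in the redesign.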
In any WoW transformation, it is prudent to artificially constrain commitment levels for an initial period (usually two sprints, i.e. one month for a team running fortnightly sprints) to allow the process changes to be implemented and adopted successfully.
Post systems implementation, there was a change in the work item definitions to ensure that work items now represent slices of user value, with individual tasks (e.g. review tasks) rolled up into them. This reduced the overall number of work items being tracked through the system while giving greater visibility of how value flows through it.
Throughput
Post the WoW transformation, value delivery immediately rebounded (55 items completed in July, 100 items completed in August).
It is difficult to compare pre- and post-transformation delivery metrics given the changes in team size. Still, having this data available was useful for managing external stakeholders' expectations of the transformation and giving confidence about both the recovery and the performance trends.
Quality Distribution
With the new system implementation, we could now clearly see that over 50% of the work completed recently was addressing failure demand (break-fix), and over 40% of the work currently in process (WIP) was also actively addressing failure demand.
These numbers strongly indicated the need to assess and address our Quality Assurance process and quality standards, and to invest in automation. As a result, an Infrastructure and Automation team was launched to focus on automation and platform innovation: addressing platform debt and enabling delivery teams to release value frequently with higher quality and lower risk.
Flow Efficiency
Considering the amount of time a work item was active in the end-to-end workflow, from the point it arrives (arrival point) until it is completed (departure point), flow efficiency improved from 4.2% to 34%.
This is the proportion of time a work item is actively worked on while in the system, and improving it translates directly into faster delivery.
The industry average for flow efficiency in knowledge-work teams is around 15%, so through just the initial change this team has more than doubled the industry average. Flow efficiency is a crucial indicator of how the overall system of work is performing, as explained in this video on Flow Efficiency.
As a result of detailed End to End workflow design, we were able to visualise hidden waiting times, allowing the team to examine each stage of the workflow, make improvements and continuously gain on this key measurement by eliminating waste in the process.
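Flow efficiency falls out of the same state-transition data once each workflow state is tagged as active or waiting. A minimal sketch with a hypothetical workflow and one item's (invented) transition log:

```python
from datetime import datetime

# Hypothetical workflow states tagged as active (hands-on work);
# everything else is treated as waiting time.
ACTIVE = {"In Analysis", "In Development", "In Review"}

# One item's state-transition log: (entered_state_at, state_name)
transitions = [
    (datetime(2022, 8, 1, 9), "Backlog"),          # waiting
    (datetime(2022, 8, 8, 9), "In Analysis"),      # active
    (datetime(2022, 8, 9, 9), "Waiting for Dev"),  # waiting
    (datetime(2022, 8, 15, 9), "In Development"),  # active
    (datetime(2022, 8, 18, 9), "In Review"),       # active
    (datetime(2022, 8, 19, 9), "Done"),            # departure point
]

def flow_efficiency(log):
    """Active time divided by total time between arrival and departure."""
    total = (log[-1][0] - log[0][0]).total_seconds()
    active = sum(
        (nxt_at - cur_at).total_seconds()
        for (cur_at, cur_state), (nxt_at, _) in zip(log, log[1:])
        if cur_state in ACTIVE
    )
    return active / total

print(f"flow efficiency = {flow_efficiency(transitions):.0%}")  # → 28%
```

Here 5 of the 18 elapsed days were active, giving roughly 28%: the item looked "in progress" for nearly three weeks, yet most of that was queueing, which is exactly the hidden waiting time the end-to-end workflow design surfaced.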
- Streamlined ways of working have improved the predictability and efficiency of service delivery. The team now commits to "ready" work in small batches with far more confidence.
- Enhanced clarity and transparency has unlocked a more accurate, reliable view of delivery performance through Flomatika.
- The team is now placing more importance on flow of work and amount of completed work compared to perceived efficiency in doing a “lot of work” in parallel.
- Insights into current levels of product quality were a key driver in the organisation's decision to fast-track investment in an overall uplift of end-to-end quality practices, from developer on-boarding to continuous integration & delivery optimisation.
- The organisation is continuing their scaling journey, using the recently launched delivery team as a repeatable pattern for success.
Our sponsor says:
“Elabor8 has set us up extremely well with the foundation to move forward including scaling to multiple teams. This partnership has been instrumental to achieving what we have to date and when I look back at where we started 3 months ago, we have matured significantly as a business. Thanks to Elabor8’s coaching and mentoring of the team”
This article was originally published on Elabor8’s blog about Coaching for results with Service Delivery Performance.