5 Metrics You Probably Haven’t Thought About
Here at Flomatika we spend a lot of time thinking and talking about metrics and analytics. What should we include? Just as important, what should we exclude? What is the best way to present the metrics and charts to make it easier to gain insights? How do we make some of our metrics and charts easy to interpret, while still allowing deeper analysis where required?
If you’ve spent some time thinking about performance metrics and analytics, a lot of what we have included in Flomatika might already be familiar to you: Lead Time, Throughput, WIP. These are commonly used across the industry, with plenty of tools and plugins out there showing them, and some do quite a good job.
One of our aspirations here at Flomatika is to build a best-of-breed Analytics Dashboard. In pursuit of this, we asked ourselves: what analytics could we introduce that would really make a difference in how we approach delivery performance? The data already resides in your work management system; it’s a matter of slicing it and presenting it in a way that lets you gain additional insights.
On our journey we introduced a number of analytics we haven’t seen available elsewhere, and we want to share them with you! So without further delay, here are some analytics you probably haven’t thought about.
SLE & Target Met
SLE stands for Service Level Expectation, not to be confused with a Service Level Agreement (SLA). This is our way of assisting teams in shifting their thinking towards a more customer-centric approach.
Every team provides a service to a customer. That customer might be an external end user, or it might be another internal team. When a service is being provided, expectations come along with it. Quality of the service is one, but in this case we’re talking about speed of value: how soon can I expect that service to be delivered?
This expectation may be implicit. When I order a pizza, I expect it to arrive while it is still hot. What we want to do is make this expectation explicit, so both the customer and the service provider are clear on what that service level expectation is. Using the example below: when I raise a bug, I can expect the team to complete it within 3 days.
There is nothing quite like setting a target and hitting it to drive performance improvements: the sense of achievement, and probably the release of endorphins that goes along with it. It’s therefore important to show to what degree we are meeting this target. It’s not a binary yes or no. How close are we to meeting this expectation? How much further do we have to go to hit this target? In addition, the sense of progress towards the target is a positive feedback loop that teams can use to stay motivated.
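As a rough sketch of how a Target Met percentage could be computed (hypothetical lead-time data; not Flomatika’s actual implementation):

```python
# Hypothetical lead times (in days) for completed bug items
lead_times_days = [1, 2, 2, 3, 5, 1, 4, 2, 8, 3]

# The explicit Service Level Expectation: bugs completed within 3 days
sle_days = 3

# Target Met: the share of completed items that finished within the SLE
met = sum(1 for d in lead_times_days if d <= sle_days)
target_met_pct = 100 * met / len(lead_times_days)

print(f"Target Met: {target_met_pct:.0f}%")  # 7 of 10 items within 3 days: 70%
```

The single percentage answers “how close are we?” rather than a binary met/missed.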
We’ve also seen different approaches to how teams use target setting for performance improvements. One is to set the SLE to what they know the customer expects, then push to achieve those expectations. Another is to set the SLE to what they are currently able to provide (so it’s clear to customers what to expect), then aim to continuously lower it.
Predictability
Low variability in your data means higher predictability in your delivery, which in turn translates to higher trust and confidence in your teams’ ability to deliver.
Would you prefer a team that is slow but predictable, or one that is unpredictable but will occasionally deliver at speed? For enterprises, I would guess it’s predictability. With a predictable team, you can have higher confidence synchronising all the moving parts in your organisation. For example, having that marketing campaign launch at the same time the shiny new feature set is ready for release.
How is variability calculated? The full explanation is probably worth its own blog, but the quick version is in understanding the shape of the distribution on your histogram: is it fat-tailed or thin-tailed? I always struggled to remember which of these is the preferred distribution to have, and I assume it is equally difficult for other non-statisticians. So we’ve tried to simplify it for you in Flomatika by labelling Predictability as High or Low. Kanban University has a poster with more details if you’re interested in knowing more.
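One simple way to characterise tail shape, sketched below with made-up data, is the ratio of a high percentile to the median of the lead-time distribution; the 5.6 cutoff is purely illustrative, and this is not necessarily how Flomatika computes its High/Low label:

```python
import statistics

def tail_ratio(lead_times):
    """Ratio of the 98th percentile to the median of the lead-time
    distribution. A large ratio suggests a fat tail (low predictability);
    a small one suggests a thin tail (high predictability)."""
    qs = statistics.quantiles(lead_times, n=100)
    p98, p50 = qs[97], qs[49]
    return p98 / p50

thin = [2, 3, 3, 4, 4, 5, 5, 6, 7, 8]    # tightly clustered lead times
fat = [1, 2, 2, 3, 3, 4, 5, 9, 21, 60]   # long right tail

for name, data in [("thin-tailed", thin), ("fat-tailed", fat)]:
    ratio = tail_ratio(data)
    label = "High" if ratio < 5.6 else "Low"  # illustrative cutoff, not a standard
    print(f"{name}: ratio={ratio:.1f} -> Predictability: {label}")
```

The intuition: a fat tail means a few items take wildly longer than the typical item, which is exactly what erodes predictability.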
A byproduct of high predictability is that probabilistic forecasting techniques such as Monte Carlo simulation will produce lower variance in your forecasts, giving you higher confidence in whether that new feature set will be ready in time for your marketing campaign, or whether you should delay it.
Trend Arrows
You’ll notice we have a lot of trend arrows across Flomatika. We feel it is important to understand, at a glance, whether you are trending upwards or downwards. It provides a positive (or negative) feedback loop to the teams. Do we need to conduct deeper analysis to find out more?
- Is the trend what we are expecting?
- Are we heading in the right direction?
- Are the changes we introduced working in the desired manner?
- Are we now slipping in areas where we’ve previously been doing well because we’ve lost some focus?
These arrows are located in the Trend Analysis widgets (see below), in several views of the Summary Table on the Executive Summary page (see above), as well as in the Predictability widget on the Lead Time page (see above). They show the trend across numerous metrics, such as Lead Time, Throughput, WIP, Inventory Age, and SLE Target Met.
Key Sources of Delay
Where in your workflow are items spending the majority of their time? A great way to reduce your lead time is to understand which is the longest queue in your workflow and see how it can be reduced. There is only so much team members can do to ‘code faster’ or ‘test faster’ or ‘pay that invoice faster’. From an overall workflow perspective, the biggest opportunities lie in reducing the queues in the workflow.
This can then be adopted as a systematic approach to improving your lead time performance. Once your longest queue has been addressed, what is the next longest queue? Start focusing on reducing that. Then rinse and repeat.
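A sketch of how the longest queue could be found from status-transition events (the item names, statuses, and event format are all hypothetical, not Flomatika’s data model):

```python
from collections import defaultdict
from datetime import datetime

# Hypothetical status-transition log: (item, status entered, timestamp)
events = [
    ("BUG-1", "In Dev", datetime(2023, 5, 1)),
    ("BUG-1", "Ready for Test", datetime(2023, 5, 2)),
    ("BUG-1", "Testing", datetime(2023, 5, 8)),
    ("BUG-1", "Done", datetime(2023, 5, 9)),
    ("BUG-2", "In Dev", datetime(2023, 5, 3)),
    ("BUG-2", "Ready for Test", datetime(2023, 5, 5)),
    ("BUG-2", "Testing", datetime(2023, 5, 12)),
    ("BUG-2", "Done", datetime(2023, 5, 13)),
]

# Group transitions per item, then sum the days each item spent in each step
by_item = defaultdict(list)
for item, status, ts in events:
    by_item[item].append((ts, status))

time_in_step = defaultdict(float)
for transitions in by_item.values():
    transitions.sort()
    for (start, status), (end, _) in zip(transitions, transitions[1:]):
        time_in_step[status] += (end - start).days

longest = max(time_in_step, key=time_in_step.get)
print(f"Key source of delay: {longest} ({time_in_step[longest]:.0f} days)")
```

In this toy data the wait-oriented ‘Ready for Test’ step dominates, which is the typical pattern: queues, not active work, hold the lead time.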
Perhaps the ability to reduce some of these queues is outside the control of your team? Perhaps many other teams are experiencing the same constraints with similar ‘queue’ steps in their workflows. Flomatika can assist in building a case by providing the numbers to justify investment in broader initiatives, such as the introduction of CI/CD, test automation, cross-functional team roles, Scrum of Scrums, etc.
Key Bounce Back Step
This highlights the most common backwards movement in your workflow. A common example is an item that has moved to ‘Testing’, issues are found, and it’s moved back to ‘Development’. If this happens repeatedly, it’s an indication of a possible systemic problem. Is the developer not doing enough testing themselves and overly reliant on the tester to catch issues? Is there a recurring difference of understanding between the developer and the tester? Identifying the Key Bounce Back Step potentially exposes an opportunity for the team to introduce changes that improve the overall flow of their system.
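Detecting the key bounce back step amounts to counting backward transitions against the workflow’s ordering. A minimal sketch with invented statuses and histories:

```python
from collections import Counter

# Hypothetical workflow order and per-item status histories
workflow = ["To Do", "Development", "Testing", "Done"]
rank = {step: i for i, step in enumerate(workflow)}

histories = {
    "STORY-1": ["To Do", "Development", "Testing", "Development", "Testing", "Done"],
    "STORY-2": ["To Do", "Development", "Testing", "Done"],
    "STORY-3": ["To Do", "Development", "Testing", "Development", "Testing", "Done"],
}

# Count every backward movement between consecutive history entries
bounce_backs = Counter()
for history in histories.values():
    for src, dst in zip(history, history[1:]):
        if rank[dst] < rank[src]:
            bounce_backs[(src, dst)] += 1

(src, dst), count = bounce_backs.most_common(1)[0]
print(f"Key bounce back step: {src} -> {dst} ({count} times)")
```

Here ‘Testing’ back to ‘Development’ surfaces as the most frequent backwards movement, matching the example in the text.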
These are just some of the metrics we’ve been thinking about at Flomatika that we haven’t encountered on other platforms. Hopefully you have found this insightful and it inspires new ways to approach your performance analysis.
There are many more we have been thinking about, enough for a follow-up blog. If you enjoyed reading this, let us know so we can look to expedite Part 2: ‘More advanced metrics you probably haven’t thought about’!