10 Reasons Why Enterprise Technology is Failing In The New Product Development Game


This blog describes the ten most common challenges I hear from business and tech leaders working in enterprise organisations worldwide, through my role at Flomatika. It has become clear that the most wicked challenges they face are common across countries, industries and domains.

I believe that expanded awareness and understanding of these archetypal problems are vitally important in the work of transformation. The habits of our culture and ways of working can be challenged more effectively when we see them occurring. Analysing past behaviour is helpful, but it is not as powerful as observing the systemic behaviours in the present moment.

1. Long lead time

Technology has been continuously breaking down barriers between industries and markets, increasing competition and redefining what is possible. That has shortened the lead time available to satisfy customers' needs, with constant demand for more value, delivered more quickly.

Historically, most organisations have dealt with red tape, silos, internal politics, and regulations that weigh them down, slowing innovation and their ability to confront crises. Today, we are facing the most evident need we have ever seen for organisations in all industries to adapt and pivot quickly.

Organisations that want to survive this next normal will need to step up and start delivering value much more quickly. Reducing their lead time to its absolute minimum allows them to maximise the number of learning cycles, quickly adapting and converging on fit-for-purpose solutions in a world of ever-changing customer needs.

2. Lack of visibility across delivery groups and initiatives

Day in and day out, we hear business and tech leaders complaining about a lack of visibility into their initiatives.

“I have high visibility of my strategic initiatives up until they get into delivery. From there on, it is like a black box: no one knows what is happening inside of it or when the initiative will be customer-ready.”

The problem gets worse at scale, where there are usually dozens of initiatives in flight competing for priority and resources. Not to mention the never-ending fixes, enhancements, optimisations, enablers and technical debt coming from all sorts of places, with different sponsors and levels of urgency.

The work is also usually fragmented across different teams and specialities at scale, each potentially using different tools, taxonomies and workflows to represent and visualise their work. That makes it hard to create automated reports containing meaningful, actionable information.

3. Loose alignment between strategy and execution

With the inability to sense where the effort of hundreds of people working on software products is going, business and tech leaders see themselves unable to steer their programs, portfolios, or business units properly.

“Are we still working on shipping our key strategic drivers, or have we been taken (again) by the storm of unplanned work, enablers, and tech debt? I guess we'll know at the end of the PI.”

More often than not, we see organisations with way too much work in process, with only a fraction of it directly related to their strategic objectives.

4. Lack of a shared understanding and trustworthy data to measure performance and quantify improvements

Most organisations still don’t know or can’t agree on what to measure in terms of software delivery performance. 

Instead of measuring key performance indicators like Speed, Predictability, Productivity, Fitness Level, Customer Value and Flow Efficiency, most teams are still lost in measuring Story Points and Team Velocity as their fundamental measures of performance.

Organisations worldwide are making significant investments in transformations and continuous improvement initiatives but cannot quantify the performance improvements obtained by that investment. Instead, what most have is anecdotal and subjective.
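To make this less abstract, here is a minimal sketch (using entirely hypothetical work-item dates, not any particular tool's export format) of how lead-time-based measures such as speed and predictability can be derived directly from delivery data:

```python
import math
from datetime import date

# Hypothetical completed work items: (start_date, finish_date).
# In practice these would come from your delivery tooling.
items = [
    (date(2023, 1, 2), date(2023, 1, 9)),
    (date(2023, 1, 3), date(2023, 1, 20)),
    (date(2023, 1, 5), date(2023, 1, 12)),
    (date(2023, 1, 8), date(2023, 2, 1)),
    (date(2023, 1, 10), date(2023, 1, 17)),
]

lead_times = sorted((finish - start).days for start, finish in items)

def percentile(sorted_values, p):
    """Nearest-rank percentile: smallest value with at least p% of items at or below it."""
    idx = min(len(sorted_values) - 1, max(0, math.ceil(p / 100 * len(sorted_values)) - 1))
    return sorted_values[idx]

# "Speed": the typical (median) lead time.
speed = percentile(lead_times, 50)
# "Predictability": the gap between the 50th and 85th percentiles --
# the wider it is, the less reliable any single-date commitment.
p85 = percentile(lead_times, 85)
print(f"median lead time: {speed} days, 85th percentile: {p85} days")
```

Measures like these are objective, repeatable, and comparable across teams, which is exactly what Story Points and Velocity are not.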

5. Lack of a Governance Framework that supports both traditional and Value-Stream-based delivery models

Back in the day, when software products were still produced through projects, governance was established based on the iron triangle - scope, cost, and time.

As organisations are forced to become more adaptable and responsive, they are starting to adopt more of a product mindset, which, in some cases, leads them to question the need for projects at all. As a result, PMOs are being pressured to justify their existence. In addition, the relentless pursuit of cost-cutting, profit and growth makes any function that does not directly add value a potential target for downsizing.

Besides, the move to product teams brought an interesting governance challenge: the parameters used to define success (the iron triangle) no longer exist. As organisations continue to transition to long-lived product teams, they end up with a fixed cost for the year but open time and scope. Business leaders now know how much they're spending but are uncertain about what they are getting out of it.

In a healthy Value Stream, you could expect some breakthrough features that align with strategic drivers, organic fixes, enhancements and optimisations, and a better and more resilient Asset (code base).

Conversely, business leaders could end up with teams spinning their wheels, several irrelevant product features, and a weakened, more fragile Asset that stifles the teams' ability to produce value.

Some leaders are flying blind because of the lack of visibility and alignment described in points 2 and 3. When that's the case, they usually only gain enough visibility when it is too late to change course.

6. Lack of ability to forecast and plan with confidence

Because Delivery Leaders and their crews often lack a deeper understanding of how Value Streams (queueing systems) work, enterprise organisations are still not leveraging their data to forecast software delivery with a much higher confidence level.

Instead, they are still lost in the "for this User Story, how many story points do you reckon?" type of approach, which is based entirely on the estimator's intuitive thinking – fast, automatic and emotional.

That process of estimating rests on simple mental rules of thumb ("heuristics") and thinking biases, so estimations end up heavily based on impressions, feelings and inclinations. Moreover, teams focus on what is "estimable", which is the effort, not the duration. However, we now know that effort usually represents less than 20% of the total duration, due to the many queues spread throughout the system. So, no matter how good your teams are at estimating, the confidence will remain predominantly low.
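To make that ratio concrete, here is a minimal sketch (with hypothetical per-item numbers) of how flow efficiency – active working time as a share of total lead time – can be computed:

```python
# Flow efficiency: the share of an item's total lead time spent actively
# being worked on, versus waiting in queues. Data below is illustrative.
items = [
    {"active_days": 3, "lead_time_days": 21},
    {"active_days": 2, "lead_time_days": 15},
    {"active_days": 5, "lead_time_days": 18},
]

total_active = sum(i["active_days"] for i in items)
total_lead = sum(i["lead_time_days"] for i in items)
flow_efficiency = total_active / total_lead

# With these numbers, roughly 19% -- effort is a small slice of duration.
print(f"flow efficiency: {flow_efficiency:.0%}")
```

When flow efficiency sits below 20%, even a perfect effort estimate tells you little about the calendar duration, because the bulk of the time is queueing.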

Think about that:

  ➣ Plans are vital for coordinating strategy execution. Plans require expected dates.

  ➣ The situation of everything involved will inevitably change. Therefore, expected dates will inevitably change accordingly.

  ➣ Plans that reflect current reality are a powerful strategy execution tool.

  ➣ Plans that don't reflect reality can be extremely harmful.

  ➣ To keep plans more useful than harmful, expected dates must always reflect reality, meaning they need to be re-evaluated based on the latest events.

  ➣ Estimation is one way of getting expected dates – usually an expensive and ineffective method, often inaccurate and quickly out of date.

  ➣ Forecasting is another way of getting expected dates. It is executed by code in seconds and can be re-run on demand as often as you like, providing leaders with a much higher confidence level in what to expect, in real time.
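As an illustration of what such a forecast can look like, here is a minimal Monte Carlo sketch (hypothetical throughput history; not any particular platform's actual algorithm) that derives an expected duration from data rather than from estimates:

```python
import random

random.seed(7)  # fixed seed so the sketch is reproducible

# Hypothetical weekly throughput history: completed items per week,
# taken straight from delivery data rather than from estimates.
weekly_throughput = [3, 5, 2, 6, 4, 3, 7, 4, 5, 3]

def forecast_weeks(backlog_size, history, trials=10_000):
    """Monte Carlo forecast: weeks needed to finish `backlog_size` items.

    Each trial replays randomly sampled weeks from history until the
    backlog is exhausted; the 85th percentile of all trials gives a
    duration we can state with roughly 85% confidence.
    """
    results = []
    for _ in range(trials):
        remaining, weeks = backlog_size, 0
        while remaining > 0:
            remaining -= random.choice(history)
            weeks += 1
        results.append(weeks)
    results.sort()
    return results[int(0.85 * trials)]

print(forecast_weeks(40, weekly_throughput), "weeks at 85% confidence")
```

Because it runs in seconds, a forecast like this can be refreshed every time the data changes, keeping expected dates aligned with reality.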

7. Excessive investment in manual reporting or building in-house bespoke reporting solutions

Given that information in Enterprise Technology is usually fragmented across different tools, taxonomies, and workflows, business and tech leaders need to rely on an army of coordinators to obtain meaningful information. That includes:

  • Progress of strategic initiatives and confidence levels
  • Dependencies and risks
  • How capital is being deployed against the demands of:
      ◦ New features
      ◦ Enhancements and optimisations
      ◦ Defects and incidents
      ◦ Enablers and technical debt
      ◦ Risk and compliance

On top of being an expensive and cumbersome way of obtaining vital signals from product delivery, the information business and tech leaders get usually arrives late and wrapped in the cognitive biases of those preparing the narrative. Just to illustrate, here are some of the biases enveloping your reports:

[Image: System 1 cognitive biases, from Thinking, Fast and Slow]

Another enterprise fallacy plays out when leaders get tired of this: they either invest in a Value Stream Management platform or go all in on building their own bespoke reporting solution, using something like Power BI or Tableau. When that's the case, they usually start with the naive assumption that it is just a project – build it once and you have it. Soon they realise they have created a new product outside the core business, one that needs continuous care. Suddenly, a $1M annual investment is going towards evolving and maintaining that bespoke product every year.

8. Productivity with a low ratio of value demand

Imagine classifying all work executed by your product and engineering teams over the last year into one of the following three buckets:

Value Demand

  • Work that adds direct and immediate value to your customers
  • Work that your customers would be willing to pay for
  • New features and enhancements that move your product/organisation towards its North Star

Non-value demand

  • Activities that consume resources but bring no value to the end customer
  • Work you do to go faster in the future, improve quality or become more reliable
  • Secondary needs that arise as a consequence of not meeting the primary needs of customers

Failure demand

  • Work that your customers wouldn’t be willing to pay for
  • Demand caused by a failure to do something, or to do something right, for the customer. Customers come back, making further demands and unnecessarily consuming the organisation's resources, because the state of the product/service they have received is inappropriate or ineffective.

Data shows that commonly, out of everything produced by product and engineering teams, just a fraction is classified as “Value Demand,” aligning them to strategic drivers or adding direct and immediate value to customers and stakeholders.
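A rough sketch of what that classification exercise can look like, using made-up proportions chosen only to illustrate the pattern:

```python
from collections import Counter

# Hypothetical year of completed work items, each tagged with a demand
# type at classification time (labels and counts are illustrative).
completed_items = (
    ["value"] * 18          # new features and enhancements tied to the North Star
    + ["non-value"] * 46    # enablers, tooling, secondary needs
    + ["failure"] * 36      # defects, incidents, rework
)

counts = Counter(completed_items)
total = len(completed_items)
for demand_type, count in sorted(counts.items()):
    print(f"{demand_type}: {count / total:.0%}")

# In this illustration, only 18% of capacity went to value demand.
value_ratio = counts["value"] / total
```

Even a crude bucketing like this tends to be eye-opening: it turns a vague sense of "we're busy" into a measurable ratio leadership can act on.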

9. Hidden sources of waste and delays

Thanks to Taiichi Ohno and the Toyota Production System, the manufacturing industry learned how to identify and reduce waste and delays long ago.

[Image: the 7 wastes of Lean Manufacturing]

However, when it comes to the knowledge-work industry, we are still in the early stages of that journey. While there is a shared understanding of critical watch-outs in manufacturing, I’m not sure that is as clear when producing software products.

Unfortunately, in knowledge work, most sources of waste and delay are hidden, unexplored, or not even perceived as a problem at all. Some of these include:

  • Excessive WIP causing unproductive levels of context switching and interruption to flow.
  • Blockers and impediments
  • Unfinished work going back to the backlog
  • Stale work hanging around along with WIP
  • Undelivered code or features
  • Bounce-backs in the process with the work moving back in the flow
  • An excessive number of queues caused by long phased approaches, with several handovers and a breakdown of activities, where each stage depends on the deliverables of the previous one and corresponds to a specialisation of the process.
  • Producing features that are not tightly related to solving the problem your product was designed for, and that potentially nobody uses
  • Premature commitments 
  • Discarding items after they have been started
  • Unnecessary complex algorithms solving simple problems
  • Excessive over-engineered architecture and infrastructure
  • All failure demand
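The first item on that list – excessive WIP – can be quantified with Little's Law, which relates average lead time, average WIP, and average throughput in a stable system. A minimal sketch with hypothetical numbers:

```python
# Little's Law for a stable system:
#   average lead time = average WIP / average throughput
# The sketch shows why cutting WIP shortens lead time even when the
# team completes exactly the same number of items per week.
def average_lead_time_days(avg_wip, throughput_per_week):
    return avg_wip / throughput_per_week * 7

# Hypothetical team finishing 5 items per week:
print(average_lead_time_days(30, 5))  # 30 items in flight -> 42-day lead time
print(average_lead_time_days(10, 5))  # 10 items in flight -> 14-day lead time
```

Same team, same throughput – but limiting WIP cuts the lead time (and the context switching that comes with it) to a third.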

10. Broken processes with teams continuously operating in a state of urgency

When most of the work getting done has some sort of urgency attached to it, your standard process is broken. When that's the case, work following the standard process is frequently delayed, goes stale, or is discarded after a while. For the work that does eventually get completed, expect a disproportionate lead time.

Customers or business areas dealing with Value Streams operating in a state of urgency usually report feeling that they are dealing with an unreliable system with low predictability and quality.

Tech leaders running product delivery teams in that scenario usually report a solid feeling of failure. Some describe it as Imposter Syndrome. Others describe receiving punches from all directions, with people energetically telling them that such and such has to get done by X date, no matter what, and to make it happen. Finally, some of these leaders try, without much success, to roll up their sleeves and act as individual contributors instead of working on systemic levers.

Individual contributors in that context report low self-esteem and chronic stress. They say, “no matter how hard you work, there's always an unrealistic amount of work you were already expected to have completed”. It feels hopeless, like Sisyphus rolling his boulder uphill. In a desperate attempt to satisfy expectations and fit in, they work 80-100 hours per week, compromising their families and their health (physical and mental).

Finally, Enterprise Tech eventually loses its very best talent and settles for working with B teams, reinforcing the loss of agility, responsiveness and innovation.

In the end, everyone loses.

Be one step ahead in the Product Development Game. Explore Flomatika.

Schedule a Demo
