At a Glance
- Executives who invest in big data innovations to improve the reliability of their manufacturing facilities are often disappointed by the returns.
- Successful asset health programs focus not just on the technology, but also on high-impact problems, root causes, frontline expertise and external partnership risk.
- The litmus test for leaders is scalability, not whether a solution is state of the art.
For executives at industrial companies, this should be a golden age ushered in by predictive analytics and advanced systems. Digital technology has outsize potential to improve the running of manufacturing facilities, raising the quality of their output as well as cutting costs. So why has the revolution stalled?
Take the vital task of optimizing the health of a plant’s assets so that they run reliably. Manufacturers, chemicals groups, utilities and others are spending heavily on digital tools in this area. From simple pumps to complex robots, they are collecting and analyzing greater volumes of asset health data in a bid to streamline maintenance and cut costly downtime.
Executives need the gains from this branch of industrial analytics to counter a slowdown in productivity growth. Across G7 economies, growth in real output per worker fell from 12% in 2000 to 3% in 2016 (see Figure 1).
Despite an abundance of capital, global productivity growth is stalling
Yet many executives who sought a boost from asset health are not getting the expected return on investment from their sensors, datasets and dashboards. Maintenance might still be a rolling firefight for some; others might have missed out on the potential value of the latest technology when a prototype solution failed to scale.
These doomed asset health initiatives have tended to blow up in five ways, each leaving a different battle scar on the leaders who nurtured them. Some executives deploy technology for technology's sake. Others come undone when frontline workers reject their new tools. A third group gives up too much control to the technology vendor, while a fourth gets burned because their solutions have a one-size-fits-all, high-tech bias. The fifth failure mode will be familiar to many: getting trapped in endless, small-scale piloting and experimentation.
Through their various frustrations, these war stories point to a better approach to asset health. What links them is an excessive focus on technology—understandable, given the pace of innovation in analytics. Yet some leaders are already getting results with a more pragmatic, business-focused perspective. They recognize that asset health is about solving persistent, high-value business problems, not just the tech. And they’ve found ways to avoid all five types of battle scars.
Identifying problems with real impact
The right business problem might not involve the thing that keeps bosses awake at night; it might not match the solution touted by a technology provider, either. Successful project leaders remember this when they frame asset health problems and calculate the value of a solution. They may take vendor advice with a grain of salt, but they also question their own assumptions.
These executives draw on a deep understanding of production pain points, downtime, maintenance costs and schedules, sector benchmarks and key business processes. They distinguish between the obvious symptoms of a problem (a series of breakdowns caused by a faulty pump, say) and the root causes (which might be inflexible work scheduling or a lack of information on the pump’s failure curve). They focus their asset health plan on these root causes, not the symptoms.
One petrochemicals company followed this playbook when it tackled its heavy production losses and increasing maintenance costs. Initially, it identified one group of assets as a high-impact area. Their reliability was 3 percentage points lower than the norm; the frequent breakdowns forced the company to buy in energy at high cost, while pushing up maintenance costs.
It then created a monitoring solution that distilled root causes of failure into key metrics. Within three months, this identified multiple subcomponents about to fail. Managers avoided this eventuality through preventive maintenance, saving enough money to cover the costs of the project up to that point. They then went on to target benefits equivalent to 13% of the addressable costs associated with the asset group.
Crucially, the project’s leaders did not choose technology that was too advanced for the plant’s hitherto patchy and reactive approach to asset health. They picked tools that stretched its capabilities, but not to the breaking point—a common theme across asset health success stories.
Deploying and engaging key staff for mainstream adoption
Experienced engineers make asset health solutions work better. They know where the high-impact problems lie and their lucid questioning demands the attention of the full team, sharpening the focus of the project. However, promotions, transfers and retirements can make such engineers scarce, eroding the communal pool of knowledge for any given asset. Successful asset health leaders still find a way to include these experienced voices, and not just when framing the issue. This involvement codifies institutional knowledge in the new digital tools, making the company more resilient to future personnel changes.
Frontline staff can act as an anchor, preventing their employer from drifting on the latest technological tide. For instance, the petrochemicals company that was tackling heavy production losses ran two asset health proofs of concept. One, an analytical engine from a cutting-edge disruptive vendor, was off the shelf—and off beam. The second was led by an in-house engineer, who saw that the technology needed to work, not shine. It nailed a high-impact problem, paying for itself in less than a year.
Another benefit of involving existing teams prominently: You create evangelists for mainstream adoption when it comes to scaling up a solution. This “sponsorship spine” is vital to avoiding frontline rejection. But people only use tools that they trust—and they only trust them when they understand what is going on under the hood.
The best way to build that understanding varies according to the preferred solution. A fully custom-designed tool? Some executives have found that involving frontline operators in the design team works well in this situation. For off-the-shelf solutions, however, it makes sense to ask frontline operators to help define the specification instead, while also giving them a say in the ultimate choice of tool and how it is customized.
At one utility company that involved frontline staff and key engineers in the development of an advanced analytics tool, the solution evolved from a narrow focus on predictive asset maintenance to become a broader mobile application that also integrated safety and dispatch information. The front line used the tool to track outages, and dispatchers used it to ensure the right people got to the outage at the right time—saving foremen about 6,000 hours a year and improving crew safety.
Choosing the right vendor—while retaining your independence
Industrial groups often worry that new technology will not play nicely with their existing systems. But their fears about outside vendors of analytics solutions go beyond this single concern.
They know that one external asset health partner can rapidly become two, three, four or five. Before long, vendor proliferation causes costs to rise as the company's negotiating muscle withers and the additional meetings mount up. All the while, executives struggle to gain a single overview from their disparate analytics tools (unless they recruit yet another partner to build a dashboard for their dashboards).
That’s not the real nightmare scenario for executives, though. The worst case is being tied into a platform that siphons off valuable data into a black box—and then shuts them out of their own insights if they have the temerity to end the subscription. And that’s assuming ending the subscription is even an option: The layers of applications that often get built on top of an analytics platform can make withdrawal a practical impossibility.
The more successful industrial analytics customers guard against this by starting with a view that their data is an asset to be developed, monetized and guarded. They structure vendor contracts to minimize lock-in risk, while creating roles focused on smarter industrial analytics buying. They create flexible platforms that can support multiple options for their applications. These could be fully fledged applications from a partner vendor, APIs to third-party “plug and play” functionality, or applications built by an in-house data science team.
A large oil and gas operator followed this playbook when it developed its own in-house industrial analytics applications to optimize yield, improve asset health, eliminate bottlenecks and secure other gains. It pieced together a combination of components from different vendors to retain flexibility, while also structuring its IT architecture to support multiple APIs and link seamlessly with existing operational systems at each of its locations.
To reduce unplanned downtime and lower maintenance costs, another midsize oil and gas player ran pilots involving two top OEM vendors concurrently, with the aim of giving the strongest performer an exclusive contract. It also partnered with several pure-play analytics firms to assess further options. This diverse vendor engagement allowed the company to reflect more deeply on existing capabilities and gaps, and become a “smarter buyer” of asset health services.
Tailoring analytics to the reality of your plant, not the purist view
Prediction and prescription are the buzzwords in industrial analytics. But what if your infrastructure is not ready for complex solutions that aim to predict problems before they happen? What if you’ll get satisfactory payback from a simpler improvement, such as a shift from reactive maintenance to real-time monitoring of assets and processes?
Many executives feel pressured into a purist overhaul that reinvents an outdated plant from the foundations up so that cutting-edge tools can be deployed. Such overhauls often yield low returns, and over time the attention of company leaders can wander to easier wins. The alternative is to let asset health ambitions waste away and simply maintain the unhappy status quo—or at least that's how it can feel.
The choice does not have to be this binary. By using existing sensors and data more shrewdly, for instance, managers of less advanced plants can gain extra insight into when assets will fail. Smaller steps such as these can also build capacity for bigger asset health leaps.
Executives can use a pyramid scale to understand their company’s current asset health “maturity,” from the least mature “break-fix” state at the bottom to a state-of-the-art pinnacle of model-automated “prescriptive maintenance” (see Figure 2).
Companies that do asset health well move each of their assets up the maturity pyramid to maximize returns
To work out whether it makes financial sense to move up the maturity pyramid, managers weigh a range of factors, including how widely sensors have been rolled out, how much sensor investment is still required and the time it will take to generate actionable data. They also consider the sums needed to fund the necessary personnel, including data scientists.
Industrial groups can reap substantial gains from moving up this maturity pyramid. A typical asset health transformation that progresses as far up as predictive maintenance can deliver a 70% to 75% reduction in the frequency of breakdowns, while cutting downtime by 35% to 45% and maintenance costs by 25% to 30%.
But it isn’t necessary to move up to prescriptive or predictive maintenance to generate returns on investment. For instance, a North American mining company with low data and maintenance maturity started its path toward prediction with a more basic asset health pilot. This work made key asset health alerts more automated, while also generating data that will become the backbone for a future push into predictive maintenance.
The scale challenge: Thinking big while still small
Leadership teams can do all of the above right, and still fall short when the solution, the supporting systems and the capabilities can’t scale up. This is why scaling remains the biggest preoccupation for managers at the industrial analytics front line.
It is easy to prove a concept when extraordinary talent and financial resources are concentrated on a topic. To reach scale, however, that concept has to work with all teams as well as the A-team—and in normal funding conditions. Executives running a proof of concept for a single asset or area are also handling just one set of systems, technologies, failure mechanisms and people. A broader rollout has to be coordinated across a much more disparate landscape, obliging executives to make big investments over years.
That requires more standardized business processes and data structures. One vendor often becomes several, further complicating the situation. Specialist technical personnel also become scarce, as do the resources needed to support them.
In Bain research, industrial groups reported that they were becoming more realistic about the challenges of scaling across the broader Internet of Things, not just asset health projects (see Figure 3).
Industrials are becoming more realistic about the challenges of scaling
Companies with a strong implementation record have factored scaling into the earliest decisions on an asset health project. They ask themselves if the overall program they are creating is repeatable. But they also ask which program elements (data ingestion, say) are repeatable and could be accelerated to yield a faster solution.
Staffing choices can also tip the odds in your favor. Successful industrial groups often use psychometric testing to identify the strongest project leaders. They also staff early-stage teams with a cross section of workers from areas that would be affected by a full rollout. If prototyping a pumps solution, for instance, they include people from fans and motors, too.
A large chemicals group reaped the benefits of this approach when it explored the scalability of a set of new industrial analytics use cases designed to support its overarching digital strategy. By involving a broad cross section of workers and asking finely calibrated questions about the scalability of the prototype and associated processes, it identified a larger-than-expected gap in digital maturity between sites, and adjusted its rollout accordingly.
The sign that the revolution is back on track
Computing power, sensors and data storage are becoming ever cheaper. Internet connectivity and data analysis techniques are improving rapidly. Together, these trends mean that the momentum behind asset health initiatives will only increase.
As the technology becomes more ubiquitous—even commoditized—it is vital for industrial groups to deploy it now in a way that will develop their internal capabilities in asset health and other branches of analytics.
Scaling can be a great measure of how far they have progressed on this journey: Leaders can be confident that their capabilities are maturing when their teams are moving smoothly from proof of concept to widespread rollout. If they can see that happening—well, then the revolution might be back on.
Joachim Breidenthal and Edel O’Sullivan are partners in Bain & Company’s Energy & Natural Resources practice. Joachim is based in Johannesburg and Edel in Washington, DC.
The authors would like to thank Anna-Marie du Plooy and Daan Kakebeeke for their contributions to this brief.