Predictive Analytics: It Is Not Just About Maintenance

R. Keith Mobley, Shoreline AI; Mark Stubbs, Shoreline AI
Tags: predictive maintenance, continuous improvement, IIoT

Are you getting real benefits from your predictive analytics program? Most predictive analytics programs are replacements for vibration-based predictive maintenance programs focused on the singular mission of failure prevention. Since the introduction of microprocessor-based data collectors in 1980, fewer than three percent (3%) of these programs have produced verifiable savings that offset their recurring cost.

Overall, these programs have touted reductions in unplanned downtime, but in every case they increased the planned downtime required to prevent the perceived pending failures. Most have also increased overall maintenance downtime, along with maintenance labor and materials costs. While benefits may appear to exist, these programs have proven counterproductive, not because of technological limitations, but because of improper use of the technologies.
Three major factors have limited, and continue to limit, the benefits that predictive analytics could provide.
Predictive analytics is not only about maintenance, nor is it a mere replacement for predictive maintenance; its scope is far broader. Predictive analytics is applicable to any recurring activity, whether that involves a physical asset, a production system, or the finance department in your organization. In this article, we will limit the discussion to physical assets and how predictive analytics, predicated on operating dynamics, can enable you to gain and sustain optimal performance from your assets.
If you want optimal performance, reliability, and economic useful life from your assets, join us for this enlightening approach to true predictive analytics that works. Instead of a singular focus on failure prevention, sustaining the value stream and critical auxiliary assets at their design or optimal operating condition will not only reduce failures but also prolong economic useful life and reduce the organization's total cost of ownership. This is the only effective way to gain an optimal return on invested capital and sustain revenue generation.


One common factor driving the failure of these legacy programs is their fixation on the failure modes of capital assets instead of the causal factors behind them. A simple example is identifying a bad bearing and issuing a corrective action to replace it. Replacing the bearing without asking the obvious question of what caused it to fail creates a self-fulfilling prophecy: the failure is destined to recur.
Even if physical failures were the primary driver of downtime and high maintenance costs, this approach simply cannot resolve the problem. Until your focus shifts to the underlying causal factors that reduce reliability and economic useful life, and thereby increase operating costs and maintenance capital expenditures, the predictive analytics program is doomed to abject failure.
One example of a failure-based approach was a large, integrated steel mill that implemented a contract predictive maintenance program for the mill. Before the program began, unscheduled downtime and high maintenance costs plagued the mill. Six years into the program, the mill reported that unscheduled downtime had been reduced by 35%.
A successful result, right? Not when you look at the real change over those six years. True, their unscheduled downtime was lower, but their planned downtime — to replace the perceived bad bearings, gears, and other wear parts — increased by 65%.
The other notable change was in year-over-year maintenance cost. Total labor and material costs increased by more than 80%. The cost of replacement bearings rose from $2.4 million to $14.7 million; gears and other wear parts followed similar patterns.
Failures are not the norm. Assets designed to be dependable, operated consistently within design limits, and given adequate sustaining maintenance will remain dependable well beyond their design life. The problem with failure-driven predictive analytics is its failure to acknowledge that how we operate and maintain assets can become a self-fulfilling prophecy. We induce abnormal operating conditions that accelerate wear and then defer the sustaining maintenance that would at least mitigate the damage.

The Solution

Resolving the predictive-analytics limitations is not that difficult, at least from a technical point of view. Classic predictive technologies are not a limitation. When effectively used, they will provide the means to achieve positive results.
The steel mill is a good example. When their program shifted from failure-driven to true predictive analytics, the change was almost immediate. In less than a year, maintenance materials costs dropped to less than $2 million.
Using the bearing as an example, the new program focused on the causal factors behind the reported bearing failures and implemented corrective actions to remove them. Eliminating the causal factors immediately ended the chronic premature failures that had driven costs up, and replacement costs plummeted.
In the second year, the cost of bearings and other wear parts dropped even further. With maintenance costs down 60%, the mill consistently produced at a rate 30% higher than before the change in focus.
A successful predictive analytics program must consider the operating dynamics of the assets, systems, and processes that make up the plant. It must account for the inherent design limitations, modes of operation, and level of sustaining maintenance that define those dynamics.
Another example of the difference between failure-driven and true predictive analytics involves seven hundred slurry pumps in a refinery. The refinery had a well-established predictive maintenance program using portable data collectors. Technicians dutifully walked their routes daily, and the system reported when each of these pumps required maintenance to prevent imminent failure.
Over time, the cost associated with pump rebuilds grew to more than $10 million annually. On the books, the program was working and there was little reported downtime caused by pump failures. 
When the true operating-dynamics predictive analytics program replaced the predictive maintenance program, results changed dramatically. Because the new program looked for causal factors instead of stopping at failure modes, it became apparent that the mode of operation was the reason 11% of the pumps required major repairs annually.
Each pump was controlled by a remotely operated discharge valve. The analytics recognized that the control range was forcing the pumps to operate well outside best-practice recommendations. The resulting instability caused accelerated wear and severe damage to the rotating assembly and casing.
To correct the problem, the client changed the operating parameters to limit the control range to +/-10% of the pumps' best efficiency point (BEP), and the annual repair cost dropped to less than $1 million. Another benefit of predictive analytics was that it recognized the impact the old control range had on power consumption.
Instead of the 160 brake horsepower required at BEP, the pumps were drawing an average of almost 300 hp. The difference in annual power consumption was more than $7 million. In this one application, predictive analytics reduced year-over-year costs by more than $16 million.
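The power-cost arithmetic behind savings like these is straightforward to sketch. Below is a minimal Python illustration; the operating hours and electricity price are hypothetical assumptions for illustration, not figures reported by the refinery.

```python
HP_TO_KW = 0.7457  # one brake horsepower in kilowatts

def annual_power_cost(avg_hp: float, hours_per_year: float, usd_per_kwh: float) -> float:
    """Annual electricity cost for a pump drawing avg_hp on average."""
    return avg_hp * HP_TO_KW * hours_per_year * usd_per_kwh

# Excess cost of one pump drifting from a 160 hp BEP draw to a 300 hp average.
# 8,000 hours/year and $0.08/kWh are assumed values, not from the article.
excess = annual_power_cost(300, 8_000, 0.08) - annual_power_cost(160, 8_000, 0.08)
```

Multiplied across a large pump population, even a modest per-pump excess shows how off-BEP operation can dwarf the visible repair bill.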

How It Works

Applying predictive analytics to asset management is not that complicated; you simply need to think logically and clearly determine the reliability and sustainability requirements of the assets in your organization.
The following steps define the process:

Determine Each Asset’s Inherent Reliability

Reliability is determined by design. All activities after design must sustain that inherent reliability to gain optimal return on investment. This first crucial step determines not only the inherent weaknesses of each asset or system, but also the mode of operation and maintenance required to sustain inherent reliability and achieve optimal economic useful life of each asset. 
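One common way to make "reliability is determined by design" concrete is a series-system model: the designed-in failure rates of the components set a reliability ceiling that operations and maintenance can only sustain, never raise. Below is a minimal sketch; the component failure rates are hypothetical, not drawn from any asset in the article.

```python
from math import exp, prod

def exponential_reliability(failure_rate: float, hours: float) -> float:
    """Survival probability over `hours` under a constant failure rate."""
    return exp(-failure_rate * hours)

def series_reliability(component_reliabilities: list) -> float:
    """A series system survives only if every component survives."""
    return prod(component_reliabilities)

# A hypothetical pump train: motor, coupling, pump (failures per hour).
r_year = series_reliability([
    exponential_reliability(1e-5, 8760),   # motor
    exponential_reliability(5e-6, 8760),   # coupling
    exponential_reliability(2e-5, 8760),   # pump
])
```

The product form makes the design ceiling visible: improving maintenance on one component cannot compensate for a weak link designed into another.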

Physics of Failure

Clearly define all the failure modes and their causal factors for each asset or system. This must be more than a simple FMEA or a list of perceived failures. It must consider all deviations from best practices, such as the impact of the various modes of operation, both production and maintenance. Remember, only 17% of asset failures result from improper maintenance; the remaining 83% result from deficiencies within operations.
Understanding failures is important, but understanding the causal factors, or forcing functions, that result in those failures is crucial. If you know the failure mode, you might be able to anticipate it and recover quickly, but that does nothing to improve reliability or prevent a recurrence. Causal factors provide the knowledge required to prevent the initial occurrence, and all recurrences, of a failure.
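A failure-mode record that stops at the mode is exactly the trap described above; extending it to carry causal factors and the corresponding corrective actions keeps the analysis honest. Below is a minimal data-structure sketch; the asset tag, factors, and actions are hypothetical examples, not taken from the case studies.

```python
from dataclasses import dataclass, field

@dataclass
class FailureMode:
    asset: str
    mode: str                                            # what fails
    causal_factors: list = field(default_factory=list)   # why it fails
    corrective_actions: list = field(default_factory=list)  # how to remove the cause

bearing = FailureMode(
    asset="slurry pump P-101",                 # hypothetical tag
    mode="premature bearing failure",
    causal_factors=["operation far from BEP", "shaft misalignment"],
    corrective_actions=["narrow the control range around BEP",
                        "laser-align at the next planned stop"],
)
```

A record like this forces the question the bearing example above never asked: every mode must point at at least one causal factor and one action that removes it.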

What Parameters Identify Failure Modes and Causal Factors

Once you have an in-depth understanding of inherent reliability, failure modes and their causal factors, the next step is determining specific parameters, such as vibration or heat distribution, needed as input to a predictive analytics engine. Predictive analytics, like any other form of diagnostics, is dependent on the quality and completeness of the input data.
For example, input of high-resolution broadband and discrete narrowband vibration data is sufficient for effective analytics of a pump's mechanical condition, but it may not be enough to determine the causal factors that could provide early detection and correction of deviations that, left undetected, could result in failure. In most cases, these parameters will be a combination of process data extracted from existing monitoring and control systems and directly measured data that is integral to the predictive analytics engine. On dynamic assets and systems, the latter includes smart sensors, incorporating edge analytics, machine learning, and artificial intelligence, strategically located on the asset, process, or system.

Anomaly Detection Model

Combining the knowledge gained so far in this discussion, the last step in effective predictive analytics is developing an operating dynamics or physics-based model that can ingest continual data from each asset, system, or process; automatically analyze all the variables; identify all deviations from normal; identify the causal factors behind each deviation; and generate prescriptive instructions for corrective actions. Obviously, this operating dynamics analysis (ODA) model is the key to effective predictive analytics. Any experienced reliability engineer should be able to evaluate a specific asset at a point in its life cycle and do the same thing.
The difference is that there are not enough qualified reliability engineers or hours in the day to continually analyze all assets. Predictive analytics engines do not get tired or bored or distracted.
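At its simplest, the anomaly-detection step compares each measurement against a physics-based expected value and flags deviations beyond a tolerance band. Below is a deliberately minimal sketch; a real ODA model analyzes many variables at once and traces each deviation to its causal factors, and the 160 hp expectation and +/-16 hp band are hypothetical values chosen for illustration.

```python
def is_deviation(measured: float, expected: float, tolerance: float) -> bool:
    """Flag a reading that strays beyond the physics-based band."""
    return abs(measured - expected) > tolerance

# Hypothetical horsepower readings checked against a 160 hp expectation, +/-16 hp.
readings = [158.0, 161.0, 240.0, 165.0]
flags = [is_deviation(r, expected=160.0, tolerance=16.0) for r in readings]
```

The point of automating this check is exactly the one made above: the comparison is trivial for one asset at one moment, but only an analytics engine can run it continually across every variable of every asset.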
In conclusion, the true efficacy of predictive analytics programs lies not in their mere existence but in their strategic implementation. The prevalent focus on failure prevention, though well-intentioned, often falls short due to its inability to address underlying causal factors. Shifting towards a holistic approach that encompasses asset dynamics and operational intricacies yields tangible benefits, as evidenced by successful transitions from failure-driven to true predictive-analytics paradigms. 
By acknowledging the critical importance of sustaining the value stream and auxiliary assets, organizations can not only mitigate failures but also optimize performance and reduce total cost of ownership. Embracing predictive analytics as a tool for enhancing reliability, extending economic useful life, and minimizing operating costs signifies a paradigm shift towards proactive asset management strategies, ensuring optimal returns on invested capital and sustained revenue generation.