Preventing Plant Disasters with Data Management

Marc Laplante
Tags: CMMS and EAM, green manufacturing, IIoT, business management


Asset failure is a disruptive event for any industrial organization. According to LNS Research, asset failures are among the top three causes of accidents that result in safety incidents and pollution. They also halt production and damage equipment, resulting in significant financial losses. In fact, unplanned downtime costs industrial manufacturers an estimated $50 billion each year.

Organizations are under pressure to increase efficiency and reduce consumption, and reliability managers and executives must account for each asset that plays a role in plant operations – from the smallest valve to the largest turbine.

While software can run analytics and generate basic performance metrics, the knowledge that a well-protected $10 million turbine is 98% reliable isn’t useful when it’s a small valve that fails and shuts down the entire plant.
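The reason is structural: when components operate in series, overall plant reliability is roughly the product of the individual component reliabilities, so the weakest link dominates. A minimal Python sketch, using made-up reliability figures, shows why:

    # Minimal sketch: series-system reliability with illustrative (made-up) numbers.
    # If any component in the chain fails, the plant stops, so plant reliability
    # is approximately the product of the individual component reliabilities.

    components = {
        "turbine (well protected)": 0.98,
        "pump": 0.995,
        "small valve (neglected)": 0.90,
    }

    plant_reliability = 1.0
    for name, reliability in components.items():
        plant_reliability *= reliability

    print(f"Plant reliability: {plant_reliability:.3f}")  # ~0.878, dragged down by the valve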

Three key drivers of machine degradation are:

  1. All machines face multiple sources of degradation, including chemicals, fatigue, abrasion, and friction.
  2. The rate of each degradation mechanism varies depending on machine design, usage, and environment.
  3. Machines are complex, consisting of many components, and the different sources of degradation affect those components at different rates, as the sketch after this list illustrates.
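
A toy simulation makes the interaction concrete. The components, degradation mechanisms, and rates below are purely illustrative assumptions, not measured values:

    # Minimal sketch: several degradation mechanisms act on different components at
    # different rates, so the first component to reach its failure threshold, rather
    # than the "average" condition of the machine, determines when the machine fails.

    # Hypothetical degradation rates, in percent of health lost per 1,000 operating hours.
    rates = {
        "bearing":  {"fatigue": 0.8, "friction": 1.2},
        "seal":     {"chemical": 2.5, "abrasion": 0.6},
        "impeller": {"abrasion": 1.0, "fatigue": 0.3},
    }
    FAILURE_THRESHOLD = 40.0  # percent health remaining at which we call the component failed

    def hours_to_failure(component_rates, threshold=FAILURE_THRESHOLD):
        total_rate = sum(component_rates.values())        # percent health lost per 1,000 hours
        return 1000 * (100.0 - threshold) / total_rate    # hours until the threshold is reached

    for component, r in rates.items():
        print(f"{component:9s} -> ~{hours_to_failure(r):,.0f} h")
    # The seal reaches its threshold first here, even though the other components look healthy.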

The Early Journey: From Preventive to Proactive

Preventive maintenance (PM) emerged to detect and prevent degradation before failure occurred. However, the complexity and interaction of the three key degradation drivers made it difficult to decide when and where to intervene. With little more than educated assumptions to build schedules on, time-based PM became the best available option, soon followed by condition-based maintenance.

A range of predictive maintenance (PdM) technologies has since been developed to address this problem.

However, the “predictive maintenance” name is a bit of a misnomer. These technologies don’t predict failure; they detect and reveal signs of deterioration so maintenance teams can intervene before failure occurs. Each method focuses on different failure modes and is built around the sensor technologies that detect them and produce critical data.

Without data insights from connected sensor technology, plant operators have an insufficient understanding of the organization’s risks and how to manage them. The data needed to make decisions may be limited to a specific asset or facility. But the data becomes far more useful when compared across the entire company and against global averages, presenting a competitive advantage for companies that understand the risks and know how to control them.

Embracing Data and New Intelligence

With the adoption of data-producing sensors, organizations are implementing management systems to interpret massive volumes of data and run advanced pattern recognition tasks to detect equipment anomalies and degradation.

Management system detection techniques are practical; they analyze sensor data, build a model of “normal” operation, and alert operators when abnormal conditions occur. This happens in real time and can detect subtle variations caused by deterioration. With these technologies, organizations are seeing a return on investment by significantly reducing catastrophic failures on monitored equipment.
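
The core idea can be reduced to a few lines: learn what “normal” looks like from healthy sensor data, then flag readings that stray too far from it. The sketch below is a minimal statistical baseline on a single vibration signal with illustrative values; it is not a description of any particular product’s algorithm, which would model many correlated signals:

    # Minimal sketch of "learn normal, alert on abnormal" for one sensor stream.
    from statistics import mean, stdev

    def learn_baseline(normal_readings):
        """Fit a trivial model of 'normal': the mean and spread of known-healthy data."""
        return mean(normal_readings), stdev(normal_readings)

    def is_anomalous(reading, baseline, n_sigmas=3.0):
        mu, sigma = baseline
        return abs(reading - mu) > n_sigmas * sigma

    # Vibration amplitude (mm/s) during known-healthy operation (illustrative values).
    healthy = [2.1, 2.3, 2.2, 2.4, 2.2, 2.1, 2.3, 2.2]
    baseline = learn_baseline(healthy)

    for value in [2.3, 2.4, 3.9]:  # live readings
        if is_anomalous(value, baseline):
            print(f"ALERT: {value} mm/s deviates from normal operation")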

Did You Know?

"Harnessing big data analytics can reduce breakdowns by up to 26% and cut unscheduled downtime by nearly 25%."
Source: engineering.com

Technology has also transformed data mining, making it possible to extract insights from archived sensor data and enterprise asset management (EAM) system data to assist with work execution. Combining data mining with anomaly detection improves real-time diagnostics and time-to-failure forecasts. This is where asset performance management (APM) systems come into play.
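
One simple way to picture a time-to-failure forecast is trend extrapolation: fit a curve to an archived health indicator and project when it will cross an alarm limit. The sketch below uses a straight-line fit and illustrative numbers; production diagnostics rely on far richer models:

    # Minimal sketch of a time-to-failure forecast: fit a linear trend to an archived
    # health indicator (here, bearing vibration) and extrapolate to an alarm limit.

    def linear_fit(xs, ys):
        n = len(xs)
        mx, my = sum(xs) / n, sum(ys) / n
        slope = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / sum((x - mx) ** 2 for x in xs)
        return slope, my - slope * mx  # slope and intercept

    days      = [0, 30, 60, 90, 120]
    vibration = [2.0, 2.4, 2.9, 3.3, 3.8]  # mm/s, trending upward (illustrative)
    ALARM_LIMIT = 6.0

    slope, intercept = linear_fit(days, vibration)
    day_at_limit = (ALARM_LIMIT - intercept) / slope
    print(f"Projected to reach the alarm limit in ~{day_at_limit - days[-1]:.0f} more days")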

Most machine learning (ML) and artificial intelligence (AI) techniques are data-driven but aren’t designed for data analysis at this scale on their own; APM handles the data volumes that ML and AI alone can’t, using an ML-driven data integration function to collect billions of data points and quickly organize them into models that measure risk and prevent failures.
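
As a rough picture of what that integration step might look like, the sketch below rolls a few assumed data sources (recent anomaly alerts, corrective work orders, asset criticality) into a single risk score per asset. The field names and weights are assumptions for illustration, not the data model of any specific APM product:

    # Minimal sketch: combine several per-asset data sources into one risk score for ranking.
    assets = [
        {"id": "pump-101",    "anomaly_alerts_30d": 4, "corrective_wos_12m": 3, "criticality": 0.9},
        {"id": "valve-007",   "anomaly_alerts_30d": 1, "corrective_wos_12m": 6, "criticality": 0.7},
        {"id": "turbine-001", "anomaly_alerts_30d": 0, "corrective_wos_12m": 1, "criticality": 1.0},
    ]

    def risk_score(asset):
        # Likelihood proxy from recent alerts and repair history, weighted by consequence.
        likelihood = 0.6 * asset["anomaly_alerts_30d"] + 0.4 * asset["corrective_wos_12m"]
        return likelihood * asset["criticality"]

    for asset in sorted(assets, key=risk_score, reverse=True):
        print(f"{asset['id']:12s} risk={risk_score(asset):.2f}")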

In today’s industry, organizations are recognizing that connected assets need to feed information into an APM system to make proper use of the collected data. For example, a large chemical company in Saudi Arabia deployed an APM system and extended the average time between pipe failures from 172 days to more than 2,100 days, a 1,135% improvement.
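
Taking the reported figures at face value, the arithmetic is straightforward to check (the exact post-deployment number isn’t given, so 2,100 days is used as a floor):

    # Quick check of the reported improvement, using the figures quoted above.
    before_days = 172    # average time between pipe failures before APM
    after_days = 2100    # "more than 2,100 days" after deployment (lower bound)

    improvement_pct = (after_days - before_days) / before_days * 100
    print(f"Improvement: {improvement_pct:.0f}%")
    # Prints ~1121%; the reported 1,135% implies an average slightly above 2,100 days.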

Sharing Data for Better Prognostics

For organizations with end-to-end IoT (Internet of Things) environments, big data analytics can’t focus on just a few data sources. APM enables organizations to combine data silos and model the unique nature of industrial assets within their operational context.

This is one area where there are significant differences between the industrial and consumer sectors. In the industrial world, failures can be highly varied. Since there is currently no industrial equivalent to Google or Amazon to combine machine data across enterprises, the data pools needed to develop these kinds of analytics are limited to large enterprises and original equipment manufacturers (OEMs). While companies are sensitive about their operational data, many are beginning to understand that sharing their fault and failure data with others is extremely beneficial for the whole industry.

With this pool of data, the next wave of data analytics has immense potential. By analyzing this data, emerging fault patterns can be matched and compared to historical data from a “library” of previous similar cases. With this, automated diagnostics can provide a description of the problem and a forecast for the potential time-to-failure.
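
Conceptually, that matching step can be as simple as a nearest-neighbour lookup against a library of labeled historical cases. The signatures, diagnoses, and time-to-failure values below are made up purely for illustration:

    # Minimal sketch: match an emerging fault signature against a library of historical cases.
    # Signatures here are short, made-up feature vectors (e.g., vibration band energies);
    # real systems use much richer fault fingerprints.
    import math

    library = [
        {"diagnosis": "bearing wear",        "days_to_failure": 45, "signature": [0.9, 0.2, 0.1]},
        {"diagnosis": "misalignment",        "days_to_failure": 90, "signature": [0.3, 0.8, 0.2]},
        {"diagnosis": "lubrication failure", "days_to_failure": 20, "signature": [0.7, 0.1, 0.6]},
    ]

    def distance(a, b):
        return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

    def diagnose(observed_signature):
        best = min(library, key=lambda case: distance(case["signature"], observed_signature))
        return best["diagnosis"], best["days_to_failure"]

    diagnosis, forecast = diagnose([0.85, 0.25, 0.15])
    print(f"Closest historical case: {diagnosis}, ~{forecast} days to failure")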

Even for equipment not outfitted with sensors, larger data pools support better statistical analysis based on equipment in similar operating conditions. This allows engineers and operators to make more informed decisions when establishing a maintenance strategy because they’ll understand the true component failure rate. Current practice typically relies on OEM recommendations or industry studies that were often conducted years ago.
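
For instance, a pooled set of failure intervals contributed by similar equipment at several sites supports a direct estimate of mean time between failures and failure rate. The intervals in this sketch are illustrative:

    # Minimal sketch: estimate a component's failure behaviour from pooled intervals
    # (operating hours between failures) across similar equipment, instead of relying
    # on a generic OEM figure or a dated industry study.
    pooled_intervals = [8200, 9100, 7600, 10300, 8800, 9500, 7900, 8600]  # hours

    mtbf = sum(pooled_intervals) / len(pooled_intervals)  # mean time between failures
    failure_rate = 1 / mtbf                               # failures per operating hour

    print(f"Observed MTBF: {mtbf:,.0f} h")
    print(f"Failure rate:  {failure_rate:.2e} failures/h")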

Conclusion

For many companies, maintenance strategy development is a subjective, experience-driven process. The data needed to make objective decisions is often sparse, nonexistent, or difficult to access. Moving to condition-based approaches resolves much of this issue by basing activities on the current condition of an asset, but even these techniques still require significant expertise and leave room for improvement.

The potential for applying advanced data analysis to machine operations is promising, but challenges remain. It’s critical to have access to the right kind of data, and for many companies, this may mean being willing to share and trade data with other companies. As companies start sharing information and improving operations, they will realize that the benefits outweigh concerns about helping competitors.

Organizations understand that the direct cost of downtime is detrimental to business. In many cases, the indirect costs of this downtime, such as a damaged reputation, are just as disruptive as the direct costs, if not more so. Industrial operators must embrace a big data strategy that provides the best outcome for their assets if they wish to maintain profitability and growth. By identifying failure trends and characteristics early with data, industrial organizations improve overall asset reliability and cut costs, both short and long term.