Plant & Works Engineering
The cost of predictive maintenance scepticism
Published: 08 February, 2021

Industrial equipment operators have increasingly turned to predictive maintenance to prevent catastrophic system failures, anticipate and schedule repairs, and minimise overall disruption to operations. Done well, predictive maintenance should generate significant cost savings and protect the company’s bottom line. Philipp Wallner* reports.

Yet, some engineering teams are sceptical about the extent to which predictive maintenance delivers measurable benefits. Organisations can find it difficult to determine a clear return on investment. Many are not confident they have the right data, or the right amount of failure data, for a functional algorithm to support predictive maintenance.

Predictive maintenance is often incorrectly seen as a ‘black box’: a closed application that receives operational data from machinery and uses an algorithm to predict when it will fail. In fact, algorithms that successfully detect and predict equipment failures must also be fed with, and trained on, domain knowledge.

So why doesn’t this always happen today, and how can predictive maintenance change to overcome scepticism? Often those involved in predictive maintenance are data scientists with mathematical backgrounds, or engineers who don’t possess data science skills. But the individuals who will be most successful with predictive maintenance will be those in companies that make an active effort to bridge the gap between data science and engineering. By having both sides working together, there is a great opportunity to generate high quality equipment failure data that can better train predictive maintenance algorithms.

A key part of the solution is software simulation tools, which can simplify the processes involved in predictive maintenance and bring both sides together. Such tools are designed to allow engineers less experienced in implementing predictive maintenance to carry out a variety of techniques to collect data and train algorithmic models. To do this, organisations need to understand what failure data looks like. This is a challenge because equipment doesn’t break easily or often, and it is expensive and inefficient to run equipment with the intention of breaking it just to collect failure data. So, an important benefit of these tools is that they ensure much less real-world data is required to train algorithms properly.
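To make this concrete, the approach can be sketched in a few lines of code. The example below is purely illustrative and not tied to any particular vendor’s tooling: it simulates vibration-style sensor windows for healthy and faulty machine states (the fault signature, signal shapes, and threshold-based detector are all invented for illustration), then trains a minimal detector entirely on simulated data, so no machine has to be run to failure.

```python
import math
import random
import statistics

random.seed(42)

def simulate_window(faulty: bool, n: int = 256) -> list[float]:
    """Simulate one window of vibration readings.
    A fault injects a high-frequency component, loosely mimicking bearing wear."""
    window = []
    for t in range(n):
        base = math.sin(2 * math.pi * t / 32)            # normal rotation signature
        noise = random.gauss(0, 0.1)                      # sensor noise
        fault = 0.6 * math.sin(2 * math.pi * t / 5) if faulty else 0.0
        window.append(base + noise + fault)
    return window

def rms(window: list[float]) -> float:
    """Root-mean-square energy, a classic condition-monitoring feature."""
    return math.sqrt(sum(x * x for x in window) / len(window))

# Generate labelled training data entirely from the simulator.
normal_rms = [rms(simulate_window(False)) for _ in range(200)]
faulty_rms = [rms(simulate_window(True)) for _ in range(200)]

# A minimal "model": a threshold midway between the two class means.
threshold = (statistics.mean(normal_rms) + statistics.mean(faulty_rms)) / 2

def predict_fault(window: list[float]) -> bool:
    return rms(window) > threshold

# Evaluate the detector on fresh simulated windows.
test_set = [(simulate_window(f), f) for f in [False, True] * 50]
accuracy = sum(predict_fault(w) == f for w, f in test_set) / len(test_set)
print(f"accuracy on simulated data: {accuracy:.2f}")
```

In practice the simulator would be a physics-based model of the actual equipment and the detector a trained neural network or machine learning model, but the workflow is the same: domain experts encode failure modes in the simulation, and data scientists train against the synthetic data it produces.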

Software simulation models can be relied on to represent how physical apparatus functions in the field across a wide range of scenarios and conditions. Examples of companies that have used software in this way include:

Safran – the high-tech industrial group used simulation models to train a neural network used to actively monitor and predict anomalies in a hydraulic press. The company leveraged simulation models to create data representing faulty machinery allowing the team to combat the problem posed by a lack of real-world failure data.

Mondi – the packaging and paper goods manufacturer developed a predictive maintenance application with software to identify potential equipment issues. The system was running within a matter of months, even though the company lacked data scientists with expertise in machine learning.

Baker Hughes – the oilfield services company applied software to develop a pump health monitoring solution using data analytics for predictive maintenance. As a result, the company reduced equipment downtime costs by up to 40 percent while also decreasing the number of onsite trucks.

All of these cases show that when data science and engineering teams collaborate, they can produce enough failure data to train predictive maintenance algorithms sufficiently and cost-effectively. Software simulation tools simplify the process, make predictive maintenance algorithms more powerful and reduce the overall amount of data needed to train them properly.

Future looks bright

Predictive maintenance is set to evolve greatly through the rest of the decade. At present, most predictive maintenance algorithms run close to the equipment onsite, for example on an edge server that collects data locally in a production facility or wind farm.

Over the next three to five years, the calculation power of industrial controllers and edge computing, and use of cloud systems, will rapidly increase. This lays the foundation for better software functionality on production systems.

Predictive maintenance will consider data not just from one machine or site but from multiple factories and across equipment from different vendors. Depending on the requirements, these AI-based algorithms will be deployed on non-real-time platforms in addition to real-time systems such as programmable logic controllers.

Cloud computing will become ever more relevant with predictive maintenance systems feeding data from equipment from anywhere in the world into a cloud platform. Cloud computing will allow manufacturers to collect data from numerous sources to train predictive maintenance algorithms more efficiently. Despite some scepticism over data ownership and security, it won’t be long until cloud-based predictive maintenance becomes a reality.

For engineers who apply predictive maintenance to their production lines, there are myriad benefits to be realised. Engineering teams that remain sceptical and haven’t determined how predictive maintenance can be monetised risk putting their organisations at a competitive disadvantage. Fortunately, there are tools available that simplify the process and combine domain expertise with machine learning, making predictive maintenance and its benefits feasible for any organisation considering it.

*Philipp Wallner is industry manager at MathWorks

For more information:

https://uk.mathworks.com/

https://twitter.com/MATLAB

https://www.linkedin.com/company/the-mathworks_2/

https://www.facebook.com/MATLAB