Technology & Engineering
Is Production Reliability Becoming a Service Package?
One of the hallmarks of “Industry 4.0” is preventative and predictive maintenance controlled by intelligent algorithms. Suppose an expert identifies a potential outage scenario with an occurrence probability of just over 40 per cent. Thanks to machine learning, the probability of a successful intervention based on predictive maintenance algorithms is high. Yet programmers face a fundamental challenge: intelligent algorithms can only learn from historical data and are conditioned accordingly. Only a fraction of the historical data records is relevant for fault forecasting. Beyond the quality of the historical data set, further challenges lie in determining current and future indicators of looming outages (e.g. vibration profiles) and in handling the growing volume and variety of data from ever more sensors. Products and production technologies, moreover, are changing too rapidly for yesterday’s data to remain representative.
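The idea of learning a fault indicator from historical records can be sketched in miniature. The example below is a deliberately simplified stand-in for a real machine learning pipeline: a single vibration feature, invented amplitude values and field names, and a brute-force threshold search in place of a trained model.

```python
# Hypothetical sketch: "learning" a fault-warning threshold from historical
# vibration records. All data and the one-feature approach are invented
# for illustration; real predictive maintenance uses far richer models.

def learn_threshold(history):
    """Pick the vibration threshold that best separates historical
    fault records from normal ones (most records classified correctly)."""
    candidates = sorted(v for v, _ in history)
    best_t, best_correct = None, -1
    for t in candidates:
        correct = sum((v >= t) == faulty for v, faulty in history)
        if correct > best_correct:
            best_t, best_correct = t, correct
    return best_t

# Historical records: (peak vibration amplitude in mm/s, fault occurred?)
history = [(1.2, False), (1.5, False), (2.1, False),
           (4.8, True), (5.3, True), (6.0, True)]

threshold = learn_threshold(history)
print(threshold)          # amplitude above which an outage is predicted
print(7.1 >= threshold)   # a new, higher reading would trigger intervention
```

The sketch also exposes the article’s point: the threshold is only as good as the historical records it was fitted to. If a new product generation vibrates differently, the learned value is silently wrong.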
Software produced by CGnal provides insight into the quality of the work done by such algorithms. Its machine learning algorithm can identify 60 per cent of all faults in control units used in hospitals, while the remaining faults go unrecognised. In addition, the system generates false alarms in five per cent of cases. False alarms and near misses are therefore important quality criteria: every intervention, whether too late or in response to a false alarm, costs money. In the “Industry 4.0” concept, evaluation processes are also carried out in sequence, which increases the risk of false alarms. When machines are networked and able to learn from one another, it matters not only how they learn, but also from which source – i.e. from which machine – and, above all, what they learn.
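The two quality criteria mentioned above – detection rate and false-alarm rate – fall straight out of a confusion matrix. The counts below are invented purely to mirror the 60 per cent and five per cent figures in the text; they are not CGnal’s actual data.

```python
# Toy illustration of the quality criteria discussed above. The counts
# (100 real faults, 1000 fault-free cases) are hypothetical and chosen
# to reproduce the 60% detection / 5% false-alarm figures in the text.

def quality_metrics(tp, fn, fp, tn):
    detection_rate = tp / (tp + fn)     # share of real faults caught (recall)
    false_alarm_rate = fp / (fp + tn)   # share of normal cases wrongly flagged
    return detection_rate, false_alarm_rate

detect, false_alarm = quality_metrics(tp=60, fn=40, fp=50, tn=950)
print(detect)        # 0.6  -> 60% of faults identified
print(false_alarm)   # 0.05 -> false alarms in 5% of fault-free cases
```

Because every false alarm costs an unnecessary intervention, both numbers, not just the detection rate, belong in any service-level assessment of such a system.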
Decentralisation and individualisation in production impede the general applicability of algorithms and therefore make the job of machine learning experts more difficult. One saviour on the horizon could be a typical software trend that now also extends to machine learning: higher-level, and therefore more user-friendly, programming languages are opening up access to an ever wider circle of industrial enterprises. The risk, though, is that even experts are working in a frontier zone when dealing with AI. If companies now want or need to tweak the software themselves, they run the risk of false alarms and near misses. The example of Google Flu Trends demonstrated that a machine learning algorithm configured by experts may well work initially, yet easily lose its forecasting quality as soon as the environment changes.
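The Google Flu Trends failure mode – a model that quietly degrades when its environment shifts – is exactly why deployed forecasting systems need ongoing quality monitoring. A minimal sketch of such a drift check follows; the function name and the tolerance value are illustrative assumptions, not a standard API.

```python
# Hedged sketch of drift monitoring: compare the model's recent error
# rate to the error rate measured at deployment, and flag the model for
# review when the gap exceeds a tolerance. Names and the 0.10 tolerance
# are assumptions made for this example.

def drift_alert(baseline_error, recent_errors, tolerance=0.10):
    """Return True when the mean recent error exceeds the deployment-time
    baseline by more than `tolerance`, i.e. the model needs attention."""
    recent = sum(recent_errors) / len(recent_errors)
    return recent - baseline_error > tolerance

# Error rate was 0.08 when the model was validated; then conditions shift:
print(drift_alert(0.08, [0.07, 0.09, 0.08]))   # stable environment -> False
print(drift_alert(0.08, [0.15, 0.22, 0.25]))   # degraded forecasts -> True
```

A check like this does not prevent drift, but it turns a silent loss of forecast quality into an explicit signal – the difference between Google Flu Trends failing unnoticed and a maintenance contract that can guarantee a service level.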
Industry 4.0 will not work without machine learning. And yet several questions must be asked: How can forecast quality be ensured in the face of changing technologies and value creation? What happens when machine learning know-how is democratised? Will machine breakdown policies pay out in cases where algorithms have been modified? Or will insurance from new providers absorb the shortfalls? Will production reliability and quality consequently be offered as a service instead of predictive maintenance?