Solving big data problems with edge controller technology
By Dieter Gebert and Luigi Ballerio, application engineers for machine automation solutions at Emerson
Manufacturers keen to exploit the full potential of their plants are increasingly looking at how 'big data' can help them create operational efficiencies. Big data, defined as extremely large and complex data sets, presents a number of opportunities for organisations, supporting predictive maintenance strategies as well as identifying issues, bottlenecks and areas of underperformance. The actionable information provided to managers and executives via big data enables them to implement new processes and work practices to increase safety, improve production efficiency, reduce downtime and lower operating costs.
However, large and complicated data sets, accumulating at an incredibly fast rate, require the implementation of new advanced technology to gather the data, perform the analysis, and then present useful insights that can subsequently be acted upon. This is where the Industrial Internet of Things (IIoT) has entered the fray. The IIoT makes use of a vast array of existing and new sensor technology to monitor devices, equipment, machines, production lines and processes, all connected via a variety of data communications networks to high-performance computer processing and analysis software that interprets and then presents this data.
Implementing technology to process and analyse big data can be a major problem when companies try to tackle it all at once. Far too many organisations are tackling big data with unaffordable, impractical, mega-scale projects. A better approach may be to start smaller and concentrate on known problems with defined parameters, which could be described as 'little data'. Focusing the field of view on a specific asset reduces complexity and simplifies the search for a solution. In most industries, this means starting at the machine or production line level, and one of the main technologies for creating value from this little data is edge computing. Data produced by field devices is analysed by a field-located controller to generate insights, and this information can be supplied to the right personnel, close to the source, for fast, informed action.
Edge computing is essentially a distributed computing paradigm that brings computing processing and data storage closer to the location where it is needed, to improve response times and save bandwidth.
As devices get smarter, they produce more analytics to generate insights into equipment health and performance. Edge computing technology is doing this at or near the source of the data, instead of relying on the cloud and the computing power within data centres. With the latest edge controllers, embedded processing brings those insights closer to the plant floor, whilst also making them more widely available via the cloud.
Edge computing, as realised by true edge control technology, makes IIoT a reality for every plant and enterprise today. Easy to integrate into an existing plant without having to start again, it is enabling manufacturers to embrace the benefits of IIoT, solve key problems simply and affordably, and then scale up.
OEMs and manufacturers can use edge computing to evaluate equipment breakdowns and eliminate common issues. The technology can be used to provide feedback to development teams on machine performance to help them optimise future products. It can be used to answer questions such as how the machine is actually used, what quality issues exist, and whether costs can be reduced without affecting performance. Comparisons can be made between machines, processes and entire plants, as well as between raw materials in terms of yield, quality and scrap. Machine use can be tracked, as can energy use, startups and changeovers, to help optimise performance and ensure compliance with safety and environmental requirements.
The latest edge controllers, such as the PACSystems RX3i Edge Controller from Emerson, offer both deterministic and non-deterministic control in a single compact device. These devices effectively have 'two sides of a brain': the left side is where intelligent sensor data is gathered and real-time deterministic control is provided, while the right side has a software stack running on Linux to deliver data processing and analytics, dashboards, data logging, and remote monitoring and diagnostics.
Imagine a single control unit, technically an industrial PC with a multicore architecture, separated into two brains. The left side performs the typical functions of a controller: reading inputs, executing logic in real time and writing outputs. This all takes place in the programming environment typical of a PLC, with an I/O network, optional redundancy between controllers, and connections to HMI, SCADA and DCS systems. In addition, the left side prevents any problems the right side might incur from affecting the control functions.
The right side, which runs a Linux-based open operating system, can manage multiple loops and routines locally, collect data and interface with standard IT-world languages such as Python or Java. It also includes a webserver and secure communication protocols to the cloud, such as MQTT. Above all, it can process large amounts of data locally, through algorithms available on the controller and possible external optimisation. For the first time in automation, two worlds are truly connected and interacting: the data on the left side provides the basis for the processing on the right side. The automatic optimisation algorithms, possibly connected to a cloud, produce results that also serve to further optimise the control part on the left side. Exchanging data between PLC and SCADA, running optimisation routines and using calculation results to improve control parameters were already possible, but with the latest edge controllers this can be done far more quickly, improving the control logic of the machine or process itself.
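To make this left/right interaction concrete, the sketch below shows, in plain Python, a hypothetical right-side routine that ingests process values from the deterministic side, computes a rolling statistic, and returns an adjusted setpoint to write back. The class, tag values and tuning numbers are illustrative assumptions, not the actual controller API; a real implementation would exchange data over the controller's internal interface.

```python
from collections import deque
from statistics import mean

class RightSideOptimiser:
    """Hypothetical sketch of a Linux-side (right-brain) routine that
    tunes a control parameter from recent process data. The interface
    and numbers are illustrative, not a real controller API."""

    def __init__(self, window=20, target=75.0, gain=0.1):
        self.history = deque(maxlen=window)  # rolling window of process values
        self.target = target                 # desired process value
        self.gain = gain                     # how strongly to correct the setpoint

    def update(self, process_value, current_setpoint):
        """Ingest one sample from the deterministic (left) side and
        return a setpoint adjustment to feed back to the control logic."""
        self.history.append(process_value)
        if len(self.history) < self.history.maxlen:
            return current_setpoint          # not enough data to act yet
        error = self.target - mean(self.history)
        return current_setpoint + self.gain * error

# Simulated exchange: the left side supplies samples; once the window
# fills, the right side returns a corrected setpoint.
opt = RightSideOptimiser(window=5, target=75.0, gain=0.5)
setpoint = 70.0
for pv in [70.2, 70.1, 69.8, 70.3, 70.0]:
    setpoint = opt.update(pv, setpoint)
print(round(setpoint, 2))
```

In a real deployment the same loop could also publish its results to the cloud over MQTT, as the article notes, but the core pattern is this local read-analyse-write-back cycle.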
Anomaly detection at the edge
A typical application is anomaly detection at the edge. This requires historical data from a database such as InfluxDB or SQLite, and a machine learning algorithm created in Python or other tools such as Prometheus. A selected data sample is taken and cleaned, removing all outliers, before the ML programme is trained. The test dataset can then be applied, followed eventually by live data from the machine and relevant instruments. The ML algorithm can then spot an anomaly if one occurs, record it or raise an alarm for an equipment operator. This helps issues to be identified before they become real problems: a shutdown of the machine or operation can be scheduled, necessary parts ordered and downtime minimised, ultimately reducing the cost impact.
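The clean-train-monitor flow described above can be sketched in plain Python. As a minimal stand-in for the ML step, this example uses a simple three-sigma detector rather than a trained model, and assumes the historical sample has already been pulled from the database; the function names, data values and thresholds are all illustrative.

```python
from statistics import mean, stdev

def clean(sample):
    """Remove outliers from the historical sample using the
    interquartile-range rule before training."""
    s = sorted(sample)
    q1, q3 = s[len(s) // 4], s[(3 * len(s)) // 4]
    iqr = q3 - q1
    lo, hi = q1 - 1.5 * iqr, q3 + 1.5 * iqr
    return [x for x in sample if lo <= x <= hi]

def train(sample):
    """'Train' the detector: learn the normal operating band
    (mean and standard deviation) from the cleaned data."""
    cleaned = clean(sample)
    return mean(cleaned), stdev(cleaned)

def is_anomaly(value, model, n_sigma=3.0):
    """Flag live readings that fall outside the learned band."""
    mu, sigma = model
    return abs(value - mu) > n_sigma * sigma

# Historical readings, e.g. bearing temperatures pulled from InfluxDB
# or SQLite; 95.0 is a logging glitch the cleaning step removes.
history = [71.8, 72.1, 71.9, 72.4, 72.0, 71.7, 72.2, 95.0]
model = train(history)

# Live readings from the machine and relevant instruments
for reading in [72.3, 72.0, 88.5]:
    if is_anomaly(reading, model):
        print(f"Anomaly detected: {reading}")  # record it or raise an alarm
```

In production, the final branch would log the event or notify the operator, so a shutdown can be scheduled and parts ordered before the fault escalates, exactly the early-warning benefit the article describes.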
The need for digital transformation is now much more apparent, with business owners able to see the return on investment from IIoT supported by edge computing. IIoT allows companies to access, analyse, and historise previously isolated data that is critical to operational improvement. Edge controllers provide an affordable, manageable way to bring the IIoT down to the machine edge, enabling organisations to begin solving big data problems one step at a time. With computing now faster, smaller and cheaper than ever, and data transfer widespread and falling in cost, it is now easy to implement little data projects and to make mission-critical capabilities like remote monitoring a reality.