
Making sense of edge computing

According to Google Trends data, worldwide searches for ‘edge computing’ have increased tenfold in the last five years. Google returns some 340 million search results for the phrase, hardly light reading for anyone curious about the technology. Amid the noise and the varying definitions, edge computing has become somewhat misunderstood. Here, I examine the role of edge computing in smart factories.


Edge computing describes a distributed model of computing that brings data analysis closer to the source of the data. In a factory setting, this could mean data processing taking place at the machine level. Unlike centralised models, where information is sent to a data centre or the cloud, edge computing allows data capture, analysis, and action to be performed at the edge of a process, hence the name.

Despite its capabilities, edge computing is not a replacement for centralised data storage methods, or an alternative to other data processing and management technologies. In fact, these architectures must work together to be truly beneficial.

Moreover, edge computing isn’t an entirely new concept. In the context of the Industrial Internet of Things (IIoT), factories have long used systems that work close to the data source. Supervisory Control and Data Acquisition (SCADA) systems, for instance, have done exactly that through programmable logic controllers (PLCs) and sensors.

So, are edge devices just a rebranding of the same technology? Not exactly.


Latency reduction

Edge devices are unique in the sense that they provide first-stage processing of data before sending it elsewhere. Thanks to their intelligent capabilities, edge devices can also act on this data locally, on the device itself. This can be achieved using forms of artificial intelligence (AI) and machine learning to support decision making.

Because everything takes place on the device, this method can significantly reduce latency, removing the round trip of sending information to distant data centres and awaiting feedback.

In practice, this could help a manufacturer to avoid critical failures and downtime. In an oil and gas application, for example, an edge device could detect dangerously high pressure in a pipe. Rather than waiting for this data to be processed elsewhere and relayed back to a site manager, the device could trigger an instant shut-off or adjustment to avoid a disaster. The same method can be used to make automated adjustments that improve the outcome of a process, whether that relates to energy efficiency, accuracy or productivity.
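As a rough illustration, the decision logic running on such an edge device could be as simple as the Python sketch below. The sensor reading, pressure limit and shut-off call are hypothetical placeholders rather than any vendor’s real API; the point is simply that the check and the reaction both happen on the device.

    import random
    import time

    PRESSURE_LIMIT_BAR = 85.0   # assumed safe operating limit (illustrative)
    SAMPLE_INTERVAL_S = 0.1     # read the sensor ten times per second

    def read_pressure_bar() -> float:
        # Placeholder for a real sensor driver; here a simulated reading.
        return random.uniform(70.0, 90.0)

    def close_shutoff_valve() -> None:
        # Placeholder for the local actuator call that isolates the pipe section.
        print("Shut-off valve closed locally")

    def notify_operators(message: str) -> None:
        # Placeholder for reporting the event to the central platform afterwards.
        print(message)

    def monitor_loop() -> None:
        # The decision is taken on the device, so reaction time is bounded by
        # the sampling interval rather than a round trip to a distant data centre.
        while True:
            pressure = read_pressure_bar()
            if pressure > PRESSURE_LIMIT_BAR:
                close_shutoff_valve()
                notify_operators(f"Pressure {pressure:.1f} bar exceeded limit; valve closed.")
                break
            time.sleep(SAMPLE_INTERVAL_S)

    monitor_loop()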

The ability to effect change based on real-time data does exist in current software platforms. COPA-DATA’s zenon, for instance, can be deployed across an entire facility to monitor operations. Compatible with most communication protocols, the software can pull data from a variety of equipment, sensors and vertical systems to provide operators with a real-time dashboard of facility-wide insights. As in the earlier example, this can alert users to disruptions in production and highlight potential problems.

With software like this, edge computing certainly isn’t necessary for every single process. However, deploying edge devices in some areas of production can be valuable. On-edge computing can significantly reduce the volume of data being sent to these broader software platforms, reducing the bandwidth required and improving reaction times for the process in question. For critical processes, edge is the ideal next step in digital transformation.
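To make the bandwidth point concrete, the sketch below shows one hedged approach: the edge device keeps the raw samples to itself and forwards only a compact summary to the wider platform. The publish call and topic name are assumptions standing in for whatever protocol the site actually uses, such as MQTT or OPC UA.

    import json
    import statistics
    from typing import List

    def summarise(samples: List[float]) -> dict:
        # Condense a window of raw readings into a few statistics.
        return {
            "count": len(samples),
            "min": min(samples),
            "max": max(samples),
            "mean": round(statistics.fmean(samples), 3),
        }

    def publish(topic: str, payload: dict) -> None:
        # Stand-in for the uplink to the central SCADA or cloud platform.
        print(topic, json.dumps(payload))

    def forward_window(samples: List[float]) -> None:
        # The raw samples stay on the device; only the summary uses uplink bandwidth.
        publish("line1/temperature/summary", summarise(samples))

    forward_window([21.2, 21.4, 21.3, 22.0, 21.8])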


Streamlining data

IIoT technologies have resulted in a huge increase in data across the industry. Today, it is not unusual for manufacturers to produce data on everything from energy efficiency and productivity, right through to operational insights and predictive maintenance. In fact, research suggests that the average smart factory produces five petabytes of data every week — that’s five million gigabytes, or the equivalent of more than 300,000 16 gigabyte iPhones.

Manufacturing’s big data has quickly become colossal, and edge computing provides a way to reduce the volume of data being sent to a centralised space.

For industries that rely on data integrity for compliance, deploying edge computing to manage some data analysis can become a vital part of a data management strategy. Pharmaceutical manufacturers, for example, must comply with the Food and Drug Administration’s (FDA) 21 CFR Part 11 regulation. This regulation applies to drug manufacturers and biotech companies and requires them to keep accurate audit trails and electronic records. EU GMP Annex 11 is the European equivalent.

In these industries, on-edge analysis of some data can reduce the volume of information being sent to the cloud or data centre. Crucially, this ensures that time-sensitive data is not lost in the flood of information.

The fill-finish process for vials is a good example of how this could be beneficial. The process typically consists of a conveyor working alongside robotic equipment to dispense liquid into vials. It is commonly used for the filling of vaccines and syringes and must be done quickly and accurately, while also ensuring volumetric consistency. In addition to filling, vials are often washed and heat sterilised before use. Air pressure is also controlled to stop contaminants entering the vials. As you can imagine, there are masses of data in this operation alone.

Analysing this data on-edge allows the equipment to react instantly to potential problems. In an application like this, it could help prevent a contaminated batch and save significant time and costs for a pharmaceutical manufacturer.
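A minimal sketch of what such an on-edge check might look like is shown below, assuming a hypothetical fill-volume sensor; the target volume, tolerance and event record are illustrative rather than taken from any real fill-finish system. The accept-or-divert decision is made at the machine, and only the compact event record needs to travel upstream for the electronic audit trail.

    from dataclasses import dataclass, asdict
    from datetime import datetime, timezone

    TARGET_ML = 0.5      # assumed nominal fill volume (illustrative)
    TOLERANCE_ML = 0.02  # assumed acceptable deviation (illustrative)

    @dataclass
    class FillEvent:
        vial_id: str
        measured_ml: float
        accepted: bool
        timestamp: str

    def check_fill(vial_id: str, measured_ml: float) -> FillEvent:
        # Decide locally whether the vial passes and record an auditable event.
        accepted = abs(measured_ml - TARGET_ML) <= TOLERANCE_ML
        return FillEvent(
            vial_id=vial_id,
            measured_ml=measured_ml,
            accepted=accepted,
            timestamp=datetime.now(timezone.utc).isoformat(),
        )

    event = check_fill("V-000123", 0.47)
    if not event.accepted:
        # The reject happens at the machine; only this record travels upstream.
        print("divert vial:", asdict(event))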

Simultaneously, at the central data store, an auditor can expect a reduced volume of records to sift through when compiling evidence for FDA 21 CFR Part 11 compliance.


Scaling the edge

While some processes do benefit from instant data analysis, smart factories cannot work in silos. The rise of the edge does not mark the downfall of other data management technologies. In fact, it reinforces their necessity.

Software platforms that can communicate with edge devices are essential for making edge technology scalable. Moreover, platforms that can collect, analyse, and visualise data from the edge, while combining it with data from a variety of other equipment, are essential for building a holistic view of a factory’s operations.

Realistically, most manufacturing facilities are not yet in a position to deploy edge devices at scale, or to adopt the edge platforms that converge these technologies. Instead, manufacturers need scalable options on their journey to digitalisation, and independent software can be the glue that makes this possible.