SCADA and Edge Computing

SCADA systems are not well suited to operating critical applications in the cloud, but they can provide useful information to cloud systems while also bringing more intelligent resources to local users.

Wikipedia defines edge computing as “a method of optimizing applications or cloud computing systems by taking some portion of an application, its data or services away from one or more central nodes (the ‘core’) to the other logical extreme (the ‘edge’) of the Internet, which makes contact with the physical world or end users.”

In the context of the Industrial Internet of Things (IIoT), supervisory control and data acquisition (SCADA) systems have always worked close to the data source, such as programmable logic controllers (PLCs) and sensors. But with so much data being collected, SCADA is usually an important data source for other enterprise systems as well, including historians and business intelligence applications in the cloud.

In the cloud, computational power from Hadoop clusters and GPUs is available at a fair price, with the company paying only when it needs the power instead of buying and maintaining a large infrastructure. But industrial systems cannot simply move to the cloud: they usually require low latency and extremely high availability, so for many critical cases we will not see SCADA moving to the cloud anytime soon.

But to evolve, how might SCADA work as a state-of-the-art edge computing system, providing useful information to cloud systems while also bringing more intelligent resources to local users?

There are several different architectures that could support this scenario, typically relying on open source software, inexpensive hardware, low cloud costs and a significant amount of customization. One architecture, for example, would use the SCADA system as a source of information for a cloud-based machine learning framework such as TensorFlow. Time-series data, along with alarms and events, could be used to train a model to detect possible failures. The training occurs in the cloud, using clusters and GPUs, but the detection does not need to: if the detection leads to a SCADA alarm, you might not want to depend on the cloud.
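As a rough illustration of the data-preparation side of that idea, the sketch below turns a SCADA time-series channel and its alarm history into labeled training windows for a failure-prediction model. The function name, the window and horizon parameters, and the sample values are all illustrative assumptions, not part of any particular SCADA product or TensorFlow API.

```python
# Sketch: assembling training examples for a cloud-side failure model
# from SCADA time-series data and alarm history. WINDOW and HORIZON
# are assumed tuning parameters.

WINDOW = 5    # samples of history fed to the model
HORIZON = 3   # label a window "failure" if an alarm fires this soon after it

def make_training_set(readings, alarm_times):
    """Slide a window over the readings; label each window 1 if an
    alarm occurs within HORIZON samples after the window, else 0."""
    alarm_set = set(alarm_times)
    examples = []
    for start in range(len(readings) - WINDOW):
        end = start + WINDOW
        label = int(any(t in alarm_set for t in range(end, end + HORIZON)))
        examples.append((readings[start:end], label))
    return examples

# Example: a temperature channel with an alarm at sample index 8.
readings = [70, 71, 70, 72, 75, 80, 88, 97, 110, 90]
examples = make_training_set(readings, alarm_times=[8])
# Windows that end shortly before the alarm are labeled 1.
```

The labeled windows would then be shipped to the cloud for training; the trained model, not the raw pipeline, comes back to the edge.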

Once you have a trained model, the processing can go back to the edge. A SCADA system, or other hardware working close to it, can run a local TensorFlow model to detect known patterns that can lead to a failure on the asset being monitored. This can be done on hardware as simple as a Raspberry Pi 3 or another fanless computer with an ARM processor. Once a possible failure is detected, it can be sent to the SCADA system using the MQTT protocol so it can trigger an alarm to the operator. All of this happens at the edge, on a safe, low-latency local network.
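A minimal sketch of that edge-side loop, with the trained model stubbed out as a simple scoring function: the topic name, payload fields, asset name and threshold are all assumptions for illustration. On a real device the scoring stub would be replaced by the deployed model (e.g., a TensorFlow Lite interpreter), and a client library such as paho-mqtt would publish the payload to the local broker that the SCADA system subscribes to.

```python
# Sketch of edge-side detection: score recent readings and, on a hit,
# build the MQTT payload a SCADA alarm could consume. The threshold,
# topic convention and payload schema are assumed for illustration.

import json

ALARM_THRESHOLD = 0.8  # assumed score above which we flag a likely failure

def model_score(window):
    # Stand-in for the trained model running on the edge device;
    # here: the normalized rate of change over the window.
    return min(1.0, max(0.0, (window[-1] - window[0]) / 100.0))

def detect(window, asset="pump-01"):
    """Return the alarm payload to publish (e.g., on topic
    scada/alarms/pump-01), or None if no failure is predicted."""
    score = model_score(window)
    if score < ALARM_THRESHOLD:
        return None
    return json.dumps({"asset": asset,
                       "score": round(score, 2),
                       "event": "predicted_failure"})

payload = detect([20, 40, 70, 110])  # a fast temperature rise
```

Because both the model and the broker live on the local network, the alarm path never depends on cloud connectivity.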

Although SCADA systems are not yet ready to run software like TensorFlow themselves, the scenario described is already possible using open source software alongside an existing SCADA system, on the same hardware or on an inexpensive computer on the same network. System integrators also have to evolve, as this scenario requires intensive customization with skills related more to software than to automation. On the other hand, a scenario like this also requires the specific market knowledge that integrators have.

Mario Gonsales Ishikawa is an advisor at scadaHUB Technology, a member of the Control System Integrators Association (CSIA). For more information about scadaHUB, visit its profile on the Industrial Automation Exchange.
