In a traditional supervisory control and data acquisition (SCADA) architecture, all data sources in the field are polled from a centralized host. This requires all raw data to be requested and provided across the network so that it can be stored, monitored and analyzed in the enterprise using various applications (such as SCADA, historians and analytics). There are many potential problems with this traditional SCADA structure—including bandwidth limitations and costly network burdens.
Many companies are struggling to support the number of connected SCADA devices across their networks. But industry leaders see pushing data collection—and some analytics—to the edge as a potential solution to alleviate network bandwidth limitations and security concerns.
What is an edge solution?
The edge is defined as the network entry points or data sources in the field on the opposite end of the network from the centralized host. In networking terms, an edge device provides an entry point into enterprise or service provider core networks. Examples include routers, routing switches, integrated access devices, multiplexers and a variety of local area network (LAN) and wide area network (WAN) access devices. Devices and sensors built for the Industrial Internet of Things (IIoT) with access to the network are also considered edge devices. There are many components of an edge architecture, and it is likely that the solution will include hardware and software from multiple vendors.
Customizing an edge solution
Data transmission is not free, and reducing the amount of data transmitted across the network offers potential cost savings. Even a low-scale initiative that includes some analytics performed at the edge can help reduce data throughput and increase data consolidation. For example, a simple data reduction initiative could involve publishing data across the network only when it changes; if the value of an endpoint has not changed, there is no need to retransmit that same value.
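The publish-on-change idea above (often called report by exception) can be sketched in a few lines. This is a minimal illustration, not a reference to any specific SCADA product; the class name, the deadband parameter, and the in-memory `published` list standing in for a real network send are all hypothetical.

```python
from typing import Dict, List, Tuple


class ChangePublisher:
    """Publish a tag's value only when it differs from the last
    published value by more than a configurable deadband."""

    def __init__(self, deadband: float = 0.0):
        self.deadband = deadband
        self._last: Dict[str, float] = {}          # last published value per tag
        self.published: List[Tuple[str, float]] = []  # stands in for a network send

    def update(self, tag: str, value: float) -> bool:
        """Record a new reading; publish it only if it changed enough."""
        last = self._last.get(tag)
        if last is None or abs(value - last) > self.deadband:
            self._last[tag] = value
            self.published.append((tag, value))
            return True
        return False  # suppressed: no meaningful change, nothing sent
```

With a deadband of 0.5, a stream of readings like 20.0, 20.0, 20.05, 21.0 would publish only the first and last values, cutting traffic in half for this small example.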
More advanced analytics can also be pushed to the edge. By storing the data locally, analytic applications running on the local gateway or industrial PC can develop models and recognize patterns. For example, predictive analytics performed at the edge can apply machine learning techniques to anticipate certain outcomes before they occur, sending periodic updates on predicted outcomes and raising alerts when undesirable trends emerge. Instead of performing centralized data analytics to determine what caused downtime after it happens, an edge solution could help companies prevent an undesired state entirely with real-time predictive analysis.
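As a toy stand-in for the predictive analytics described above, the sketch below fits a least-squares trend over a sliding window of sensor samples and raises an alert when the extrapolated value would cross a limit within a chosen horizon. A real edge deployment would use a trained model; this trend extrapolator, with its hypothetical class name and parameters, only illustrates the predict-then-alert pattern running locally.

```python
from collections import deque


class TrendAlarm:
    """Fit a simple linear trend over a sliding window of samples and
    alert when the value extrapolated `horizon` steps ahead exceeds a limit."""

    def __init__(self, window: int, horizon: int, limit: float):
        self.samples = deque(maxlen=window)
        self.horizon = horizon
        self.limit = limit

    def add(self, value: float) -> bool:
        """Ingest one sample; return True if the trend predicts a limit breach."""
        self.samples.append(value)
        n = len(self.samples)
        if n < 2:
            return False  # not enough history to estimate a trend
        # Least-squares slope over sample indices 0..n-1.
        mean_x = (n - 1) / 2
        mean_y = sum(self.samples) / n
        num = sum((x - mean_x) * (y - mean_y) for x, y in enumerate(self.samples))
        den = sum((x - mean_x) ** 2 for x in range(n))
        slope = num / den
        predicted = self.samples[-1] + slope * self.horizon
        return predicted > self.limit
```

A flat signal never alerts, while a steadily rising one alerts well before the limit is actually reached, which is the point of doing the prediction at the edge rather than after the fact.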
Prescriptive analytics, which use optimization and simulation algorithms to recommend changes that achieve a desired result or state, may also be worth performing at the edge. The ability to make decisions locally and quickly based on collected and analyzed data can significantly improve an organization’s efficiency and safety.
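A minimal prescriptive sketch along these lines: simulate each candidate control setpoint with a process model and recommend the most aggressive one whose predicted result stays within a safety limit. The linear temperature model and the function name here are invented for illustration; a real system would use a calibrated simulation and a proper optimizer rather than a grid search.

```python
from typing import Callable, Iterable, Optional


def recommend_setpoint(model: Callable[[float], float],
                       candidates: Iterable[float],
                       limit: float) -> Optional[float]:
    """Simulate each candidate setpoint and recommend the highest one
    whose predicted output stays at or below the limit."""
    best = None
    for sp in sorted(candidates):
        if model(sp) <= limit:
            best = sp  # feasible and higher than any previous candidate
    return best


# Hypothetical process model: temperature rises linearly with throughput.
def temp_model(throughput: float) -> float:
    return 40.0 + 0.6 * throughput
```

Running `recommend_setpoint(temp_model, range(0, 101, 10), limit=90.0)` picks the largest throughput whose predicted temperature stays under the limit, which is exactly the kind of local, immediate decision the paragraph above describes.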
A large factor in determining how much analysis to push to the edge is how much bandwidth is available in each setting. Where connectivity is good, it might make sense to perform only data reduction and consolidation at the edge and let the cloud or host handle the majority of the enhanced analytics. In more challenging communication scenarios, more analytics are likely to get pushed to the edge. And where instant analysis and decisions are required, the more analytics performed, and decisions made, at the edge, the better.
For more information, visit PTC at www.ptc.com.