The Future of Industrial Networking

During the Automation World Conference & Expo, a panel of technology experts discussed how advanced networking technologies will lead to open, standards-based communication from the sensor to the cloud.

There’s still a lot to understand about industrial networking, especially since Ethernet is now “everywhere,” from the enterprise to the factory floor. But different Ethernet communication protocols persist at the fieldbus level, and advanced technologies add to an already complex network architecture.

Factor in recent announcements on technology standards like Time-Sensitive Networking (TSN) and OPC UA, along with more discussions surfacing around how to handle the proliferation of intelligent devices related to the Industrial Internet of Things (IIoT), and the network is becoming the priority on the plant floor.

“As technology suppliers announce support for TSN, OPC UA and other industrial Ethernet communication protocols, coupled with the technologies arising to connect legacy equipment with data aggregation and analytics, it is becoming clear that the network is at the heart of automation today,” said Automation World Director of Content/Editor-in-Chief David Greenfield during a panel discussion at The Automation World Conference & Expo in Chicago last week.

The panel, made up of six executives from Inductive Automation, Moxa, Oden Technologies, OPC Foundation, Opto 22 and PTC/Kepware, discussed the issues influencing the evolution of the network. Specifically, they addressed the need for open communication from the sensor to the cloud, the need to collect more data and analyze it quickly, the need for cybersecurity and, most importantly perhaps, the need for a business case around the new technology.

“There’s a lot of advanced networking, but I would caution that if the primary motivation is purely for IoT or Industry 4.0’s sake…because you don’t want to get left behind…that is potentially very problematic,” said Eddie Lee, director of global industry marketing for Moxa. He was referring to research that found that successful Fortune 500 companies experimenting with IoT had a specific business case or outcome they were trying to solve. “If you are talking specifically about OEE and trying to improve productivity by 8 percent and the idea is that IoT technology and the advanced networking that comes with it will enable that, you have to have a specific metric of ROI to help calculate or prove out why you are making this investment and taking the risk,” Lee said.

Travis Cox, co-director of sales engineering at Inductive Automation, agreed, noting that it requires a closer look at what’s happening on the shop floor. “I think there’s always been a need for more use of the data that the customers have and interoperability between apps, but SCADA, for a long time, hasn’t been able to talk to higher levels like ERP or maintenance management…so as there’s more demand on data and processes, it’s driving [organizations] to take a look at the architecture they currently have. A lot of times the data they need from the business side is not accessible on the operation side. So, they are taking a step back to see how they can do things differently. And a lot of that has to do with the network.”

But doing things differently has been a problem, as a lot of the onus is on the technology suppliers to change the way they approach the market.

“The biggest change we’ve seen in industry over the years is that end users are starting to speak out about their level of frustration with proprietary systems,” said Tom Burke, strategic marketing officer for the OPC Foundation. “They don’t want to see this anymore, particularly in the network. They are tired of buying a valve that doesn’t work because they don’t have the network infrastructure to support it.”

With that pressure coming from the end users, the vendors behind fieldbus protocols like Profinet, EtherNet/IP, EtherCAT and others realize they need to work toward communication standardization. “These guys recognize that it makes no sense to make the vendors [and end users] do so many different things,” Burke said.

Enter TSN.

TSN is the IEEE 802.1-defined standard technology that provides deterministic messaging on standard Ethernet. Originally developed for audio/video applications, TSN uses scheduling to guarantee on-time delivery of traffic for real-time applications on standard Ethernet, which makes it a good fit for industrial applications like machine control.

Indeed, the biggest knock on standard Ethernet on the plant floor is that it’s not deterministic. You can’t guarantee that an e-mail message on the enterprise network won’t interrupt delivery of control signals to an actuator, for example. So the industry is now rallying around TSN to solve that problem, with the great equalizer being the adoption of OPC UA over TSN.
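The scheduling idea behind TSN can be sketched in a few lines: a repeating gate control list (in the spirit of IEEE 802.1Qbv) reserves a window in every cycle during which only control traffic may transmit, so best-effort traffic like e-mail can never crowd it out. The cycle length, window sizes and class names below are illustrative assumptions; real TSN schedules run in switch hardware against clocks synchronized via IEEE 802.1AS.

```python
# Illustrative sketch of TSN-style time-aware scheduling.
# Not a real protocol implementation; numbers are assumptions.

CYCLE_US = 1000  # repeating 1 ms schedule

# Gate control list: (offset_us, duration_us, traffic_class_open)
GATES = [
    (0,   200, "control"),      # guaranteed window for control frames
    (200, 800, "best_effort"),  # everything else (e.g., e-mail, video)
]

def open_class(t_us):
    """Return which traffic class may transmit at time t_us."""
    phase = t_us % CYCLE_US
    for offset, duration, tclass in GATES:
        if offset <= phase < offset + duration:
            return tclass
    return None

def transmit(frame_class, t_us):
    """A frame goes out only while its class's gate is open."""
    return open_class(t_us) == frame_class
```

Because the control window repeats every cycle, worst-case latency for a control frame is bounded by the schedule, which is what "deterministic" means here.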

“In November of last year, the OPC Foundation announced OPC UA over TSN and there are about 22 vendors that have decided to participate, and this is going down to the field,” Burke said. End users should be educated on this industry standardization and prepare to build it into their future network plan. “Will it replace the industrial networks of today? Not in my lifetime. But it will start to displace them over time.”

There’s still a long way to go for total TSN adoption, but it will happen. “If you are a machine builder, you might be waiting for your customers to demand a certain level of connectivity or technology before choosing to implement,” Moxa’s Lee said. “But if you are an end user with goals about improving maintenance or productivity, that might drive faster adoption.”

It’s not just how to get away from the “fieldbus wars” that is of concern to manufacturers, but also how to access and analyze data wherever it may be.

On the Edge

The cloud has long been thought of as the place to go to get the horsepower needed to run sophisticated analytics that can provide detailed insights that an operations team can act on. But is that enough?

“The cloud provides capabilities we can use which, until now, were unheard of,” said Peter Brand, COO & co-founder of Oden Technologies. “But if you don’t have the network to get that unified and normalized data set first, you won’t get too far. And that’s where the edge comes into play. For all of the insights and models that you are building in the cloud, ultimately, there will be some mission-critical apps that still need to run even if the Internet goes out. If we can push those models developed in the cloud to run down at the edge, then we have the best of both worlds.”
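Brand’s “best of both worlds” can be sketched as a model trained in the cloud but cached and executed at the edge, so the mission-critical check keeps running when connectivity drops. The `EdgeRuntime` class, the threshold rule and its values are hypothetical illustrations, not any vendor’s API.

```python
# Hypothetical sketch: cloud-trained model cached at the edge.

class EdgeRuntime:
    def __init__(self, cached_model):
        self.model = cached_model   # last model pushed from the cloud
        self.cloud_reachable = True

    def update_from_cloud(self, new_model):
        """Called whenever connectivity allows a fresh model push."""
        self.model = new_model

    def evaluate(self, reading):
        """Runs locally either way; the cloud is not in the loop."""
        return self.model(reading)

# Hypothetical anomaly rule "trained" in the cloud.
def cloud_model(x):
    return "alarm" if x > 80.0 else "ok"

edge = EdgeRuntime(cloud_model)
edge.cloud_reachable = False    # the Internet goes out...
result = edge.evaluate(95.0)    # ...but the check still runs locally
```

The design point is that `evaluate` never touches the network, so losing the cloud only delays model updates, not the mission-critical decision itself.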

In addition, there are so many devices—including wireless—on the network that are trying to communicate, and the volume of data out there is overwhelming. But there is no need to push all of that information to the cloud.

“Today, with access to processing technology such as microprocessors and industrial components, including solid-state drives, we have a tremendous amount of compute capability at the edge, where the source of the data is,” said Benson Hougland, vice president of marketing and product strategy at Opto 22. “That is a key aspect we have to remember. If you look at traditional implementations that we’ve seen over time, it is a bunch of different layers stitched together to turn a data point into a piece of contextual data that someone can do something with. When you stitch things together, you introduce security vulnerabilities, licensing issues, and do you have the domain expertise required? It’s a very brittle architecture. So the notion of putting as much as possible on the edge eliminates the brittle components and at the same time enhances security and performance to a level not seen before, made possible by applying processors and industrial components as close to the data source as possible, which is the edge.”
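Hougland’s point about applying compute at the data source can be sketched as a single contextualization step at the edge: a raw ADC count becomes a self-describing reading with engineering units and an asset path before it ever crosses the network. All tag names, scaling spans and asset identifiers below are hypothetical.

```python
# Hypothetical sketch of edge contextualization: raw counts in,
# self-describing engineering values out.
import time

TAG_CONFIG = {
    "AI0": {  # hypothetical analog input channel
        "asset": "PlantA/Line3/Pump7",
        "name": "discharge_pressure",
        "units": "psi",
        "raw_span": (0, 65535),    # 16-bit ADC counts
        "eng_span": (0.0, 150.0),  # engineering range
    }
}

def contextualize(channel, raw):
    """Scale a raw count to engineering units and attach metadata."""
    cfg = TAG_CONFIG[channel]
    raw_lo, raw_hi = cfg["raw_span"]
    eng_lo, eng_hi = cfg["eng_span"]
    value = eng_lo + (raw - raw_lo) * (eng_hi - eng_lo) / (raw_hi - raw_lo)
    return {
        "asset": cfg["asset"],
        "tag": cfg["name"],
        "value": round(value, 2),
        "units": cfg["units"],
        "timestamp": time.time(),
    }
```

Nothing upstream needs to know the device’s register map; consumers receive data that already carries its own context, which is the consistency-across-plants point Cox raises next.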

Cox agreed, pointing to Inductive Automation’s Ignition development tool, which can map legacy equipment to provide context. But the network is becoming more powerful in its own right.

“The single source of truth idea is the biggest part of an architecture change,” Cox said. “These devices are smart enough and have the computing power to provide all the context needed and to also have consistent models across plants, because if you want to bring [data] up to the cloud for analytics, you need to know the assets in plant ‘A’ and plant ‘B’ have consistency and standards. And all of that is being defined at the edge. It is a really important point.”

Ray Labbe, principal applications engineer for PTC/Kepware, reiterated the earlier point that it’s important for end users to focus on the use case and the application involved before looking at how to get access to the data.

“It comes back to the maturity level of you as a customer or your facilities...Especially around smart manufacturing and IoT initiatives being brought forward and trying to connect to legacy assets. It takes a disciplined approach to getting connectivity. First get the infrastructure in place to access data and then [figure out] what the data is [that needs to be accessed] and build a strategy around it.”

That strategy can take various approaches and use different technologies, from PTC/Kepware software to gateways and wireless. “It’s important to know you have options to bridge or access devices and build contextualization as close to the source as possible,” Labbe said.

Security

As industrial Ethernet becomes the de facto standard on the plant floor network, Automation World’s Greenfield asked the panel, “What options do we have to prevent hacking?”

Moxa’s Lee said there’s no question that the more things are connected, the more security vulnerabilities they create. And there are many security options, from secure remote access endpoints to network management. “It’s complicated; there is no easy answer. But knowing where to start is the key,” he said.

Perhaps that starts with a conversation between the IT and the OT teams, said Opto 22’s Hougland, noting that despite the fact that Ethernet has reached a level of ubiquity, the architecture is still componentized. There’s the Cat5 cable and then what runs over the copper with other variables on top. And, when you give access to a device on the network, you have to provide a way for it to accept messages and respond to messages. “That inherently is a security risk,” he said.

So, as an example, Hougland said, you turn to IT to solve the security problem. “But think about what you’ve done. If you have a PLC or remote I/O unit and you need data, you go to IT and say you need an IP address. And IT says okay, but what is this PLC, I don’t know how to manage that. You’ve shifted the responsibility of security to IT, which is why they turn into the department of ‘No’.”

Fundamentally, there needs to be a way to overcome that problem, and there is: technologies like MQTT and its publish/subscribe setup. A device publishes data continuously to a broker, and another device can connect to that broker to get that data without creating a security vulnerability.

“Decoupling the device from the applications is the most important piece of it,” said Inductive Automation’s Cox. “In the pub/sub methodology, the data producer—which is a sensor or PLC—doesn’t know who the end consumer is and the consumer doesn’t know who the device is. It allows for a lot of advantages like plug and play, heightened security, better bandwidth usage because you are reporting data by exception not polling devices and auto-discovery as more things are added.”
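The decoupling Cox describes can be sketched with a toy in-memory broker: the sensor publishes to a topic, the consumer subscribes to the topic, and neither ever references the other; report by exception means only changed values get published. A real deployment would use an MQTT broker (this `Broker` class is a stand-in, and the topic name is hypothetical).

```python
# Toy in-memory sketch of pub/sub decoupling and report by exception.
# A stand-in for an MQTT broker, for illustration only.
from collections import defaultdict

class Broker:
    def __init__(self):
        self.subscribers = defaultdict(list)

    def subscribe(self, topic, callback):
        self.subscribers[topic].append(callback)

    def publish(self, topic, payload):
        for cb in self.subscribers[topic]:
            cb(topic, payload)

class Sensor:
    """Publishes only when its value changes (report by exception)."""
    def __init__(self, broker, topic):
        self.broker, self.topic, self.last = broker, topic, None

    def sample(self, value):
        if value != self.last:          # suppress unchanged readings
            self.last = value
            self.broker.publish(self.topic, value)

broker = Broker()
received = []
# The consumer knows only the topic, never the device.
broker.subscribe("plantA/line3/temp", lambda topic, value: received.append(value))

# The producer knows only the broker, never the consumer.
sensor = Sensor(broker, "plantA/line3/temp")
for reading in [72, 72, 72, 73, 73, 74]:
    sensor.sample(reading)
# Only the three changed values reach the consumer.
```

Because producer and consumer share nothing but a topic string, adding a new consumer (the plug-and-play advantage Cox cites) is just another `subscribe` call, and the suppressed duplicate readings are the bandwidth saving of reporting by exception rather than polling.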

The ability to get the data in the first place is why the pub/sub model is exciting, the panelists agreed.

“But beyond accessibility of data, it’s the context of information and going as close to the source as possible that is the real paradigm shift,” said PTC’s Labbe. “It will provide a more event-driven model of data exchange that not only minimizes impact to the network, but simplifies the connection to the data source.”
