First, the good news: Ethernet, the 2000-pound gorilla of global networking, has arrived on the plant floor, bringing with it new flexibility, ease of information sharing and economies of scale.
Now, the bad news: Like any powerful creature, Ethernet can wreak havoc if not handled carefully. Specifically, many of Ethernet’s advantages, such as its openness and flexibility, pose new and dangerous risks to network uptime.
The first step for manufacturing decision makers in determining when and how to use Ethernet to network their control devices and subsystems is to recognize the differences between how this networking protocol is used in millions of enterprise networks globally and how it will be used on the manufacturing plant floor. Step two is to build security and reliability into the system from the ground up, and step three is to sustain that level of reliability as devices are added over time to an existing network.
Vive la différence
The drive to use Ethernet is both financial and functional. As the most widely deployed networking technology on the planet, Ethernet offers the lowest costs for common components, installation and training. When a plant floor network moves away from industry-specific bus networks and toward a global standard, it becomes easier to share information about plant performance throughout a company, seamlessly. And Ethernet is highly scalable.
It’s already recognized that Ethernet systems and devices aren’t simply making the giant leap from the boardroom to the manufacturing plant without modification. Equipment vendors are incorporating much of what they learned from proprietary plant networks into industrial Ethernet, including hardening Ethernet routers to survive the hostile environment of the plant floor, and using DC rather than AC power. Advanced Ethernet switching, message prioritization protocols and higher-speed Ethernet are all technological developments that pave the way for this networking technology to link the plant floor with the corporate network.
But those already involved with this technology say it is important to realize from the outset how Ethernet will be used in a different way to automate plant networks.
“The main focus in the IT (information technology) world is security and server availability,” says Mark Fondl, president of Newburyport, Mass.-based Network Vision, a company that makes software to enable plants to monitor the health of their networks. “In IT, they focus on making sure no one can access the network that isn’t supposed to. On the plant floor, as Ethernet gets deployed, the major issue for the controls people is uptime of the network—survivability in case of a particular failure.”
After all, the worst thing that happens when a corporate local area network (LAN) runs out of bandwidth is that office workers have to wait for data to arrive, and may get impatient or even be less productive.
“That’s why in an office network, bandwidth is generally oversubscribed,” says Roy Kok, HMI/SCADA Product Manager for GE Fanuc, Charlottesville, Va., a global supplier of automation controls that has standardized on Ethernet. “But I can’t have any oversubscription in a plant,” he says, particularly when process control or life safety issues are involved.
The need to maximize uptime dictates how a network will be physically laid out, or its topology, and the physical medium to be used.
Get your fiber
The cheapest wiring method is basic twisted-pair copper, but it is susceptible to electromagnetic noise, which can be generated by fluorescent light fixtures or electrical wiring, and, in installations that span buildings, to lightning. Fiber optic cable is immune to such noise and provides the highest speeds and greatest bandwidth, but at higher cost and installation complexity, because the cable is less flexible. Another option is coaxial cable, which is slightly more expensive than twisted-pair, but less vulnerable to electromagnetic interference.
The most recently available option for plant floor networks is wireless LAN technology, which offers enormous flexibility. It has come rapidly down the cost curve and become more widely available, but is still considered something of a security risk.
“The security piece on wireless is getting better all the time, but there are still some problems,” says Mike Broussard, product marketing manager, Transparent Factory and HMI, for Schneider Electric’s Automation Business, based in North Andover, Mass. “People may not be able to get at your data, but there is still the possibility that someone could get on your network and flood it with enough traffic to shut it down.”
Whatever the network medium, Broussard says, the most important thing is to have a plan for how to provide redundancy in the physical network.
“People are beginning to realize that this is a major weakness,” he says. “Companies would invest thousands of dollars in programmable logic controllers, industrial equipment and computers. That equipment is not the main point of failure. The problem is often going to be in the wiring—a surge on your copper, because of lightning or a motor that started nearby. And if you have a bad wiring plan, none of that equipment helps.”
On this front, manufacturers can take a page from the telecommunications industry’s book on keeping networks in service. Since the late 1980s, telecom service providers have focused on building self-healing ring networks that route traffic simultaneously in two directions, allowing an end device to take whichever signal is better, and providing continuous service in the event of a wiring fault, a failed device or any other problem on the network.
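The self-healing ring idea can be sketched in a few lines of Python. This is purely an illustration of the topology logic, not any vendor’s protocol: nodes sit on a ring, traffic travels both directions at once, and a single cable fault leaves one direction intact.

```python
def path_ok(n, src, dst, failed, step):
    """Walk the ring one hop at a time in the given direction (+1 or -1),
    returning False if any link along the way has failed."""
    i = src
    while i != dst:
        j = (i + step) % n
        if frozenset((i, j)) in failed:
            return False
        i = j
    return True

def reachable(n, src, dst, failed):
    # Traffic is sent clockwise and counterclockwise simultaneously;
    # either surviving path keeps the connection alive.
    return path_ok(n, src, dst, failed, +1) or path_ok(n, src, dst, failed, -1)

ring = 6                        # six devices on the ring
cut = {frozenset((2, 3))}       # one cable fault, between nodes 2 and 3
print(reachable(ring, 0, 4, cut))  # True: the counterclockwise path survives
```

Cutting a second link on the opposite arc is what it takes to actually isolate a device, which is the reliability argument for rings over simple daisy chains.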
The other key topology issue is the notion of segmentation, or using Virtual LAN software to logically—not physically—carve up a plant network into separate segments, rather than run the whole thing as one big network. Logical divisions are done in software, not hardware, and thus can be more easily changed when needed. Creating network segments by work group or other designation can enable large networks to be more easily managed and can serve to confine network problems to one area, which helps in identification, says Broussard.
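The confinement benefit Broussard describes can be modeled simply. The sketch below is hypothetical Python, with made-up VLAN and device names, showing the essential property: a broadcast (or a broadcast storm) from one device is only seen by members of the same logical segment.

```python
# Illustrative VLAN membership table; names are invented for the example.
vlans = {
    "packaging": {"plc-1", "hmi-1", "drive-1"},
    "mixing":    {"plc-2", "hmi-2"},
}

def broadcast_domain(device):
    """Return the set of devices that see this device's broadcasts:
    only the members of its own VLAN, not the whole plant network."""
    for members in vlans.values():
        if device in members:
            return members - {device}
    return set()

print(sorted(broadcast_domain("plc-1")))  # ['drive-1', 'hmi-1']
```

Because the grouping lives in a software table rather than in the cabling, moving a device to another segment is a configuration change, not a rewiring job.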
Determining the degree of redundancy and segmentation to build in really comes down to determining risk, points out Derald Herinckx, remote I/O product manager for GE Fanuc. “Users need to make decisions about the complexity of the network based on the amount of risk they are willing to tolerate,” he says. In some cases, the application, such as logging data about performance, doesn’t require full redundancy. “If you are looking to concentrate data, there are applications with store and forward capabilities that will let you backfill a data log if the network goes down,” says Herinckx. “Redundancy is expensive, so you should use it only where you need it.”
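The store-and-forward approach Herinckx mentions amounts to a local buffer that queues samples during an outage and flushes them, oldest first, when the link returns. A minimal sketch, assuming a simple in-memory buffer (real products would persist the buffer to local storage):

```python
from collections import deque

buffer = deque()      # samples held locally while the network is down
central_log = []      # stand-in for the central data concentrator

def record(sample, network_up):
    """Always buffer locally; forward the backlog whenever the link is up."""
    buffer.append(sample)
    if network_up:
        while buffer:                      # backfill oldest-first
            central_log.append(buffer.popleft())

record({"temp": 71.2}, network_up=False)   # outage: sample held locally
record({"temp": 71.5}, network_up=True)    # link restored: both forwarded
print(len(central_log))  # 2
```

No sample is lost and ordering is preserved, which is why a non-redundant network can be acceptable for data logging even when it wouldn’t be for control.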
Adding VLANs and segmentation to a network will add some cost and complexity, but for networks that absolutely must be fault-tolerant, the trade-off makes sense.
Even a well-designed, fault-tolerant Ethernet network will experience problems that threaten uptime, however. This has been true of past networks as well, but with the move to Ethernet, manufacturers face both the opportunity and the challenge of addressing the nagging issues of the past.
“Many customers for years have put up with periodic failures and periodic data errors because their networks are not performing optimally,” says Scott Lapcewich, director of service products for Rockwell Automation, in Milwaukee. “As networks become more complex and more critical, it will cost them more and more to continue to operate in that manner.”
The most common and frustrating issues for plant networks are the intermittent problems—those that occur for a brief period, then disappear, only to reappear at an unpredictable time.
“Intermittent problems are the leading cause of network frustration,” says Network Vision’s Fondl, who paints a picture of a typical scenario: A purchasing agent buys low-cost Ethernet network interface cards for an operator station, not thinking about the fact that the $13 cards aren’t robust enough to work around the clock. For the first six months, the RJ-45 connectors in the card work fine, but after that, they start being affected by the vibration of a nearby motor. The outage only happens when the motor starts, so by the time a network technician checks it out, everything is fine—until the next time.
One way to address such problems is through network monitoring, in which the performances of individual links in a network are constantly watched, and any failure is traced to its root cause in order to be fixed.
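The core of such monitoring is to probe each link on a schedule and timestamp every failure, so an intermittent fault leaves a trail even after it clears. The sketch below is hypothetical; `check_link` stands in for a real probe (a ping or SNMP query against a switch port), and the link names are invented.

```python
import time

failure_log = []  # (timestamp, link) entries survive after the fault clears

def poll(links, check_link):
    """Probe every link once; record any failure with a timestamp so an
    intermittent fault can be correlated later (e.g. with motor starts)."""
    for link in links:
        if not check_link(link):
            failure_log.append((time.time(), link))

# Simulated probe: the operator station's port is flapping.
def fake_probe(link):
    return link != "switch3-port7"

poll(["switch1-port2", "switch3-port7"], fake_probe)
print(failure_log[0][1])  # switch3-port7
```

Run on a loop, a log like this turns the “everything was fine by the time the technician arrived” scenario into a pattern that can be matched against events such as a nearby motor starting.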
Another way to limit such problems is to just follow directions. Lapcewich says plant operations personnel need to pay more careful attention to what is being placed on the network and make certain that devices are being used as they were intended.
Fondl believes that plant operations managers who are taking the Ethernet plunge need to get to know their company’s IT staff and work with them to address the critical issues. But everyone admits that can sometimes be a tricky matter.
“There might be some turf issues—plant guys not wanting the IT guys in their hair,” says Lapcewich. “These guys that run these plants are accustomed to running their own shows. This will bring more corporate influence into the plant environment.”
“We think it is going to be cost prohibitive for companies to switch to Ethernet quickly,” says Mike Bryant, head of the Profibus Trade Organization, which supports one of the many widely deployed bus network protocols. “There is a whole transition that has to happen. And there are still a lot of questions to be solved.”
Reason to wait?
The Profibus and Fieldbus organizations are both developing Ethernet versions of their protocols, and although these won’t be available until at least 2004, some organizations may prefer to wait for them. Even Kok, whose company, GE Fanuc, has standardized on Ethernet and believes it is the network of the future for the plant floor, admits there will be resistance, in part because of concerns that massive change breeds massive unreliability.
“If you have a fieldbus network, then you will tend to continue going in that direction,” says Kok. “If there is a new plant or new system being built, then there is greater likelihood of going toward Ethernet now, especially if the application is being driven from the IT perspective, because the IT people will definitely prescribe Ethernet.”