On The Coattails Of Interoperability

Thanks to standards ranging from ISA-88 to ISA-95 and beyond, interoperability has become easier than ever—and with it, performance management.

There are a number of ways to map out the world of performance management. One of the more scenic ways is by following the many highways and byways of interoperability. Performance management consists of the means and methods of monitoring enterprise activity to track progress toward specific goals. Taking the broadest possible view of a manufacturing business, these goals usually include:

o designing and making products that people want

o making sufficient numbers of those products to satisfy the need in a timely way

o making enough money to stay in business (with the ideal of making more than enough, i.e., achieving profitability).

On the surface, performance management and interoperability seem to have few links. Interoperability describes activity among devices, machines and computers (and their respective control or operating systems) as a capacity for communication and data transfer. When two devices are interoperable, commands can pass back and forth between them and be carried out by either. ISO/IEC 2382-01, a standard of the International Organization for Standardization and the International Electrotechnical Commission, refines this definition as “the capability to communicate, execute programs, or transfer data among various functional units in a manner that requires the user to have little or no knowledge of the unique characteristics of those units.”

Interoperability can apply to entire computing environments, as when manufacturing information technology (IT) systems interoperate with enterprise IT. Here, consistent and generally standardized data flows along standard communication paths. Importantly, the concept can be extended to people and organizations—where it becomes business process interoperability. To make it work in this context, you must make some basic assumptions, including the assumption that people want to share information and data, and that they want to pull together for a common cause.

Safe to say, you can assume these in an effective organization. Still, those who have traveled the road to effectiveness can tell you that it may be dangerous to assume either without some stock-taking and considerable training. Turf wars, apparently a natural part of any group, can make sharing difficult. Many other forces—including prods meant to enhance performance—can push an organization into levels of internal competitiveness that cripple cooperation.

Of course, these issues can be solved—all it takes is a lot of work. But there are follow-on challenges, one of which is the fact that business systems and manufacturing systems in effect speak different languages. Defining boundaries between enterprise and manufacturing systems is important in this context, and is a primary focus in the development of standards such as the Instrumentation, Systems and Automation Society’s ISA-95, when those standards address data traveling from manufacturing to the enterprise level and vice-versa.

But, again, these issues can be solved. Whether technical or human, the primary focus of interoperability is on getting things done—that is, it is desirable because one group or controller wants a second group or device to perform tasks. When interoperability exists, hooks for performance management come along for the ride, just as a highway from Chicago to New York passes plenty of cellular phone towers that let you communicate with just about anyone about anything. Because interoperability provides a technical infrastructure for information flow and monitoring, there is every reason to take advantage of these available conduits to skim off information useful for performance management.

Interoperability was not part of the picture 30 years ago. In those days, if you wanted people whaling away at keyboards in the same way everywhere, you bought one brand of computer and put compatible terminals everywhere. And it was not the dream of controls manufacturers in that day either—if you needed a consistent system for multiple machines, you either built your own or bought machines from the same maker.

Todd Stauffer, marketing manager for process automation systems, at Alpharetta, Ga.-based vendor Siemens Energy & Automation Inc., points out, “There are three eras of interoperability. The first was distributed, proprietary control systems—everyone did things differently. The second phase came about because customers wanted to choose best-in-class approaches, which meant tying systems from different vendors together. It was a great idea at the time, but a lot of effort went into making things work together, especially over equipment life cycles. You got the joy of rewriting things whenever a system revision was released. The third phase is openness or interoperability, and to a large extent, we’re there today.”

Much of the impetus toward interoperability in manufacturing began in earnest in the 1980s, culminating today in the ISA-88 batch control and ISA-95 enterprise-control integration standards. In addition, OPC, an open communications standard, and similar initiatives have resulted in standards and technologies for open connectivity among industrial automation and enterprise systems. Talk about standing on the shoulders of giants—the expenditure of gray cells to bring about cohesive standards is truly stupendous, and not many would fault the standards writers if the results are not yet 100 percent cohesive, top to bottom. Fortunately, even if the industry still awaits heaven on earth, interoperability has become possible.

Says Matt Bauer, director, information systems marketing, at Milwaukee-based vendor Rockwell Automation Inc., and chair, MESA International (for Manufacturing Enterprise Solutions Association), “There is definitely a movement within manufacturing today, a momentum around broad-based standards. For a standard to work, it has to enjoy agreement among many people and many interests, and that has taken time to achieve. Fortunately, progress has accelerated dramatically over the last few years, with vendor after vendor locking into standards and building them into their product sets.”

Can a manufacturer today really buy disparate controls, computers and equipment, then simply turn them loose on the network? The answer is yes—for some. Siemens’ Stauffer notes, “Plug-and-play is there for a range of devices, but you should expect to do a certain amount of configuration. The good news is that it no longer requires an expert in System A and a 20-year-veteran of Device B, plus 10,000 lines of machine code and home-built look-up tables.”

Many systems permit relatively easy configuring through profiles, allowing a dialog-driven, fill-in-the-slot process for adding a new device or controller to a system. Such a profile defines the basic behavior and applicability of a device, and captures the metrics and diagnostics available. It also distinguishes the standard features from a more or less circumscribed range of vendor-specific, value-added features. Behind the scenes, the integrator’s inputs are translated into the calls and messages that will flow into and out of the device. The key time and maintenance advantage of this profiling approach is that it requires no custom programming. Once created, a profile draws on standards-based communication capabilities, and when it is updated, there is no need to rewrite code.
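The profile idea can be sketched in a few lines of Python. This is a minimal illustration, not any vendor’s actual schema: the field names, the flow-meter example and the READ-request format are all hypothetical.

```python
from dataclasses import dataclass, field

@dataclass
class DeviceProfile:
    """Hypothetical profile: describes a device without custom integration code."""
    device_type: str                                       # basic behavior and applicability
    metrics: list = field(default_factory=list)            # metrics the device exposes
    diagnostics: list = field(default_factory=list)        # diagnostics available
    vendor_extensions: dict = field(default_factory=dict)  # circumscribed value-added features

def subscription_requests(profile):
    """Translate a profile into the standards-based read requests issued behind the scenes."""
    return [f"READ {profile.device_type}/{tag}"
            for tag in profile.metrics + profile.diagnostics]

# Adding a new device means filling in slots, not writing machine code.
flow_meter = DeviceProfile(
    device_type="flow_meter",
    metrics=["flow_rate", "totalized_volume"],
    diagnostics=["sensor_fault"],
    vendor_extensions={"self_calibration": True},  # outside the standard; may be inaccessible
)
print(subscription_requests(flow_meter))
```

Updating the profile regenerates the requests; nothing downstream has to be rewritten, which is the maintenance advantage the profiling approach claims.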

According to Stauffer, profiles offer four primary benefits. First, interchangeability is easier than with soft- or hard-coded links. “Switching from one vendor’s device to another has never been easier,” he says. Second, less time is required to integrate a device into a cell, facility or system. Third, less training is involved for the integrating group, and often, for end-users as well. Finally, troubleshooting is easier than with hand-coded integration efforts. “There is a tradeoff,” Stauffer cautions. “Some vendor-specific features that fall outside the standards will not be accessible.”

Regardless of the specific tactics for integration, the range of devices and domains that can be made interoperable has broadened considerably in the last few years. “We are seeing genuine interoperability among all three levels important to manufacturing,” says Kevin Tock, vice president, manufacturing execution systems and production and performance management, at Wonderware, a Lake Forest, Calif., automation software vendor. “It’s now possible to build it into controls layers, into MES (manufacturing execution systems) and manufacturing IT systems, on up to business systems levels. It’s significant that SAP (the Walldorf, Germany-based enterprise software supplier) is now committed to ISA-95 in response to users over the last three years.” He adds that industry is seeing the culmination of many development efforts: “The automation world is no longer an all-one-vendor or nothing proposition. It takes less and less effort these days to extract information from controls, or make a home-grown system co-exist with third-party technologies.”

In an interesting twist, now that all sorts of data can flow across multiple levels, systems analysis requires ever-higher, ever more comprehensive systems views. No one function can possibly see the whole shebang. The reality is that there are essential differences in the membership and thought processes of the team tasked with integrating one workcell of a dozen machines or controls, compared to the team trying to integrate an entire factory. Likewise, there are big differences in outlook between the one-factory team and the group required to structure a global production organization. And the top layer, interconnecting an entire enterprise, from general accounting ledger to incoming raw materials, takes yet another set of perspectives and methods.

“It’s best to start as high as you can, with a clear picture of the end state you want,” says Joseph Coniker, managing director, enterprise performance management for MarketSphere, a consulting group that provides strategic, operational and technology consulting to a range of service and manufacturing clients. “You need to define how you want to manage and report on processes. And you want to get a clear sense of how much interrelationship between functions you need. Granted, it’s good to have best-of-breed approaches and solutions, as well as broad-based interoperability and widespread coordination. But in the end, you have to know how you want to manage to make sense of the whole.”

He continues, “Once everything is in place and you have all the data at your fingertips, the question becomes what to do with it. To answer that, you have to know, really know, what’s driving your business. You might gain high value from upping production throughput, but we’ve had clients who improved their bottom line by a million dollars a year simply by streamlining collections from customers.”

For Coniker, building a new system structure begins with recognition of the operational and organizational investments that work in the current situation, then leveraging those. “You can then move on to a data assessment,” he says, “putting together a comprehensive picture of the systems, hardware, tables, transactions, data warehousing, history—the key is to understand what data exists, and why.”

Final decisions can then flow from a strategic decision about which function to tackle first. “Where is your best, quickest return?” Coniker asks. “Customer systems? Finance? Human resources? Manufacturing? The real key is to take one step now. Many groups develop a point of fear around all the data, all the systems, all the approaches, all the needs. They end up feeling that, if they move, something will break. The key is to start, break something if need be, but just get started.”

Stauffer agrees that “the most successful integration efforts begin with business requirements. Control engineers in isolation can specify well-defined systems, but the system might resist integration with management domains. It’s better when every stakeholder is involved from the start, and every standard and specification is compared against projected needs. Ease of installation is nice to have, but not if elements are painful to revise, hard to operate or difficult to maintain.”

Wonderware’s Tock suggests awareness of the potential gap between IT and manufacturing. “Manufacturing can gain from the standardization and technology rollout approaches that enterprise IT groups have had for many years,” he says. “And IT can gain by understanding the kinds of real-time data that have to flow freely in manufacturing. The fastest-moving users we’ve seen, the most effective companies, have built a manufacturing IT function specifically to bridge the gap in a way that’s effective for both enterprise IT and automation.”

And Stauffer points out potential chasms at lower levels as well. “In the past at many facilities, distributed control systems and programmable logic controllers have been the domain of electricians, while process control systems have been under maintenance. This rigid allocation of resources with its boundaries and restrictions has begun to break down, particularly when there is a defined manufacturing IT function overseeing both.”

A final benefit of a well-defined manufacturing IT function lies in the opportunity to build systems expertise based on direct experience in what manufacturing really needs, something that can be hard for enterprise IT to discern. Stauffer adds, “A strong manufacturing systems group can also help overcome the almost inevitable pressures from business realities—for example, a solid knowledge base can compensate for short windows for installation, because experience cuts project times.”

Rockwell’s Bauer concurs. “We are definitely seeing the familiar patterns of traditional IT being applied to the manufacturing side—standard, approved equipment, well-planned deployment. The move is changing the nature of spending patterns and the culture within manufacturing. In particular, the move to standards relates to interoperability, because standards permit in-depth connectivity.”

Despite hewing to ISA-95 and OPC, each supplier provides its own wrinkle. Some emphasize control bus architectures, where data flows are continuous and tapped by devices and users as required. Others prefer a publish-and-subscribe approach, where specific data is bundled and delivered to the nodes that have subscribed to it. The end result is not much different. Bauer says, “Interoperability is a way to ensure that groups can share information, seamlessly, efficiently, to the right places, in the right forms.”
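The publish-and-subscribe pattern can be shown in miniature. This toy broker is purely illustrative—no vendor’s actual API, and the tag names are invented—but it captures the essential move: data goes only to the nodes that registered interest in it.

```python
from collections import defaultdict

class Broker:
    """Toy publish-and-subscribe hub: publishers push tagged data,
    and only the registered subscribers for that tag receive it."""
    def __init__(self):
        self.subscribers = defaultdict(list)  # tag -> list of callbacks

    def subscribe(self, tag, callback):
        self.subscribers[tag].append(callback)

    def publish(self, tag, value):
        for callback in self.subscribers[tag]:
            callback(tag, value)

broker = Broker()
dashboard = []  # a performance dashboard subscribes to one tag
broker.subscribe("line1/temperature", lambda tag, v: dashboard.append((tag, v)))

broker.publish("line1/temperature", 72.5)  # reaches the dashboard
broker.publish("line1/pressure", 1.3)      # no subscribers; quietly dropped
print(dashboard)
```

The contrast with a control bus is that nothing flows continuously here; each bundle of data is routed only where a subscription exists.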

Interestingly, once all is connected, the free flow of information may not, in and of itself, be a blessing. Tock notes that heavy flows of performance data can create an overload. “If people have no information about their impact, they’ll still do their jobs,” he says, “though they may or may not have a clue about what to do better. On the other hand, if too much performance management data is handed to them, their eyes may glaze over. They’ll end up not knowing what they’re looking at. It’s often better to have less information than too much.”

The upshot is that, with interoperability in full swing, data reduction systems such as performance dashboards or trend displays must boil down ever-increasing data flows from ever-wider ranges of devices, controls and systems. Wonderware offers a set of precepts for integration that give some insight not only for tying elements together but also for data reduction:

o Don’t speak of technology before speaking of business processes

o Define the border—and overlaps—between MES and enterprise resource planning (ERP) systems

o Determine who owns what data, especially master data, recipes and other key manufacturing requirements

o Determine the best source of information

o Work out data transformations between ERP and shop floor, translating temperatures, levels, speeds, work orders into customer orders, costs, delivery dates, performance indicators.

Once system borders, information sources and data owners are clear, performance management indicators can be defined across the board.
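The last precept on the list—transforming shop-floor readings into enterprise-facing indicators—might look something like the sketch below. The record fields, work-order numbers and indicator names are all hypothetical, chosen only to show the shape of the translation.

```python
# Hypothetical shop-floor records: raw counts and runtimes per work order.
shop_floor = [
    {"work_order": "WO-1001", "units_made": 480, "units_scrapped": 20, "runtime_min": 420},
    {"work_order": "WO-1002", "units_made": 300, "units_scrapped": 30, "runtime_min": 300},
]

def to_kpis(records):
    """Roll raw shop-floor counts up into ERP-facing performance indicators."""
    kpis = []
    for r in records:
        total = r["units_made"] + r["units_scrapped"]
        kpis.append({
            "work_order": r["work_order"],
            "first_pass_yield": r["units_made"] / total,          # quality indicator
            "units_per_hour": r["units_made"] / (r["runtime_min"] / 60),  # rate indicator
        })
    return kpis

for row in to_kpis(shop_floor):
    print(row)
```

The point is the direction of the mapping: temperatures, speeds and piece counts live on the shop floor; yields, rates and delivery commitments are what the enterprise side consumes.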

Bauer says, “We are seeing an evolution, with plant information systems that are more comprehensive than anything up to now. We’re starting to see very high-level integration frameworks being seriously considered for plant-wide information systems. It’s an excellent opportunity for an SOA (service-oriented architecture) approach, where there’s a single flow of data, a single version of the truth, if you will. From this single flow, information can be extracted and sent to each functional area that needs it.”

Tock suggests that the primary performance management data objective is awareness of and control over asset utilization. “Many large companies are well on the path of making any product, anywhere. The key is to get everyone in synch with consistent data. If you have four plants, one in the United States, one in Europe, one in South America and one in China, critical concepts, such as capacity, overall efficiency, distribution and labor costs—all these have to be defined the same way.”
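One way to get four plants “in synch” on a concept like overall efficiency is to define the metric once, in one shared formula, and feed every plant’s figures through it. The sketch below uses the standard overall equipment effectiveness (OEE) decomposition—availability times performance times quality—with invented plant numbers for illustration.

```python
def oee(planned_min, downtime_min, ideal_rate_per_min, units_made, units_good):
    """Overall Equipment Effectiveness = availability x performance x quality.
    A single shared definition keeps 'efficiency' comparable across plants."""
    runtime = planned_min - downtime_min
    availability = runtime / planned_min
    performance = units_made / (ideal_rate_per_min * runtime)
    quality = units_good / units_made
    return availability * performance * quality

# Two plants report through the same formula, so the numbers mean the same thing.
us_plant = oee(planned_min=480, downtime_min=48, ideal_rate_per_min=2,
               units_made=800, units_good=760)
cn_plant = oee(planned_min=480, downtime_min=24, ideal_rate_per_min=2,
               units_made=820, units_good=800)
print(round(us_plant, 3), round(cn_plant, 3))
```

Without the shared definition, one plant might quote availability alone and another quote yield, and the comparison Tock describes would be meaningless.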

In the final analysis, interoperability results in software, departments and systems that work together. The growing interoperability afforded by standards and open technical platforms provides top-to-bottom data flows that—intelligently abstracted and reduced into key indicators—offer new capabilities, not just to operations, but to performance management as well.

For more information, search keywords “interoperability” and “performance management” at www.automationworld.com.
