Control architecture has changed little over the past 40 years. However, advances in processing power, network technologies, and software will enable greater value for end users in the near future by changing the way controllers are implemented and how they interface with the field. ARC Advisory Group believes that these new controller architectures, which support our evolving collaborative process automation system (CPAS) vision, will improve simplicity, flexibility and efficiency.
With few exceptions, the basic architecture of a process (DCS) or discrete (PLC) control system consists of a set of I/O cards logically connected or assigned to a single control processor housed in dedicated hardware. This has been the general state of affairs since the first digital controllers were introduced over 40 years ago. Initial control system incarnations consisted of a card rack in which a local real-time control processor communicated with a set of I/O cards directly coupled to the same backplane.
As network technologies advanced, systems began to employ architectures in which a single control processor might support several card racks of I/O connected via proprietary, deterministic protocols. Still widely employed today, this predominant architectural approach is effective, but potentially wasteful.
Generally speaking, every control processor is limited by three main parameters: the ability of the controller to handle I/O scans, diagnostics and program execution in a timely fashion; the capacity to store code, I/O maps and program variables; and the ability to handle the data transfer with the I/O and the Level 2 network. This often results in wasted potential.
An application may reach the limit of the number of supported I/O for a single controller, yet the controller may be able to support far more logic processing than the application requires. This means the user has probably paid for processing capacity that is either not required or cannot be fully used. Conversely, logic-intensive applications, such as some batch applications, reduce the amount of I/O the control processor can support. If the application requires high availability, the extra hardware and software required amplify the waste.
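The sizing mismatch described above can be made concrete with a small sketch. This is a hypothetical illustration only: the class, the capacity figures, and the notion of "logic blocks" are invented for the example, not taken from any vendor's controller specifications.

```python
# Hypothetical controller sizing model; all names and limits are
# invented for illustration, not vendor specifications.
from dataclasses import dataclass

@dataclass
class Controller:
    max_io_points: int      # I/O connectivity limit
    max_logic_blocks: int   # program-execution capacity

    def utilization(self, io_points: int, logic_blocks: int) -> tuple[float, float]:
        """Return (I/O utilization, logic utilization) as fractions."""
        return (io_points / self.max_io_points,
                logic_blocks / self.max_logic_blocks)

# An I/O-heavy application: the controller is "full" on I/O while most
# of the logic capacity the user paid for sits idle.
ctrl = Controller(max_io_points=2000, max_logic_blocks=10000)
io_util, logic_util = ctrl.utilization(io_points=2000, logic_blocks=1500)
print(f"I/O: {io_util:.0%}, logic: {logic_util:.0%}")  # I/O: 100%, logic: 15%
```

A logic-intensive batch application simply inverts the picture: logic utilization saturates first, stranding paid-for I/O connectivity instead.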
For remote I/O applications, the user may be required to have multiple racks of co-located I/O assigned to different control processors. Alternatively, the user might choose to have all the field data pass through one processor and pass those values (or other relevant field data) to another controller with available processing capability.
Controller and I/O interfacing
Unlike most current architectures, a new approach would have a common I/O network shared by all controllers and all field devices. This network would support a deterministic communication standard and allow any controller to address any field device. It would even allow multiple controllers and/or other applications to access the same data without intermediaries and permit peer-to-peer communications between field devices. The I/O network would support both traditional (analog) and intelligent (digital) field devices. Because such a network would support peer-to-peer communications, some applications would be implemented at the field level.
Through this decoupling of previously dedicated I/O and controllers, end users would be able to buy the appropriate amount of I/O for each physical area without being constrained by the controllers. Controllers would be less likely to sit with unused processing capacity or unused I/O connectivity. Details that would need to be worked out include the number of network connections an I/O device or a controller could handle, network efficiencies, speed impediments, and how to migrate existing users. However, ARC does not believe that these are insurmountable challenges.
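One way to picture the shared I/O network is as a publish/subscribe fabric in which any controller (or application) can address any field signal without an intermediary controller owning the I/O. The sketch below is a minimal, assumed model: the class, the tag naming, and the callback interface are illustrative, not any real fieldbus protocol.

```python
# Minimal sketch of a shared I/O network as publish/subscribe;
# the class and the "FT-101.PV" tag are illustrative inventions.
from collections import defaultdict

class IONetwork:
    def __init__(self):
        self.subscribers = defaultdict(list)  # signal tag -> callbacks
        self.values = {}                      # last published value per tag

    def publish(self, tag, value):
        """A field device (or controller) publishes a value onto the network."""
        self.values[tag] = value
        for callback in self.subscribers[tag]:
            callback(tag, value)

    def subscribe(self, tag, callback):
        """Any controller or application may listen to any field signal."""
        self.subscribers[tag].append(callback)

net = IONetwork()
received = []
# Two independent controllers subscribe to the same flow transmitter,
# with no intermediary controller relaying the value.
net.subscribe("FT-101.PV", lambda tag, v: received.append(("ctrl_A", v)))
net.subscribe("FT-101.PV", lambda tag, v: received.append(("ctrl_B", v)))
net.publish("FT-101.PV", 42.7)
print(received)  # [('ctrl_A', 42.7), ('ctrl_B', 42.7)]
```

The same mechanism covers peer-to-peer use: a field device can subscribe to another device's tag directly, which is what would let some applications run at the field level.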
Control in the cloud
The cloud, as used in the IT world, isn’t deterministic enough, available enough or fast enough for most Level 2 control applications, though it may be in the future. However, the decoupled architecture would enable a “local cloud” or virtualized control platform much like today’s virtualized IT environments. This architecture could meet the requirements of determinism, availability and speed of response.
In this scenario, ARC envisions a set of hardware hosting multiple real-time control instances or hosting a single control entity that grows with the application. The hardware would run a real-time virtualization platform similar to the corresponding IT equivalent, and could be dispersed throughout the facility. This platform would ensure real-time communications between the virtualized controller instance(s), and between the controller instance(s) and the I/O. In a manner similar to IT virtualization, the platform would also handle load balancing and failure mode recovery.
The hosted controller instance(s) would run in a real-time manner similar to current controller implementations, with each instance running an execution environment similar to today's equivalents. From a user standpoint, the interface to these controller instances could be nearly identical to today's. Because of the purely software nature of the controllers, they could be licensed just like any other virtualized software platform, "spun up" nearly as quickly, and the virtualized instances could be managed with tools similar to current IT virtualization tools.
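The platform behaviors described here, spinning up instances, load-balanced placement, and failure-mode recovery, can be sketched in miniature. This is an assumed model of how such a "local cloud" might schedule controller instances; the class, method names, and host names are invented for illustration and omit the real-time guarantees the actual platform would have to provide.

```python
# Illustrative sketch of a virtualized control platform; the API and
# placement policy are invented for the example, not a real product.
import itertools

class ControlPlatform:
    def __init__(self, hosts):
        self.hosts = hosts        # physical hosts dispersed through the facility
        self.instances = {}       # controller instance id -> host
        self._ids = itertools.count(1)

    def _load(self, host):
        return list(self.instances.values()).count(host)

    def spin_up(self):
        """Place a new controller instance on the least-loaded host."""
        host = min(self.hosts, key=self._load)
        inst = f"ctrl-{next(self._ids)}"
        self.instances[inst] = host
        return inst

    def fail_over(self, failed_host):
        """Restart instances from a failed host on surviving hosts."""
        survivors = [h for h in self.hosts if h != failed_host]
        for inst, host in self.instances.items():
            if host == failed_host:
                self.instances[inst] = min(survivors, key=self._load)

platform = ControlPlatform(hosts=["host-1", "host-2"])
a, b = platform.spin_up(), platform.spin_up()  # load-balanced placement
platform.fail_over("host-1")                   # instances recover on host-2
print(platform.instances)  # both instances now on host-2
```

In a production system the placement policy would also have to respect CPU headroom, I/O network latency, and redundancy constraints, but the shape of the management interface would resemble familiar IT virtualization tooling.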
>> Mark Sen Gupta, firstname.lastname@example.org, is senior consultant at ARC Advisory Group. He has more than 24 years of expertise in process control, SCADA and IT applications with companies such as Mobay, Honeywell, Plant Automation Services (PAS), CygNet Software and Invensys. He holds bachelor’s and master’s degrees in electrical engineering from the Georgia Institute of Technology.