Vision and Control on One CPU

Nov. 19, 2020
Though combining multiple control functions on one controller is not particularly new, the high compute and bandwidth requirements of machine vision have typically kept it on a separate controller. Technology advances are changing that reality.

Combining what have long been separate automation functions into a single control environment has been an ongoing industrial automation trend for years now. Two of the most prominent examples have been the combination of safety and control applications in one controller and the development of platforms that connect a range of devices and software in one Internet of Things (IoT) environment for monitoring and control.

To get insights into this trend, specifically around the combination of vision and control technologies, we connected with Daymon Thompson, automation product manager for Beckhoff Automation, for a recent episode of the “Automation World Gets Your Questions Answered” podcast series.

Explaining this trend of combining automation technologies, Thompson said it typically occurs in one of two ways. “One is where the functions are housed in separate pieces of hardware connected via a backplane. In this case, they each have separate CPUs and essentially remain separate pieces; they just look good together in the cabinet,” he said. “The other approach is to recognize that every one of those CPUs has software running on it to manage its function. Here, the developer takes those pieces of software, integrates them onto one bigger CPU, and then combines them into one programming environment. That's the bigger trend we see with several industrial control suppliers. And it’s really the way I think every automation controller will end up going because the added performance of this approach can be measured in the controller’s total cycle time and update rate. And that has obvious beneficial effects on overall machine production levels and total machine output.”

Integrated Architectures 
A key aspect to look for in these integrated technology environments is how they are structured, advised Thompson. One method is to combine the technologies but close them off so that only the supplier’s own technology works with them. “And, if one day you need more than that, you'll likely have to add another PC and expand it with data logging or database connections, for example,” he explained.

The other method is what Thompson called the “open path,” wherein the customer can add to the system. “If the customer wants to do things like implement their own C# front end, or advanced data logging, or third-party software, the open path allows for those kinds of things,” he said. “And the open path makes it so much easier to combine new features. For example, with IoT you can take an existing controller and, rather than have to add a piece of hardware to the cabinet, you can just add another software module to the existing CPU using a REST API or MQTT. You can also add functionality for things like new advanced motion capabilities or linear transport system collision avoidance.” 
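To illustrate the “add a software module, not hardware” idea, the sketch below publishes a few controller values over MQTT. It is a minimal, hypothetical example in Python using the paho-mqtt client; the broker address, topic name, and tag values are assumptions, not details from Thompson’s description, and a TwinCAT system would use its own IoT modules rather than a stand-alone script like this.

```python
# Minimal sketch: publishing controller data over MQTT instead of adding hardware.
# Broker address, topic, and tag values are hypothetical; a real TwinCAT system
# would use its own IoT modules rather than this stand-alone script.
import json
import time

import paho.mqtt.client as mqtt

BROKER = "broker.example.local"   # assumed broker on the plant network
TOPIC = "plant/line1/controller"  # assumed topic naming scheme

client = mqtt.Client()
client.connect(BROKER, 1883, keepalive=60)
client.loop_start()  # handle network traffic in a background thread

while True:
    # In a real system these values would come from the controller's process image.
    payload = json.dumps({
        "cycle_time_us": 250,
        "parts_per_minute": 118,
        "vision_ok": True,
        "timestamp": time.time(),
    })
    client.publish(TOPIC, payload, qos=1)
    time.sleep(1.0)
```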

Connecting Vision and Control
At Beckhoff, a major area of focus around combining automation technologies has been on bringing machine vision into the overall machine control environment. Thompson explained that Beckhoff’s approach here has been about “minimizing system architecture complexity while remaining very open—allowing future functionality to be added without having to exchange or throw away hardware.”

He said Beckhoff’s objective here is not just to bring software objects together in an open environment, but to ensure that they're “seamlessly integrated inside the engineering environment so that there is one controls program for the machine, all its settings, and all the needed machine software, like HMI, PLC code and database, motion, servo drive configuration, IoT communications—everything in one product.” 

Thompson explained that Beckhoff has done this by opening up “the real time in our controller (TwinCAT 3 Real Time) to these various modules to the extent that customers can use the same fundamental architecture and APIs that Beckhoff uses to plug in different components, like C++ code or MATLAB/Simulink code.”

“That's the exact strategy we've taken with our vision integration,” said Thompson. “We've taken machine vision software algorithms and plugged them into one of these software modules, and we execute these right inside the real time. This is ideal from a controls engineering standpoint because the developer tools have everything needed for vision, such as connecting to the camera and configuring camera parameters. It all lives in the same environment as the PLC and drives.”

Because configuration of all these functions can be complicated, Thompson said it “all boils down to the PLC library. Our PLC library for vision functions, for example, has more than 500 functions built into it. And those functions can be used to address everything from simple manipulation or preparation of the image to contour finding and matching, all of which can be used in any PLC programming language.”
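The article does not quote the specific function blocks in Beckhoff’s vision library, but the general pipeline Thompson describes—preparing an image, finding contours, and matching them—can be sketched conceptually. The example below uses Python and OpenCV purely as an analogue; the function names, threshold values, and reference contour are assumptions and are not the TwinCAT Vision API.

```python
# Conceptual analogue of an image-preparation -> contour-finding -> matching pipeline,
# sketched with OpenCV. This is NOT the TwinCAT Vision PLC library; names and
# parameter values are illustrative assumptions.
import cv2

def find_and_match_contours(image_path: str, reference_contour):
    # Image preparation: grayscale, smoothing, and a fixed threshold.
    image = cv2.imread(image_path)
    gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
    blurred = cv2.GaussianBlur(gray, (5, 5), 0)
    _, binary = cv2.threshold(blurred, 128, 255, cv2.THRESH_BINARY)

    # Contour finding on the prepared image.
    contours, _ = cv2.findContours(binary, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)

    # Contour matching: lower scores indicate a closer shape match.
    results = []
    for contour in contours:
        score = cv2.matchShapes(contour, reference_contour, cv2.CONTOURS_MATCH_I1, 0.0)
        results.append((contour, score))
    return sorted(results, key=lambda item: item[1])
```

In the integrated case Thompson describes, the equivalent steps would be carried out by vision functions called directly from a PLC programming language, inside the same engineering environment as the rest of the machine logic.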

On the runtime side, because the algorithms have been implemented into the software modules, all tools programmed in the PLC are executed in real time, “which means very fast execution,” Thompson said. “Machine vision, for example, can be synchronized very closely with a motion controller, robotics, and I/O. And because all this is done on software inside the PLC, users can change vision algorithms, change vision parameters, and add new algorithms or tool sets in the middle of an image processing sequence, and then implement them while the machine is running. It makes it super-efficient engineering, especially for OEMs.”
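One way to picture the “change vision parameters while the machine is running” point is a processing loop that re-reads its parameter set on every cycle, so an online change takes effect on the next image without stopping the sequence. The sketch below is a simplified, hypothetical illustration of that pattern in Python; it is not Beckhoff’s runtime mechanism, and the parameter names and placeholder algorithm are assumptions.

```python
# Simplified illustration: the processing loop re-reads its parameters each cycle,
# so they can be changed while the loop keeps running. Hypothetical names and
# values; not the TwinCAT online-change mechanism.
import threading

vision_params = {"threshold": 128, "min_blob_area": 200}
params_lock = threading.Lock()

def set_param(name: str, value) -> None:
    # Called from an HMI or engineering tool while production continues.
    with params_lock:
        vision_params[name] = value

def process_image(image, params) -> bool:
    # Placeholder for the actual vision algorithm (thresholding, blob analysis, ...).
    return True

def processing_loop(get_next_image) -> None:
    while True:
        image = get_next_image()
        with params_lock:
            params = dict(vision_params)  # snapshot of the current parameter set
        process_image(image, params)
```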

Application
Considering Beckhoff’s history of combining automation functions in one controller, we asked Thompson to provide an example of a company that had done this and the benefits it derived.

He referenced a customer who wanted to retrofit an older machine so that it could duplicate the functionality of other, newer machines in the facility. This company had planned to replace the vision system on the older machine to match what was used on the newer machines. Instead, Beckhoff showed the company how it could implement the TwinCAT Vision software, so that machine control and vision were combined in one system.

Thompson said this company saw a 50% savings in costs versus replacing the older vision system as they had originally intended. “All they had to do was plug the GigE camera they already had into their existing Beckhoff controller,” he said. “They didn’t even have to update the CPU.”

He added that this company also saw a 15% increase in production with this change, as they were able to process products and react faster using the new vision algorithm versus the previous system.
