Vision-Guided Robots Look at Growing Tasks

Nov. 2, 2015
Machine vision is freeing robots to seek work beyond the kinds of highly repetitive jobs typically found in mass production. The gift of sight makes them cost-competitive even in moderate-volume production applications.

Today's industrial robots are not the blind behemoths of yesteryear that simply repeat the same motion over and over again. Just look at the vision-guided robots that companies like JMP Engineering have deployed. The machine vision integrated into these smaller, smarter robots permits the iron-collar workers to perceive their operating environments and react to what they find there.

An important ramification of this ability is that the robots can work with parts that are not precisely located or clearly separated from one another. In one application, the team at JMP Engineering, an engineering company headquartered in London, Ontario, configured a vision system to give a pair of Motoman HP50 robots the ability to pick thread protectors from a bin and screw them onto the ends of pipes used for making tools for the oil and gas industry.

In a cell built by JMP Engineering, a picking robot relies on 2D machine vision from Cognex to identify and locate thread protectors in a bin. A VisionPro software application shows the location of thread protectors in the bin waiting for pick-up and the size of the pipe that they will be screwed into. Source: Cognex

“The application has demonstrated the ability to successfully pick and assemble thread protectors without fixturing or accurate locating in conditions that are common in oil-tool manufacturing,” says Scott Pytel, who was a project manager at JMP at the time the project was launched. “There’s a good chance that this application will lead to a new generation of vision-enabled robots that will help to improve productivity and quality in the oil-tool industry.”

The ability to automate this task is significant because oil-tool manufacturing is characterized by the production of a large mix of parts. Although the production of any individual member in the family of parts is low, the overall assembly volume is relatively high at three thread protectors per minute. The tolerances on the parts, moreover, are too loose for precision fixtures to locate them reliably. Consequently, justifying automation for this application has been difficult.

The thread protectors come in 11 sizes, with diameters ranging from 4 to 8 inches. They arrive at the robotic workcell layered in bins, each layer separated from the others by sheets of cardboard. Once the robot moves into position, the Basler Ace camera mounted on it takes a picture of the bin and sends the image to the VisionPro vision system from Cognex.

There, a tool called PatMax uses Cognex’s geometric pattern matching technology to identify the thread protectors in the image and determine their locations. The tool does not use a conventional pixel-grid analysis that looks for statistical similarities between a gray-level reference image and the image taken by the camera. Instead, the tool learns the object’s geometry using a set of boundary curves and looks for similar shapes in the image without relying on specific gray levels.
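Cognex does not publish PatMax’s internals, but the shape-based idea is easy to illustrate with open-source tools. The sketch below, in Python with OpenCV, matches boundary contours by their Hu moments, which likewise compares geometry rather than gray levels; the file names and match threshold are hypothetical, and a production tool would be far more robust.

```python
# A minimal sketch of shape-based (rather than gray-level) matching.
# PatMax itself is proprietary; this stand-in uses OpenCV contours
# and Hu-moment comparison. File names and threshold are hypothetical.
import cv2

def train_shape(reference_path):
    """Learn the boundary contour of a reference part image."""
    img = cv2.imread(reference_path, cv2.IMREAD_GRAYSCALE)
    edges = cv2.Canny(img, 50, 150)
    contours, _ = cv2.findContours(edges, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    return max(contours, key=cv2.contourArea)   # largest boundary curve

def find_parts(scene_path, model, max_distance=0.1):
    """Return circles enclosing scene contours whose shape matches the model."""
    img = cv2.imread(scene_path, cv2.IMREAD_GRAYSCALE)
    edges = cv2.Canny(img, 50, 150)
    contours, _ = cv2.findContours(edges, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    hits = []
    for c in contours:
        # Hu-moment distance is invariant to position, rotation and
        # scale, and ignores absolute gray levels entirely.
        if cv2.matchShapes(model, c, cv2.CONTOURS_MATCH_I1, 0.0) < max_distance:
            (x, y), radius = cv2.minEnclosingCircle(c)
            hits.append((x, y, radius))
    return hits

model = train_shape("thread_protector_ref.png")   # hypothetical images
print(find_parts("bin_image.png", model))         # one (x, y, radius) per find
```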

Once the PatMax tool identifies and locates the thread protectors in the bin, the robot retrieves them one by one, passing them to the second robot, which screws them onto the pipes. “Assembling threaded fasteners is a challenging operation for a robot because the robot does not have a human being’s ability to feel the connection between the threads,” notes Kevin Ackerman, formerly a machine vision specialist at JMP.

To overcome this obstacle, the second robot has both its own camera and a compliance device. While the robot presents a protector to a pipe thread, the camera takes a picture of the pipe so that another Cognex software tool—this one for finding circles—can determine the location of the pipe more accurately and ensure that the diameters of the pipe and protector match. The compliance device on the robot arm lets the pipe’s threads pull the protector along as the robot screws it onto the pipe.
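The Cognex circle tool is likewise proprietary, but its job, finding a circle of known size and reporting its center, can be approximated with OpenCV’s Hough circle transform. In the hedged sketch below, the calibration constant, tolerance, and file name are all illustrative.

```python
# A rough stand-in for the circle-finding step, using OpenCV's Hough
# transform. The calibration constant and tolerances are illustrative.
import cv2

MM_PER_PIXEL = 0.5   # assumed camera calibration at the pipe face

def locate_pipe_end(image_path, expected_diameter_mm, tol_mm=5.0):
    """Find the pipe-end circle and confirm it matches the protector size."""
    img = cv2.imread(image_path, cv2.IMREAD_GRAYSCALE)
    img = cv2.medianBlur(img, 5)   # suppress noise before edge voting
    circles = cv2.HoughCircles(img, cv2.HOUGH_GRADIENT, dp=1.5,
                               minDist=100, param1=120, param2=60)
    if circles is None:
        return None
    for x, y, r in circles[0]:
        diameter_mm = 2.0 * r * MM_PER_PIXEL
        # Accept the circle only if its size matches the protector,
        # mirroring the diameter check described above.
        if abs(diameter_mm - expected_diameter_mm) <= tol_mm:
            return (x, y, diameter_mm)   # center in pixels, size in mm
    return None

print(locate_pipe_end("pipe_face.png", expected_diameter_mm=152.4))  # 6-in pipe
```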

A sweet spot for vision

Despite success stories like this, combining robots and vision has been relatively rare in classical robotics, namely the six-axis articulating arms and the SCARA models used in light manufacturing. “Historically, only about 10 percent of robots used vision,” explains John Petry, director of global solutions marketing at Cognex. “And only about 10-20 percent of that has been 3D vision.”

The situation is changing, though, especially in consumer electronics (above the circuit-board level) and other light assembly in Asia, Petry says. “This field is showing explosive growth, not only in the number of robots deployed, but also in the percentage using vision,” he says. The use of vision on these robots approaches 50 percent when the assembly task requires precision, inspection or both.

The growing use of vision in robotic applications is due largely to the steady evolution of technologies, according to Avinash Nehemiah, product marketing manager for computer vision at MathWorks. “It can be attributed to a reduction in the cost of vision sensors, the maturation of vision algorithms for robotic applications, and the availability of processors with vision-specific hardware accelerators,” he says.

He adds that the complexity of the task is the main driver in the cost-benefit analysis for justifying the expenditure on a camera, vision software, a sufficiently powerful computing platform, and the engineering needed to implement vision. “The sheer complexity of programming a system to recognize multiple objects is too great without a vision system,” he says. In his estimation, machine vision’s cost advantages begin to dissipate when the number of objects is fewer than five and when other sensors can do the job.

But vision systems reduce the complexity of recognizing more than a handful of different objects. “The same input data can be used to recognize and locate multiple objects,” Nehemiah explains. Then, later, when you want to add an object to the group recognized by the vision system, it is just a matter of updating the software.
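A sketch of Nehemiah’s point, reusing the hypothetical train_shape() and find_parts() helpers from the bin-picking sketch above: one camera image feeds a table of trained models, and adding a product is one new table entry.

```python
# One input image, many trained models. Adding a product to the mix
# means adding an entry here, not re-engineering the cell.
MODELS = {
    "thread_protector_4in": train_shape("protector_4in.png"),
    "thread_protector_8in": train_shape("protector_8in.png"),
    # New part? Add one line and retrain nothing else.
}

def recognize_all(scene_path):
    """Run every trained model against the same camera image."""
    return {name: find_parts(scene_path, model)
            for name, model in MODELS.items()}
```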

Vision-assisted robots have also been attractive wherever there is a shortage of skilled workers and whenever line speeds are too fast for human beings to keep up in an accurate and safe manner. “Historically, robotics has been very automotive dominated,” observes Klas Bengtsson, global product manager for vision and sensors at ABB Robotics. “Applications have been increasing in electronics and packaging, and particularly in food and beverage, for many years.”

Cutting costs cleanly

Besides the desire for boosting line speeds, food and beverage producers have the added problem of maintaining cleanliness and good hygiene in production environments. A case in point is the production of pancakes and other batter-based breads at the Dunstable plant operated by Honeytop Specialty Foods. Not only was the plant’s old manual packing process labor-intensive and inefficient, but it was also subject to human error and required significant effort and expense to maintain the company’s hygiene standards.

One of four vision-guided IRB 360 FlexPicker robots from ABB picks and stacks 110 pancakes per minute at Honeytop Specialty Foods. A camera slightly upstream allows the robot to find the pancakes coming down the conveyor from the hotplate that produced them. Source: ABB Robotics

The need for greater efficiency drove the British producer of specialty flatbreads to turn to RG Luma Automation. The ABB system integrator installed four vision-guided ABB IRB 360 FlexPicker robots.

Now, human hands never touch the company’s products before they reach supermarket shelves. Conveyors transport pancakes and other breads from an automated hot plate, letting them cool as they travel through a series of cascades. As pancakes approach the four robots, a Gigabit Ethernet camera mounted in front of each generates the images used by ABB’s PickMaster 3.2 software to locate every pancake as it passes below. Each robot can pick and stack 110 pancakes per minute. Thanks to the programming skills of engineers at RG Luma, this software can even recognize and locate overlapping pancakes.
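PickMaster’s internals are proprietary, but the upstream-camera arrangement implies a simple piece of bookkeeping: given the belt speed and the camera-to-robot distance, the controller predicts when each pancake reaches the pick zone. The numbers in this sketch are illustrative, not Honeytop’s.

```python
# Back-of-the-envelope conveyor tracking: the camera sees each pancake
# before the robot can reach it, so the controller schedules the pick.
BELT_SPEED = 0.40        # meters per second (assumed)
CAMERA_TO_ROBOT = 0.90   # camera field of view to pick zone, meters (assumed)

def pick_time(x_at_camera, t_seen):
    """When the robot should close on a pancake seen at x_at_camera.

    x_at_camera: position along the belt within the camera frame, meters
    t_seen: timestamp of the image, seconds
    """
    travel = CAMERA_TO_ROBOT - x_at_camera
    return t_seen + travel / BELT_SPEED

# A pancake imaged 0.1 m into the frame at t = 12.0 s reaches the pick
# zone (0.9 - 0.1) / 0.4 = 2.0 s later.
print(pick_time(0.10, 12.0))   # -> 14.0
```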

In this case, the vision system is cheaper than the other main option for presenting objects to robots, which is to invest in hard fixtures that always arrange the pancakes into the same pattern so that a blind robot can pick them up while executing a preprogrammed routine. This expense can be quite high on production lines that undergo regular changeovers to produce more than one kind of product. Not only do additional fixtures multiply production and storage costs, but changeovers between batches also bring with them the cost of downtime and of any special equipment needed to make the switch.

Consider a changeover that takes a half hour. “If you do it four times a day, that’s two hours of lost production time,” Bengtsson points out. Add this loss to the expense of buying, storing and moving the fixtures.

For this reason, relying on machine vision can easily be cheaper than going with hard fixtures. This option has the added advantage of being far more flexible because accommodating a different product is a matter of calling up another program in the vision system. “And that only takes a minute,” Bengtsson says.

Software streamlines the job

At Honeytop, not only are changeover times much shorter, but the PickMaster software also monitors production and calculates productivity metrics. These abilities are helping the flatbread maker to deal with tight turnaround times and deliver orders within 12 hours of production. Moreover, three weeks after the robots went into production, the software was also instrumental in the introduction of a new product to the line in less than an hour without any additional investment. The combination of these efficiencies, fewer errors, and lower labor costs generated a payback of less than a year.

Software can do more than make vision systems easier to use and maintain. Applications also exist for streamlining their design and configuration before deployment. Vendors have developed tools that simplify programming and integrating vision into a robotic cell, enabling engineers to model and optimize its use with several robots. Among the more than 50 tools in ABB’s Integrated Vision offering, for example, is RobotStudio, an application for programming both robots and smart cameras.

Bengtsson believes that such programming software is an important factor contributing to the growing percentage of robotic applications using vision. “The future for these types of technologies is to make them easier for more people to use,” he says. That includes more menu-driven, graphics-based Windows applications.

The current generation of vision software also contains algorithms for overcoming the most common challenge for applying vision to robotics—namely, processing the large amounts of data in real time. “These algorithms reduce the amount of data processed by extracting features or discriminative information from the raw vision data, or by performing segmentation or object detection,” Nehemiah notes. This technique processes tens or even hundreds of objects, instead of millions of individual pixels.
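Connected-component labeling is one of the simplest examples of this reduction. The sketch below, with hypothetical file and threshold values, turns a binary image of roughly a million pixels into a short table of blob centroids and areas.

```python
# Segment a frame into blobs so downstream logic reasons about a
# handful of objects instead of every pixel. Values are illustrative.
import cv2

img = cv2.imread("conveyor_frame.png", cv2.IMREAD_GRAYSCALE)  # hypothetical frame
_, binary = cv2.threshold(img, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)

# One call reduces the pixel grid to a table of labeled blobs with
# bounding boxes, areas and centroids.
n, labels, stats, centroids = cv2.connectedComponentsWithStats(binary)

for i in range(1, n):                  # label 0 is the background
    x, y, w, h, area = stats[i]
    cx, cy = centroids[i]
    if area > 500:                     # ignore specks of noise
        print(f"object {i}: centroid ({cx:.1f}, {cy:.1f}), area {area} px")
```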

Another method to hasten the processing of large amounts of data is hardware acceleration. Nehemiah offers MathWorks’ Vision HDL Toolbox as an example. The application helps users to design vision systems for field-programmable gate arrays (FPGAs) and application-specific integrated circuits (ASICs).

Collaborative robots and 3D vision

ABB’s Bengtsson expects the trend combining machine vision with robotics to accelerate as vendors develop collaborative robotic technology—robots that work directly with people or close to them without safety barriers. In these applications, feedback from vision systems and other sensors ensures the safety of the people working alongside the robots.

Nehemiah at MathWorks sees a similar trend. “Over the last five years, a large percentage of applications such as drones, humanoid robots, industrial collaborative robots, and autonomous ground robots has used vision systems as the primary means for environmental perception, or as part of a larger sensor suite,” he says.

MathWorks has been supporting this trend by introducing a number of tools in its MATLAB software that help users to develop vision systems for these robotic applications. The Robotics and Mechatronics Center at the German Aerospace Center, for example, has used these and other tools from MathWorks to build a two-armed, mobile humanoid robot capable of performing assembly tasks.

Named Agile Justin, the robot has 53 degrees of freedom altogether, including 19 in its upper body, 26 in its hands, and eight in its mobile platform. It perceives its surroundings through stereo 2D cameras and RGB-D sensors in its head, torque sensors in its joints, and tactile sensors in its fingers. The two 2D cameras are mounted side by side to permit Justin to see in 3D. “Depth information is computed using triangulation between the two views,” Nehemiah explains.
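For a rectified side-by-side stereo pair, that triangulation collapses to a one-line formula: depth equals focal length times baseline divided by disparity. The parameters below are illustrative, not Justin’s actual camera geometry.

```python
# Stereo depth from triangulation for a rectified camera pair.
# All numbers are illustrative assumptions.
def stereo_depth(focal_px, baseline_m, disparity_px):
    """Depth of a point whose image shifts disparity_px between views."""
    return focal_px * baseline_m / disparity_px

# A feature 48 px apart in the two images of a rig with a 0.12 m
# baseline and an 800 px focal length lies 800 * 0.12 / 48 = 2 m away.
print(stereo_depth(800.0, 0.12, 48.0))   # -> 2.0 meters
```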

Although most machine vision today is used in 2D applications, technology is evolving such that 3D applications are becoming more cost-effective. “3D imaging devices such as stereo cameras, structured light 3D imagers, and Lidar have matured substantially over the last few years,” Nehemiah says. “This has led to a rapid acceleration in the development of computer vision algorithms that process 3D data.”
