Machine Vision: Seeing is Believing

Machine vision is practical for integrated manufacturing.

As amazing as the human eye is, just about everyone knows that it’s unreliable in repetitive work, especially when the work involves judging slight differences in size. To avoid the mistakes that workers are bound to make from time to time, most manufacturers rely on automation for their quality checks these days.

Polaris Industries Inc. is no different. At its welding shop in Spirit Lake, Iowa, this manufacturer of motorcycles and side-by-side utility vehicles has had a long history of using touch and proximity sensors for checking parts in its welding fixtures. It uses these sensing techniques to verify that the operators have populated the fixtures with the right combination of components before welding them in the shop’s 15 robotic arc-welding cells. Recently, however, the company installed a vision system and discovered what other manufacturers have found—that today’s vision technology can be reliable “eyes” for integrated manufacturing.

The longstanding obstacle to machine vision at Polaris’ welding shop has been the environment. Historically, vision has not done well in dark and dirty operations that generate smoke. The perception at the shop was that the investment in lighting, programming, and keeping the environment and equipment clean was simply too great.

That changed, however, when welding engineer Jeff Steiner heard from Omaha, Neb.-based distributor Hartfiel Co. about vision technology from Dalsa Corp.’s Intelligent Product Division, in Billerica, Mass. Upon learning that the technology could check set-ups within 200 milliseconds (msec), Steiner wanted to see how this might streamline the inspection of loaded fixtures, which had been a bottleneck in a cell welding chassis components for Ranger utility vehicles.

The cell uses a work-changing device that resembles a two-seat Ferris wheel. The seats are opposite each other, such that one seat is in front of the welding robots while the other is facing the operator for loading. Each station has a pool-table-size fixture that is flexible enough to accommodate a family of frames that go into the different models in the Ranger line. Some of the parts in this family look similar, but vary slightly in size. Although the engineering staff has guarded against many loading errors by designing the parts to be as unique as possible, and the fixtures to hold them in only one orientation, this precaution had its limits. It was still possible for the operator either to forget a part, or, in certain locations, to insert the incorrect size into the fixture.

To ensure that the right parts were inserted, the cell’s two SK6 six-axis robots from Motoman Inc., of West Carrollton, Ohio, provided another line of defense. They checked the parts with touch sensors before welding began. Although touch sensing did the job fairly well, it required as much as 30 seconds of cycle time to check each part, which added up to significant time over thousands of parts.

Productivity would suffer even more whenever the robots would find a problem. In these instances, the operator would have to take the time to retract the fixture, correct the problem, return the fixture to the welding position in front of the robots, and run the inspection routine again. Only then could production resume.

Justifying electronic eyes

The vision system improved production by relieving the robots of their touch probe inspection burden. Two Dalsa cameras are mounted on a frame built over the parts bins that are behind the operator as he faces the fixture that he is loading. While the robots are welding on one side of the two-station work changer, two stationary cameras inspect the set-up after the operator finishes populating the fixture. Not only does this arrangement conserve cycle time, but it also keeps the cameras away from the heat, spatter and fume produced by the welding process.

Another way that Dalsa’s iNspect software boosts efficiency is to display a color-coded picture of the parts on a monitor, green for good parts and red for bad ones. “The operators can identify any problems on their own, make the correction and try cycling it in again,” says Steiner. Consequently, the combination of relieving the robots of inspection duty and giving the operators the ability to check themselves reduced cycle time by 12.5 percent, which alone justified the project.

Besides increasing production, the software reduces scrap in two ways. The first is by preventing the operator from running the wrong welding program. “Our controls engineer tied the vision system to the ladder logic of the Motoman MRC controller,” explains Steiner. Consequently, the vision software can compare the selected welding program to the combination of parts loaded into the fixture and alert the operator when the two don’t match.
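The cross-check Steiner describes amounts to comparing the part set the cameras report against the part set the selected program expects. The sketch below illustrates the idea in Python; the program names and part identifiers are invented for illustration, not Polaris’ actual data.

```python
# Hypothetical mapping from each welding program to the set of parts
# its fixture should hold. Names are illustrative only.
PROGRAM_PARTS = {
    "ranger_500": {"crossmember_a", "bracket_short", "tube_38mm"},
    "ranger_800": {"crossmember_a", "bracket_long", "tube_42mm"},
}

def setup_matches(selected_program: str, detected_parts: set) -> bool:
    """Return True only if the parts the vision system detected
    exactly match what the selected welding program expects."""
    expected = PROGRAM_PARTS.get(selected_program)
    return expected is not None and expected == detected_parts
```

In the real cell this comparison runs in the controller’s ladder logic; on a mismatch, the system would halt the cycle and alert the operator before any welding starts.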

The software’s second contribution to scrap reduction is that it cuts in half the scrap due to incorrectly loaded parts. “There is one part that we don’t catch 100 percent of the time,” says Steiner. He blames this on the fact that he didn’t install lighting. Instead, the vision system relies solely on the shop’s fluorescent lights. Although Steiner admits that he might have been able to find lighting that would have eliminated the problem altogether, it would have required more hardware, programming and upkeep, making the project impossible to justify.

Even with this compromise, the vision system is still faster and more effective than the touch probe. “It was the lowest-cost solution that gave us the most uptime,” says Steiner.

Those who want 100 percent inspection, however, have to take the time to specify the right lighting. “Lighting is 90 percent of vision applications,” says Curt Bonar, president at Dial-X Automated Equipment Inc., of Albion, Ind. “The correct lighting not only makes programming the camera easier and more efficient, but also makes the results more consistent.” Ambient lighting matters as well; one must factor it into the calculations because it, too, can affect the quality of the pictures.

For this reason, Dial-X’s engineering staff pays a lot of attention to lighting on the dial-type rotary machines that it designs and builds for assembling high volumes of parts that are big enough to assemble by hand. Bonar reports that, despite the many variables that one must consider, specifying lighting has become easier because of the variety of choices available these days. Besides the standard white lights that have been around from the beginning, there are also light-emitting diode (LED) lights in a range of colors that help with illuminating certain features and dealing with different colors.

Keeping it simple

Another reason that machine vision is easier to specify is the cost and simplicity of the cameras themselves. Helped by the general trend in electronics toward more capability in smaller and cheaper packages, a number of vendors have developed a kind of camera called a vision sensor. “A vision system in the past would be a dedicated system using a computer and maybe costing as much as $100,000, and having a person who is dedicated full time to keeping it going,” says Brent Evanger, applications engineer at automation components and vision vendor Banner Engineering Corp., in Plymouth, Minn.

Vision sensors, on the other hand, are self-contained units that usually cost between $500 and $2,000 and that can be programmed and run independently of a personal computer (PC), although most can be connected to one. Some are programmable with buttons, but others, such as Banner’s iVu image sensor, have touch screens on one side. Evanger compares these sensors to iPods and smart phones.

Dial-X often fits its assembly machines with such sensors from Banner when users want in-line inspection. At the various stations on its machines, vibratory bowl feeders and other devices feed parts to the machine, and robotic pick-and-place units put the pieces together. Inspection technologies, such as mechanical check gages or vision sensors, check incoming parts for defects and verify that the machine has inserted them correctly into the assembly.

Although mechanical gages are an inexpensive way to verify whether a part is there or is seated correctly, they add a few seconds’ worth of cycle time to an operation, and cannot look for defects. A vision sensor, on the other hand, not only can perform these checks but also can take measurements within a millisecond.
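The presence-and-measurement check a vision sensor performs in a single image can be reduced to a simple decision rule. The following Python sketch shows one plausible form of it; the nominal dimension and tolerance are invented values, not specifications from any vendor in the article.

```python
def inspect_part(measured_width_mm, nominal_mm=42.0, tol_mm=0.5):
    """Classify one inspection result.

    measured_width_mm is None when no part was found in the image;
    otherwise it is the dimension the sensor extracted. Nominal size
    and tolerance here are illustrative assumptions.
    """
    if measured_width_mm is None:          # presence check failed
        return "missing"
    if abs(measured_width_mm - nominal_mm) > tol_mm:
        return "out_of_tolerance"          # seated wrong or wrong size
    return "pass"
```

A mechanical check gage can only report the first two conditions by touch; the vision sensor gets all three from one image capture.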

It also can be integrated with process control. Consider an application in which the machine had to orient a round part that had a pattern of small holes at its center. “There were no features on the outside to help us to orient the part mechanically,” recalls Bonar. “So, we tied a Banner camera to a servo.”

In this machine, parts fall onto a glass landing as they emerge from a vibratory bowl feeder. A light shining beneath the glass and a ring light around the camera permit the vision sensor to find the part and see its holes. The software then determines the orientation of the holes and tells a servo-driven unit how much to rotate the part for placement on the next component.
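Converting the hole pattern’s measured orientation into a servo command is essentially an angle subtraction. The sketch below shows one way this could work, assuming the software locates the part center and one reference hole in image coordinates; the coordinate convention and target angle are illustrative, not details from the Dial-X machine.

```python
import math

def rotation_needed(center, ref_hole, target_deg):
    """Degrees the servo must rotate the part so the reference hole
    lands at target_deg. Result is normalized to (-180, 180] so the
    servo always takes the shorter direction."""
    dx = ref_hole[0] - center[0]
    dy = ref_hole[1] - center[1]
    current = math.degrees(math.atan2(dy, dx))   # measured orientation
    delta = (target_deg - current) % 360.0
    if delta > 180.0:
        delta -= 360.0
    return delta
```

With the hole angle measured from the image, the servo-driven unit simply rotates by the returned amount before the pick-and-place head sets the part on the next component.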

At the deep end

Vision sensors are becoming more deeply embedded in the integrated manufacturing movement. High-end vision systems have proliferated, as a result of ever-growing computing power and better digital cameras, lasers and lighting technology. One ramification of this power is that vision suppliers have been able to simplify programming by capturing a fair amount of their applications expertise in software.

“We have programmed 150 functional modules,” says Dwight Carlson, chairman of Coherix Corp., an Ann Arbor, Mich.-based vision supplier that specializes in three-dimensional (3D) technology. “Today, a trained technician can use a simple drag-and-drop technique to configure a sophisticated machine vision system in a matter of a few hours. So, the cost of developing applications has plummeted.”

Another advantage of greater computing power is that the processors can crunch much more data, fast enough to do more advanced inspections, such as laser-based digital holography. “Classical machine vision needs some feature, whether it’s a lead, ball or bracket,” says Carlson. An example of a featureless job suited to holography is measuring the flatness of an engine head in-line. Holography can do it to a micron within 20 to 40 seconds.
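The flatness job reduces to a computation over a dense grid of height samples. The sketch below is a deliberate simplification: production systems first fit a reference plane to the data, while this version just takes peak-to-valley range. The sample values and acceptance limit are invented.

```python
def flatness_um(height_samples):
    """Peak-to-valley flatness of the sampled surface, in microns.
    A real holography system would measure millions of points and
    fit a reference plane first; this is the simplified idea."""
    return max(height_samples) - min(height_samples)

def head_is_flat(height_samples, limit_um=1.0):
    """Accept the engine head if flatness is within the limit.
    The article cites micron-level resolution; the limit here
    is an assumed spec, not a real one."""
    return flatness_um(height_samples) <= limit_um
```

Because the whole surface is sampled, no edge, lead, or bracket feature is needed, which is exactly why Carlson calls this a featureless job.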

Coherix and others also exploit today’s computing power to process data from multiple stereo cameras to perform 3D inspection. This technology is useful for looking for cracks and chips in semiconductor packages and for checking whether their leads are coplanar and will touch contacts at the same time.

Another application is the pallet inspection system developed by Nagle Research, an engineering firm in Cedar Park, Texas. The firm was asked to automate the grading of wooden shipping pallets, giving those with cracks or broken slats a lower grade than those without. The firm’s engineers would have to encode the intelligence necessary to recognize protruding nails, loose boards, and cracked or broken boards. The point was to detect the many small defects that can go unnoticed by human inspectors.

A 3D camera was necessary to capture the geometry of the pallet. Simply looking for changes in color or contrast, as 2D vision would, would not work here because the material is wood. Not only is contrast low between the nails and boards, but the color and grain patterns also vary greatly. “Every pallet is like a fingerprint in that no two are alike, even new ones,” notes John Nagle, president. For these reasons, 2D machine vision tends to give too many false positives.

Another problem is that the color of wood has little to do with the condition of the material. “For example, grain patterns are very difficult for a 2D system to distinguish from actual cracks,” says Nagle. A 3D system avoids these problems by looking at the geometry of the wood, rather than its color or contrast.
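The geometric argument can be made concrete: in a 3D height profile, a crack appears as a sharp drop between neighboring readings, while a dark grain line changes color but not height and therefore produces no drop at all. The sketch below illustrates this on a single scan line; the profile values and depth threshold are invented, not parameters of the Nagle system.

```python
def find_cracks(profile_mm, min_depth_mm=1.5):
    """Indices in a row of height readings where the surface drops
    sharply from one sample to the next, i.e. a candidate crack edge.
    Threshold is an assumed value for illustration."""
    return [i for i in range(1, len(profile_mm))
            if profile_mm[i - 1] - profile_mm[i] >= min_depth_mm]
```

A 2D system scanning the same board sees only brightness, so a grain pattern and a real crack can look identical; the height profile separates them cleanly.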

For this job, the engineers selected the Ranger 3D camera from Sick Inc., in Minneapolis, because it takes more than one type of image. It provides range or height data, a 2D high-resolution line scan, and a scatter image based on laser light spread along the surface. “The scatter image enables defects and cracks on the pallet to be spotted before a major defect is visible to the human eye,” says Jim Anderson, Sick’s vision product manager.

The multi-camera system analyzes the data in about a half-second per pallet. Not only does it give consistent results, but it also records the important metrics as the pallets are inspected, making it an essential set of eyes for identifying systemic defects and estimating material consumption.
