Inspection Capabilities are Looking Up

The Kendall-Jackson winery packages more than 3 million cases of wine each year, so it’s a big challenge to position labels on bottles that race through its production lines. Compounding the challenge, bottles emerge from the labeling stage in a random position. The Fulton, Calif., winery solves its problem with machine vision, which helps increase efficiency in bottle inspection. That’s a growing trend in California wine country, which ranks behind only France, Italy, and Spain, with production in excess of 500 million gallons of wine per year.

Kendall-Jackson handled its labeling problem with help from CIVision LLC, of Aurora, Ill. Its 360 Full View inspection system checks the labels after they have been placed on the bottles. Four cameras simultaneously examine a bottle as it leaves the labeling station. The application highlights the many changes occurring in vision, as prices come down and sophistication rises. Lighting remains a key issue—CIVision employs eight fluorescent tubes to provide enough light for four Basler GigE Scout cameras. The GigE nomenclature for those cameras, which stands for Gigabit Ethernet, highlights the networking technology that is making it simpler to run multi-camera systems at very high speeds.

The system is personal computer (PC)-based, though many companies employ smart cameras, which house processors so they don’t need PCs to analyze images and decide whether parts pass or fail. These less-expensive smart camera systems are enabling smaller companies to employ vision—which is helping drive machine vision revenues up solidly. Analyst firm Frost & Sullivan predicts that the vision inspection market will rise from $2.3 billion in 2006 to $3.7 billion in 2013.

Regardless of what vision techniques are being used, vendors and integrators say there are a number of benefits to be gained. “To guarantee zero defects, you need inspection. You need vision systems to tell you the system’s going out of calibration. If you wait until you realize you’re out of spec, it’s too late,” says Charles Magnan, vice president at Averna Vision & Robotics Inc., an integrator and equipment designer based in Montreal.

Others note that the cost of vision systems pales in comparison to the potential costs of shipping defective products. “Security and traceability have become huge. If a product causes a problem, like contaminated consumables or a jet going down because of a faulty circuit board, the losses to the manufacturer can be huge,” says Kevin M. Malliet, vice president, sales & marketing, at integrator International Product Technology Inc., of New Berlin, Wis. “A vision system is like an insurance policy to help prevent that.”

Speedy cameras

The cameras that determine the speed and performance of a system are evolving almost as rapidly as the electronic controls that make decisions based on camera input. Some of those cameras are combining both aspects, putting intelligence in the camera. Both conventional camera providers and systems companies such as National Instruments, an Austin, Texas, automation supplier, are offering these smart cameras.

These integrated systems often provide lower prices and simpler installation than conventional PC-based hardware. Installers don’t have to find space for controllers and their cables. Some integrators say that these integrated cameras offer better performance than many PC-based offerings. “Some high-speed jobs don’t afford the time to send data to the PC and back. If you’re checking something at 50 per second, you don’t have many milliseconds to react to each one. Without smart sensors, it’s not feasible,” Magnan says. With smart cameras, analysis and decisions can be made without the lag time of sending data to the PC and back.
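Magnan’s timing argument can be made concrete with back-of-the-envelope arithmetic. The 50-parts-per-second rate comes from his quote; the per-stage times below are purely illustrative assumptions, not measured values:

```python
# Cycle-time budget for an inline inspection at 50 parts per second.
parts_per_second = 50
budget_ms = 1000 / parts_per_second  # 20 ms per part, total

# Hypothetical breakdown for a PC-based system (all stage times assumed):
acquire_ms = 5    # exposure plus sensor readout
transfer_ms = 8   # image sent over the network to the host PC
analyze_ms = 4    # inspection algorithm running on the PC
actuate_ms = 2    # reject signal sent back to the line

total_ms = acquire_ms + transfer_ms + analyze_ms + actuate_ms
slack_ms = budget_ms - total_ms
print(f"budget: {budget_ms:.0f} ms, spent: {total_ms} ms, slack: {slack_ms:.0f} ms")
# A smart camera eliminates the transfer step, which is why it can keep up
# where round-trips to a PC would blow the budget.
```

Even with these generous assumptions, the round-trip transfer consumes nearly half the budget, which is the lag Magnan describes.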

However, he notes that there’s a limit to the capabilities of these space-constrained packages. They don’t have the memory or peripheral boards that let PC-based systems perform more complex analyses of the images. “Trying to run a complex algorithm on a smart sensor will slow it down,” Magnan says.

For more complex tasks, which are often tied to higher-speed manufacturing, most users will turn to conventional packages. “The industry has gone toward smart cameras, but they only offer so much processing power. We use PC-based systems that can run algorithms at blistering speed,” says Greg Raciti, engineering manager at Faber Associates, a Clifton, N.J., system integrator.

These powerful controllers can be linked to the latest generation of cameras, which run far faster than their predecessors.

“Dalsa’s newest cameras run at ungodly speeds, 160 frames per second. Our original systems ran at 30 frames per second; until recently, they ran at 60 frames per second with 640 x 480 images,” Raciti says.
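The raw data rates behind those frame-rate figures are easy to work out. A quick sketch, assuming 8-bit monochrome pixels at the 640 x 480 resolution Raciti mentions (the bit depth is an assumption):

```python
# Raw data rate for a monochrome camera at 640 x 480 resolution.
width, height, bytes_per_pixel = 640, 480, 1  # 8-bit mono is assumed
frame_bytes = width * height * bytes_per_pixel  # 307,200 bytes per frame

# The three frame rates cited in the article:
for fps in (30, 60, 160):
    mb_per_s = frame_bytes * fps / 1e6
    print(f"{fps:3d} fps -> {mb_per_s:.1f} MB/s")
```

At 160 frames per second the uncompressed stream approaches 50 MB/s per camera, which is why the interconnect matters as much as the sensor.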

Those advances are driven largely by advances in the imaging chips that are the basis of digital cameras. The continuing evolution of semiconductor processes and the investments made in commercial camera chips are driving significant progress in vision systems.

“We’ve just unveiled an architecture that provides color with the same light sensitivity as monochromatic image sensors,” says Michael DeLuca, product manager for Kodak’s Image Sensor Group. That technology, which offers a 2- to 4-times increase in sensitivity to light, can be added to both CCD (charge-coupled device) and CMOS (complementary metal oxide semiconductor) imagers, he adds.

Though CCDs now dominate vision applications, DeLuca notes that the gap between the two techniques is narrowing. “We’re producing CMOS imagers with multiple outputs for 120 to 240 frames per second and the high quality associated with CCDs. That’s a factor of two better than previous models. We achieve that by reading data off the sensor in parallel and designing sensors so they can be read faster,” he adds.

Focusing on lighting

One of the biggest challenges for any company that implements vision is illuminating the object being scanned. “Ninety per cent of the challenge is related to lighting. You can measure almost anything if you can see it,” Malliet says.

That’s often because labels and parts are moving extremely fast or have surfaces that are difficult to photograph. “Lighting will always be a black art. When you’re inspecting reflective surfaces and trying to see fine details, you need strong lighting to spot defects,” Magnan says. The machine vision industry is already benefiting from the LED (light-emitting diode) revolution that began once white LEDs arrived on the scene. LEDs are small enough to mount close to the camera or the scene being imaged, and their output is now high enough to light a scene with just a few of them.

Changing bulbs is also rare with LEDs, because their lifetimes are 50,000 hours or longer. Lifetimes of both LEDs and conventional lights can be extended by using strobe controls, which are also evolving rapidly. “With the evolution of LEDs and strobe controllers, lighting is becoming less of a problem. With a strobe, you can output three times as much light for short periods without worrying about burning out your equipment,” Raciti says.
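Raciti’s three-times overdrive figure implies a limit on how often the strobe can fire. A rough sketch, under the simplifying assumption that light output and power scale linearly with drive current, so average power stays at the continuous rating when the duty cycle is capped (the pulse length is an assumed value):

```python
# Back-of-the-envelope strobe duty-cycle limit for LED overdrive.
# Assumption: 3x the light requires roughly 3x the power, so the LED
# must be off two-thirds of the time to keep average power at its rating.
overdrive_factor = 3                      # "three times as much light"
max_duty_cycle = 1 / overdrive_factor     # ~33% on-time

pulse_us = 100                            # assumed strobe pulse length
min_period_us = pulse_us / max_duty_cycle
max_strobe_hz = 1e6 / min_period_us
print(f"3x overdrive: {pulse_us} us pulse needs a period of at least "
      f"{min_period_us:.0f} us (~{max_strobe_hz:.0f} strobes/s)")
```

Real strobe controllers derate more conservatively than this linear model, but the principle is the same: short, bright pulses in exchange for off-time between them.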

Vision vendors and integrators are responding to the trend toward custom manufacturing by designing systems that are simpler to program and set up. That’s in response to the constant retooling that occurs as manufacturers adapt to shifting requirements, just-in-time deadlines, and other pressures. Most companies no longer produce long runs without any changes. Even when firms make the same product, they often switch labels for generic versions and other brands. “There are a lot of changes in packaging lines; they’re not like lines that build the same turbine blade for years,” Magnan says.

Those changes also occur in fields such as electronics. There, a different kind of packaging—the housings that hold silicon chips—often changes as chipmakers produce devices for different applications, sometimes processing three different types of packages in an hour. Vision systems in this field must quickly examine parts, then provide reasons for rejecting those that fail.

“In semiconductor inspection, tools are getting more sophisticated. People want inspection tools that give them a lot of detail, so they can see why a part is bad so they can change their processes and handling,” Malliet says. “We never know what parts we’ll be looking at, but we do know operators don’t want to alter their inspection machines.”

Vision systems must also adapt to production lines, especially in the many factories where this inspection technology is added to existing production lines.

At Kendall-Jackson, attaching cameras to existing equipment was a necessity. “Building a system that followed an existing process meant that we could use the existing conveyors and equipment,” says Scott Stone, marketing director at CIVision.

Unfortunately for integrators, it’s often no simpler to add vision to an existing line than to a new installation. That’s because many manufacturing executives worry about production, leaving inspection until a line’s design is complete. “Many people don’t think about vision until their lines are already late. Often, if they called early on, we could put in a hole for a camera. When it’s late, you have to redesign things to fit the camera in,” Magnan says.

Many views

 As camera costs come down and production line speeds rise, a growing number of end-users are opting for vision systems that have multiple cameras. In the Kendall-Jackson facility, four cameras take different images, and the system patches them together to check the entire label. 

That requires more than simple processing power. The network that links these cameras to the host PC has to transfer large video files at speeds fast enough to give the controller time to make a decision before a faulty part gets too far down the production line. The latest version of Ethernet has the bandwidth needed to transfer images from many cameras.

“With Gbit Ethernet, you’re not limited in the number of cameras you can link to the host. You’re limited by processing power. With fast processors, you can throw six cameras at it and not have a challenge,” Raciti says. He notes that cameras can now be triggered asynchronously instead of triggering them all at once, making it easier to get different views.
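Raciti’s six-camera figure is consistent with a simple bandwidth check. A rough sketch, assuming 8-bit monochrome 640 x 480 images at 60 frames per second and ignoring protocol overhead (all figures illustrative):

```python
# Rough capacity check: how many camera streams fit on one GigE link.
link_bits_per_s = 1e9                 # Gigabit Ethernet, overhead ignored

width, height, bits_per_pixel = 640, 480, 8   # 8-bit mono is assumed
fps = 60
stream_bits_per_s = width * height * bits_per_pixel * fps

max_cameras = int(link_bits_per_s // stream_bits_per_s)
print(f"~{max_cameras} cameras per GigE link before saturating bandwidth")
```

In practice GigE Vision packet overhead and frame bursts reduce the usable fraction of the link, so the processing power of the host, as Raciti notes, tends to become the limit first.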

He predicts that this standard will take over, replacing alternative networks. “From a cost and speed standpoint, Gbit Ethernet is the way to go. Firewire worked, but it’s overhyped. Gbit Ethernet is the wave of the future,” Raciti says. That will lower costs by extending the reach of Ethernet, which is already used throughout many factories, and by simplifying maintenance for technicians who no longer need to know multiple networking schemes.

Fast networks also make it simpler to attach displays that give operators insight into operations. Early vision systems often eschewed displays, but as the cost of compact LCDs has come down, more users want to see the images, many integrators say.
