How Embedded Vision Can Improve Inspection and Inventory Tracking

From providing new methods of improving product tracking to eliminating the bottlenecks often encountered with visual inspection, embedded vision technologies are poised to change the game in many core manufacturing applications.

A pharmaceutical packaging line uses vision-guided robots to quickly pick syringes from conveyor belts and place them into packages. Source: Embedded Vision Alliance

There’s always been a great deal of interest in vision system technology in the manufacturing industries. Though widely applied, vision systems arguably never reached critical mass in manufacturing because of the high cost of purchasing and installing them. However, the plummeting cost of embedded technologies has greatly expanded the opportunity to adopt embedded vision in various manufacturing processes. Many of these applications have involved cutting-edge robotics, but embedded vision systems are equally adept at basic manufacturing applications such as inventory tracking and inspection.

When it comes to inventory tracking, most people think of using bar codes and RFID tags to track and route materials. But, as Yvonne Lin of Xilinx notes, those technologies cannot be used to detect damaged or flawed goods.

“Intelligent material and product tracking and handling in the era of embedded vision will be the foundation for the next generation of inventory management systems, as image sensor technologies continue to mature and as other vision processing components become increasingly integrated,” says Lin. “High-resolution cameras can already provide detailed images of work material and inventory tags, but complex, real-time software is needed to analyze the images, to identify objects within them, to identify ID tags associated with these objects, and to perform quality checks.”

Lin points out that her use of the term “real time” in this instance refers to an embedded vision system’s capability of evaluating dozens of items per second.

At the core of embedded vision processing is a combination of software flexibility and hardware acceleration capabilities to address challenging system performance requirements while still allowing for algorithm optimization and evolution, Lin says. The need for these capabilities helped drive the development of Xilinx’s Zynq-7000 All Programmable systems-on-chip (SoCs). These SoCs combine a dual-core, 32-bit ARM Cortex-A9 microprocessor and a programmable logic array on a single chip, notes Lin. “The processor cores can run complex image analysis and video analytics routines, while the closely coupled programmable logic fabric implements high-speed algorithms, including lens correction and calibration, image preprocessing, and pattern recognition.”

Of course, use of SoCs is not the only way of handling vision processing. Lin says that other potential vision processing solutions include general-purpose CPUs, special-purpose imaging processors, DSPs, GPUs, and multi-core application processors.

“To meet the vision application’s real-time requirements, various tasks must often run in parallel,” she says. “On-the-fly quality checks can be used to spot damaged material and can be used to automatically update an inventory database with information about each object and details of any quality issues. Vision systems for inventory tracking and management can deliver robust capabilities without exceeding acceptable infrastructure costs, through the integration of multiple complex real-time video analytics extracted from a single video stream.”
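The pattern Lin describes — analytics running in parallel with capture, with each object’s quality result written back to an inventory database — can be sketched in miniature. This is an illustrative mock-up, not any vendor’s API: `quality_check` stands in for real defect detection, the "database" is a plain dictionary, and the frame queue simulates a single video stream feeding a worker thread.

```python
import queue
import threading

def quality_check(image):
    # Stand-in for real defect analysis: flag any frame whose mean
    # intensity falls outside an assumed acceptable band.
    mean = sum(image) / len(image)
    return 50 <= mean <= 200

def analytics_worker(frames, inventory, done):
    # Consume frames as they arrive and update the inventory record
    # for each object with the outcome of its quality check.
    while True:
        try:
            object_id, image = frames.get(timeout=0.1)
        except queue.Empty:
            if done.is_set():
                break
            continue
        inventory[object_id] = {"ok": quality_check(image)}

frames = queue.Queue()
inventory = {}
done = threading.Event()
worker = threading.Thread(target=analytics_worker,
                          args=(frames, inventory, done))
worker.start()

# Simulate two items passing the camera: one in-spec, one washed out.
frames.put(("item-001", [120] * 64))
frames.put(("item-002", [250] * 64))
done.set()
worker.join()
print(inventory)
```

A production system would replace the thread with hardware-accelerated pipelines, but the structure — capture, analyze, record — is the same.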

The same capabilities embedded vision systems provide for inventory tracking can also be applied to automated inspection.

“Vision has many uses and delivers many benefits in automated inspection, performing tasks such as checking for the presence of components, reading text and barcodes, measuring dimensions and alignment, and locating defects and patterns,” says Carlton Heard of National Instruments. “Historically, quality assurance was often performed by randomly selecting samples from the production line for manual inspection, with statistical analysis then being used to extrapolate the results to the larger manufacturing run. This approach leaves unacceptable room for defective parts to cause jams in machines further down the manufacturing line or for defective products to be shipped. Automated inspection, on the other hand, provides 100 percent quality assurance.”
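Heard’s point about the risk left by sampling can be made concrete with a quick calculation. Under assumed (illustrative) numbers — a 1 percent defect rate and a random sample of 50 parts — the chance that the sample contains no defects, so the flawed run passes inspection, is roughly (1 − p)ⁿ:

```python
# If a fraction p of parts is defective and n parts are sampled at
# random from a large run, the probability the sample shows no defects
# (and the run passes despite its flaws) is approximately (1 - p) ** n.
p = 0.01   # assumed defect rate: 1 percent
n = 50     # assumed sample size
miss_probability = (1 - p) ** n
print(f"{miss_probability:.2f}")
```

At these numbers the defective run slips through about 60 percent of the time, which is why 100 percent automated inspection is attractive wherever throughput allows it.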

Heard adds that advances in vision processing performance, due to the increasing power of embedded technologies as well as their programmability via FPGAs, mean that automated visual inspection is often no longer the limiting factor in manufacturing throughput it was once considered to be.

To bring about automated inspection in your operation using embedded vision technologies, it’s important to realize that the vision system is just one piece of a multi-step puzzle and must be synchronized with other equipment and I/O protocols in order to work well within an application, Heard notes.

For example, consider a common inspection scenario that involves the sorting of faulty parts from correct ones as they transition through the production line. “These parts move along a conveyor belt with a known distance between the camera and the ejector location that removes faulty parts,” explains Heard. “As the parts migrate, their individual locations must be tracked and correlated with the image analysis results to ensure that the ejector correctly sorts out failures. Multiple methods exist for synchronizing the sorting process with the vision system, such as the use of timestamps with known delays, and proximity sensors that also keep track of the number of parts that pass by. However, the most common method relies on encoders. When a part passes by the inspection point, a proximity sensor detects its presence and triggers the camera. After a known encoder count, the ejector will sort the part based on the results of the image analysis.”
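The encoder-based scheme Heard describes can be modeled in a few lines. This is a simplified software sketch, not a hardware-timed implementation: the class names, the `EJECT_OFFSET` constant, and the tick-driven simulation are all assumptions made for illustration.

```python
from collections import deque

EJECT_OFFSET = 120  # assumed encoder counts between camera and ejector


class SortingLine:
    """Minimal sketch of encoder-synchronized part ejection."""

    def __init__(self):
        self.encoder = 0
        self.pending = deque()  # (eject_at_count, passed_inspection)
        self.ejected = []

    def part_detected(self, passed_inspection):
        # Proximity sensor fired: the camera has imaged a part at the
        # inspection point. Schedule its arrival at the ejector a known
        # encoder count downstream, paired with its inspection result.
        self.pending.append((self.encoder + EJECT_OFFSET, passed_inspection))

    def encoder_tick(self, counts=1):
        # Advance the conveyor; eject any failed part that has reached
        # the ejector location.
        self.encoder += counts
        while self.pending and self.pending[0][0] <= self.encoder:
            eject_at, ok = self.pending.popleft()
            if not ok:
                self.ejected.append(eject_at)


line = SortingLine()
line.part_detected(passed_inspection=True)   # good part imaged at count 0
line.encoder_tick(60)
line.part_detected(passed_inspection=False)  # faulty part imaged at count 60
line.encoder_tick(200)
print(len(line.ejected))  # only the faulty part was removed
```

In a real system this bookkeeping competes with the image processing itself for processor time, which is exactly the juggling problem Heard raises next.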

Here’s where the advances in embedded vision technologies come into play, as the challenge with this technique is that the system processor must “constantly track the encoder value and proximity sensors while simultaneously running image processing algorithms to classify the parts and communicate with the ejection system,” Heard says. “This multi-function juggling can lead to a complex software architecture, add considerable amounts of latency and jitter, increase the risk of inaccuracy, and decrease throughput.”

High-performance processors such as FPGAs help address these issues by providing a hardware-timed method of tightly synchronizing inputs and outputs with vision inspection results.

Editor’s note: The information in this article from Xilinx and National Instruments was provided to Automation World via the Embedded Vision Alliance. Other recent Automation World articles referencing the Embedded Vision Alliance can be found in the following articles:
