Smart Cameras vs. Multi-Camera Vision Systems and Other Choices

July 31, 2014
Industrial vision systems can improve quality or automate production, but choosing systems that match the application and ownership requirements can be confusing. Here are factors to consider when integrating vision.

Every vision system has one or more image sensors that capture pictures for analysis and all include application software and processors that execute user-defined inspection programs or recipes. Additionally, all vision systems provide some way of communicating results to complementary equipment for control or operator monitoring. But there are many types of vision systems on the market, and choosing among them can be confusing.

Christopher Chalifoux, international applications engineer for vision systems maker Teledyne DALSA, authored a whitepaper to help users make better choices and improve implementation success. He classifies vision solutions into two categories: those with a single embedded sensor (also known as smart cameras) and those with one or more sensors attached (multi-camera vision systems).

"The decision to use one or the other is dependent not only on the number of sensors needed, but also on a number of other factors including performance, ownership cost and the environment where the system needs to operate," says Chalifoux. "Smart cameras, for example, are generally designed to tolerate harsh operating environments better than multi-camera systems. Similarly, multi-camera systems tend to cost less and deliver higher performance for more complex applications."

Another way to differentiate the two classes of systems is to think in terms of processing requirements. "For many applications, such as in car manufacturing, it is desirable to have multiple independent points of inspection along the assembly line. Smart cameras are a good choice as they are self-contained and can be easily programmed to perform a specific task and modified if needed without affecting other inspections on the line. In this way, processing is 'distributed' across a number of cameras," says Chalifoux.

Similarly, other parts of the production line may be better suited to a "centralized" processing approach. "For example, it is not uncommon for final inspection of some assemblies to require 16 or 32 sensors. In this case, a multi-camera system may be better suited as it is less costly and easier for the operator to interact with," he says.
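
To make the contrast concrete, the sketch below models the two topologies in Python. The class and station names are illustrative assumptions, not drawn from the whitepaper; the point is only that a smart camera bundles its own sensor, processor and recipe, while a multi-camera system runs many attached sensors against one central processor.

```python
# Illustrative sketch only: hypothetical names, not a vendor API.

class SmartCamera:
    """Distributed model: each camera owns its sensor, processor and recipe."""
    def __init__(self, station, recipe):
        self.station = station      # where it sits on the line
        self.recipe = recipe        # inspection program runs on-camera

    def inspect(self, image):
        # result is reported locally, e.g. to a PLC; other stations are unaffected
        return self.recipe(image)


class MultiCameraSystem:
    """Centralized model: one processor services many attached sensors."""
    def __init__(self, recipes):
        self.recipes = recipes      # one recipe per attached sensor

    def inspect(self, images):
        # all frames are processed in one place, so a recipe change or a fault
        # here touches every inspection point served by this system
        return {cam_id: self.recipes[cam_id](img) for cam_id, img in images.items()}
```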

The most important consideration when selecting a vision system may be the software. Its capabilities must match the application's programming and runtime needs. If you are new to machine vision or if your application requirements are straightforward, select software that doesn't require programming, includes core capabilities (e.g., pattern matching, feature finding, barcode/2D reading, OCR) and can interface with complementary devices using standard factory protocols, says Chalifoux.

For more complex needs, and for users who are comfortable with programming, look for more advanced software packages that offer additional flexibility and control.
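
As a rough illustration of what those core capabilities look like when you do write code, the sketch below uses the open-source OpenCV library (an assumption chosen for illustration; it is not the software discussed in the whitepaper) to perform simple template matching and 2D code reading on a captured frame. Thresholds and file names are placeholders.

```python
import cv2

# Minimal sketch of two "core capability" checks, assuming OpenCV is installed.

def find_pattern(frame_gray, template_gray, min_score=0.8):
    """Pattern matching: locate a taught template in the inspection image."""
    result = cv2.matchTemplate(frame_gray, template_gray, cv2.TM_CCOEFF_NORMED)
    _, score, _, location = cv2.minMaxLoc(result)
    return (location, score) if score >= min_score else (None, score)

def read_2d_code(frame_gray):
    """2D code reading: decode a QR code if one is present in the frame."""
    data, points, _ = cv2.QRCodeDetector().detectAndDecode(frame_gray)
    return data or None

frame = cv2.imread("part.png", cv2.IMREAD_GRAYSCALE)        # captured image (placeholder file)
template = cv2.imread("feature.png", cv2.IMREAD_GRAYSCALE)  # taught pattern (placeholder file)
print(find_pattern(frame, template))
print(read_2d_code(frame))
```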

"It is important to know that there are significant and important differences between vision systems that make one more suitable than another for any given application. It is equally important to know and appreciate the importance of choosing the optimal sensor, lighting and optics for the job. Failure to do so may result in unexpected false rejects, or even worse, false positives," says Chalifoux.

The whitepaper goes into implementation factors, such as image sensor resolution, that should be considered. Image sensors convert light collected from the part into electrical signals. These signals are digitized into an array of values called "pixels" which are processed by the vision system during the inspection.
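
A back-of-the-envelope calculation like the one below (an illustrative sketch, not taken from the whitepaper) is often used to estimate how many pixels a sensor needs along one axis: divide the field of view by the smallest feature to resolve and multiply by the number of pixels that should span that feature.

```python
# Illustrative resolution estimate; all numbers are assumed example values.

def required_pixels(field_of_view_mm, smallest_feature_mm, pixels_per_feature=3):
    """Pixels needed along one axis to resolve the smallest feature of interest."""
    return int(round(field_of_view_mm / smallest_feature_mm * pixels_per_feature))

# Example: 100 mm field of view, 0.5 mm defect, 3 pixels across the defect
print(required_pixels(100, 0.5))   # -> 600 pixels along that axis
```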

"Image sensors used by vision systems are highly specialized, and hence more expensive than say, a web cam," says Chalifoux. "First, it is desirable to have square physical pixels. This makes measurement calculations easier and more precise. Second, the cameras can be triggered by the vision system to take a picture based on a part-in-place signal. Third, the cameras have sophisticated exposure and fast electronic shutters that can 'freeze' the motion of most parts as they move down the line." Image sensors are available in many different resolutions and interfaces to suit any application need.
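
The motion-freezing point lends itself to a quick check. The sketch below (illustrative assumptions only) estimates the longest exposure that keeps motion blur under one pixel for a part moving past the camera: the distance one pixel covers on the part divided by the part's speed.

```python
# Illustrative exposure estimate for "freezing" a moving part; example values assumed.

def max_exposure_s(field_of_view_mm, pixels_across_fov, part_speed_mm_per_s,
                   max_blur_pixels=1.0):
    """Longest exposure that keeps motion blur at or below max_blur_pixels."""
    mm_per_pixel = field_of_view_mm / pixels_across_fov
    return max_blur_pixels * mm_per_pixel / part_speed_mm_per_s

# Example: 100 mm field of view on a 1280-pixel-wide sensor, part moving at 500 mm/s
print(f"{max_exposure_s(100, 1280, 500) * 1e6:.0f} microseconds")  # ~156 us
```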
