3 Machine Vision Technology Trends

Feb. 7, 2023
New sensors and the greater incorporation of artificial intelligence are not only improving machine vision capabilities but also making systems easier to implement and operate.

As machine vision systems improve through advances in chip technology, easier-to-use software, and lower costs, IoT Analytics (a provider of market insights and business intelligence) took a look at three specific technology developments it sees as having the biggest impact on machine vision technology and applications today. According to IoT Analytics, users of machine vision technologies should take note of these trends, as they are the main drivers behind machine vision systems’ increasing power and ability to deliver a proven return on investment.

Technology Advance #1: Advanced cameras

Many machine vision cameras now feature resolutions of more than 45 megapixels, allowing them to capture objects at extremely high speed without distortion. Another advance powering new machine vision capabilities is the event-based vision sensor.

According to IoT Analytics, these sensors process images similarly to how the optic nerve in the human eye processes information. More specifically, these event-based vision sensors detect changes in brightness of each pixel. This capability enables machine vision to be used in much darker environments than traditional frame-based vision sensors, in which a complete image is output at intervals determined by the frame rate, according to Sony, a supplier of machine vision sensors. 
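The per-pixel behavior Sony describes can be illustrated in a few lines. The sketch below is purely illustrative (real event-based sensors such as the IMX636 do this asynchronously in hardware, not frame by frame); the function name and threshold are assumptions, not any vendor's API:

```python
import numpy as np

def frame_to_events(prev_frame, curr_frame, threshold=0.2):
    """Emit (row, col, polarity) events where log brightness changed enough.

    polarity +1 = pixel got brighter, -1 = pixel got darker.
    Unchanged pixels produce no output at all, which is one reason event
    sensors cope well with fast motion and low-light scenes: only relative
    brightness changes matter, not absolute intensity.
    """
    # Event cameras respond to relative (log-domain) brightness change.
    delta = np.log1p(curr_frame.astype(float)) - np.log1p(prev_frame.astype(float))
    rows, cols = np.nonzero(np.abs(delta) > threshold)
    return [(r, c, 1 if delta[r, c] > 0 else -1) for r, c in zip(rows, cols)]

prev = np.zeros((4, 4), dtype=np.uint8)   # dark scene
curr = prev.copy()
curr[1, 2] = 200                          # a single pixel brightens sharply
events = frame_to_events(prev, curr)      # only that pixel fires an event
```

Note that the output is a sparse list of events rather than a full image, which is the core contrast with frame-based sensors that output every pixel at a fixed frame rate.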


Sony Semiconductor Solutions Corporation recently released two types of stacked event-based vision sensors designed for industrial use. The IMX636 and IMX637 sensors feature low power consumption and deliver high-speed, low-latency, high-temporal-resolution data output. According to the company, these sensors have also delivered the industry’s smallest pixel size of 4.86μm. 

Technology Advance #2: Artificial intelligence

The incorporation of artificial intelligence (AI) into machine vision applications has been one of the prime accelerators of industrial machine vision technology over the past few years.

Whereas rule-based machine vision proved useful in identifying quantifiable, clear, and very specific characteristics to answer yes or no questions (e.g., presence or absence), AI-based machine vision “can provide accurate results for non-quantifiable characteristics, discern defects in a wider range of backgrounds and lighting settings, and work flexibly with variations in product appearance and types of defects (e.g., dents or discoloration),” according to IoT Analytics.
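The rule-based approach IoT Analytics describes can be made concrete with a minimal sketch. The function, region of interest, and threshold below are hypothetical, chosen only to show the kind of hand-tuned yes/no check that AI-based vision replaces:

```python
import numpy as np

def part_present(gray_image, roi, min_mean_brightness=60):
    """Rule-based presence/absence check inside a region of interest.

    A bright part against a dark conveyor raises the mean intensity of
    the ROI above a hand-tuned threshold. This answers exactly one
    yes/no question and works only while part, background, and lighting
    stay tightly controlled -- the limitation AI-based vision relaxes.
    """
    top, left, bottom, right = roi
    return float(gray_image[top:bottom, left:right].mean()) >= min_mean_brightness

belt = np.full((100, 100), 10, dtype=np.uint8)   # dark, empty conveyor
empty = part_present(belt, roi=(40, 40, 60, 60)) # False: no part in view
belt[45:55, 45:55] = 220                         # bright part enters the ROI
found = part_present(belt, roi=(40, 40, 60, 60)) # True: threshold exceeded
```

A dent, a discoloration, or a change in ambient lighting would require a new hand-written rule here, whereas a trained model generalizes across such variations.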

An example of this can be seen in the work being done by Neurala, an AI technology company, and Flir Systems, a well-known supplier of imaging cameras and sensors, to deliver an AI-based industrial imaging system.


According to Flir and Neurala, this new imaging system allows users to “rapidly create deep learning models using Neurala’s Brain Builder on the VIA platform with little data and no AI expertise. These models can be directly uploaded to a Flir Firefly DL camera using the free Flir Spinnaker software development kit.”

Because the models can be deployed directly onto the Flir Firefly DL camera, the companies claim an intelligent, automated inspection point can be placed practically anywhere in-line and quickly reconfigured for new applications.

Technology Advance #3: Automated training

This advance could be included under advance #2 above, as it is also a result of AI. However, it deserves a separate category because it improves not only the capability of the camera but also the experience for the user. In this case, we’re talking about the incorporation of deep learning AI into machine vision to “train” cameras faster than ever before.

Not long ago, training a machine vision camera to detect flaws in a part or product required presenting the vision system with hundreds of images of both acceptable and flawed products before it could effectively tell the difference. New embedded hardware, such as the Neon-2000-JNX series from AdLink Technology with the Nvidia Jetson Xavier NX module built in, can process images and run AI-based computer vision algorithms on the device itself. This has reduced vision system training times from weeks to hours.

Rather than having the machine vision system rely on the rules created by an expert, AI-powered machine vision software can learn which aspects are important on its own and create rules that determine the combinations of features that define quality products.
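The idea of the software learning its own inspection rule can be sketched with a toy classifier. Everything below is synthetic and illustrative (the features, data, and a simple perceptron stand in for the deep learning models the article describes); the learned weights play the role of the rule an expert would otherwise handcraft:

```python
import numpy as np

rng = np.random.default_rng(0)
# Synthetic training data: column 0 = surface brightness, column 1 = dent depth.
good = rng.normal(loc=[0.8, 0.1], scale=0.05, size=(50, 2))
bad = rng.normal(loc=[0.5, 0.6], scale=0.05, size=(50, 2))
X = np.vstack([good, bad])
y = np.array([1] * 50 + [0] * 50)        # 1 = acceptable, 0 = defective

# Perceptron training loop: no thresholds are hand-coded anywhere --
# the learned weights and bias ARE the inspection rule.
w, b = np.zeros(2), 0.0
for _ in range(20):
    for xi, yi in zip(X, y):
        pred = 1 if xi @ w + b > 0 else 0
        w += 0.1 * (yi - pred) * xi
        b += 0.1 * (yi - pred)

def inspect(features):
    """Apply the learned rule to a new part's feature vector."""
    return "pass" if features @ w + b > 0 else "fail"
```

Swapping in images of fruit, airplane parts, or ventilator valves means collecting new labeled data and retraining, exactly as Gorchet describes below, rather than rewriting rules by hand.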

“With neural network learning algorithms, users no longer need to handcraft a machine vision model for every production scenario,” says Anatoli Gorchet, co-founder and chief technology officer at Neurala. “They just need to collect the proper data—whether it’s for fruits, airplane parts, or ventilator valves—and train the model with it.”
