How Artificial Intelligence Works in Quality Control

The use of artificial intelligence has been growing across the industrial automation space for several years now. But do you really understand how it works to affect outcomes?

Few areas of industrial technology today remain untouched by artificial intelligence (AI). From controllers to ERP to food safety and robots, AI is changing the technologies we use to run manufacturing and processing facilities in subtle and not-so-subtle ways.

One application with big potential to benefit from AI is quality control. The use of smart cameras and related AI-enabled software is helping manufacturers achieve improved quality inspection at speeds, latencies, and costs beyond the reach of human inspectors. And the timing of the arrival of these smart camera technologies is fortuitous, given the social distancing requirements of COVID-19.

Of course, manufacturers have been using machine vision in quality applications for many years now. But the addition of deep learning-enabled quality control software represents a departure from earlier machine vision technologies.

To help understand how AI is changing machine vision, Anatoli Gorchet, co-founder and chief technology officer at Neurala (a supplier of AI vision software), explains how traditional industrial inspection with machine vision works.


The first step involves an expert deciding which features (such as edges, curves, corners, color patches, etc.) in images captured by a camera are relevant to the inspection. Then, the expert creates a rule-based system that details, for example, how much “yellow” and “curvature” classify an object as a “ripe banana” in a packaging line. The resulting system, based on the expert’s input, automatically decides if the product is what it is expected to be.
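The rule-based approach described above can be sketched in a few lines. This is a minimal illustration only: the feature names, thresholds, and stubbed values are hypothetical, not taken from any real inspection system.

```python
def extract_features(image):
    """Stand-in for hand-crafted feature extraction
    (edge detection, color histograms, curvature estimates, etc.)."""
    return {"yellow_fraction": 0.82, "curvature": 0.65}  # stubbed values

def is_ripe_banana(features):
    # Expert-authored rule: an object counts as a "ripe banana"
    # only if it is sufficiently yellow and sufficiently curved.
    return features["yellow_fraction"] > 0.7 and features["curvature"] > 0.5

features = extract_features(None)  # image capture stubbed out
print(is_ripe_banana(features))    # → True
```

The key point is that every threshold here is chosen by a human expert, which is exactly what makes the approach brittle when defects are subtle or variable.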

Though this method has been very effective, there are cases in which it falls short. “For example, instances where the difference between good and bad products is highly qualitative, subtle, or variable can be hard to detect,” said Gorchet.

This is where AI comes into the picture. Rather than having the machine vision system rely on the rules created by the expert, the AI-powered software can learn which aspects are important on its own and create rules that determine the combinations of features that define quality products.

“With neural network learning algorithms, users no longer need to handcraft a machine vision model for every production scenario,” said Gorchet. “They just need to collect the proper data—whether it’s for fruits, airplane parts, or ventilator valves—and train the model with it.”

The type of AI model Gorchet is referring to here is known as “deep learning.” These deep learning systems, such as deep neural networks (DNNs), are trained in a supervised fashion to recognize specific classes of things. In a typical inspection task, a DNN might be trained to visually recognize, for example, a ventilator valve, based on pictures of good and bad ventilator valves. 

“Once these pictures are collected, a typical deep learning system has a training regimen that, when fed a good quantity and variety of data, trains a model that ends up being really good at coming up with precise, low error, confident classifications,” said Gorchet.
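The supervised training regimen Gorchet describes can be illustrated with a toy example. A real system would train a deep convolutional network on labeled images; here, to keep the loop visible, a single logistic unit is trained on hand-made two-number “feature vectors” standing in for good and bad valves. All data and hyperparameters are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
# Synthetic labeled dataset: each row is a feature vector, label 1 = good valve.
X = np.array([[0.9, 0.8], [0.85, 0.9], [0.2, 0.1], [0.15, 0.3]])
y = np.array([1, 1, 0, 0])

w = rng.normal(size=2)  # trainable weights
b = 0.0
lr = 0.5
for _ in range(500):                       # gradient-descent training loop
    p = 1.0 / (1.0 + np.exp(-(X @ w + b)))  # sigmoid "confidence" per sample
    grad = p - y                            # cross-entropy gradient w.r.t. logits
    w -= lr * X.T @ grad / len(y)
    b -= lr * grad.mean()

preds = (1.0 / (1.0 + np.exp(-(X @ w + b))) > 0.5).astype(int)
print(preds.tolist())  # → [1, 1, 0, 0]
```

After training, the model classifies its examples correctly and confidently, which is the property Gorchet highlights, but notice that it needed labeled examples of both good and bad parts.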

Of course, if the line switches to a different part or product, the data collection, training, and deployment must be conducted again to develop a new model. 

To streamline this process, a new type of DNN is being explored for industrial quality inspections. These DNNs are known as “continual” or “lifelong” learning DNNs (L-DNNs). These L-DNNs, according to Gorchet, separate feature training and rule training to add new rule information on the fly. 

“Like conventional DNNs, they need a slow learning of features based on a large balanced set of data—which includes equal amounts of images of good valves as well as every possible type of defective valve; but unlike conventional DNNs they do not include rule learning at this stage and therefore do not require images of all known valve defects,” he said. “In fact, the images do not even need to be of valves as long as they possess similar features: curves, edges, surface properties. This data set can be quite generic and does not have to be industry-specific. This means that the model creation can be done once by the L-DNN provider and does not need to concern the manufacturers at all.” (Editor’s note: Neurala is a supplier of L-DNN technology.)
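The feature/rule separation Gorchet describes can be sketched as a frozen, generically trained feature extractor paired with a lightweight head that learns new classes on the fly. The actual L-DNN internals are proprietary; every name, weight, and vector below is a hypothetical stand-in used only to show the general idea.

```python
import numpy as np

def frozen_features(image_vec):
    """Stand-in for a feature backbone trained once, offline, on a large
    generic dataset (curves, edges, surfaces) by the L-DNN provider.
    Its weights are fixed and never retrained by the manufacturer."""
    W = np.array([[1.0, 0.0], [0.0, 1.0], [0.5, 0.5]])  # fixed weights
    return np.tanh(W @ image_vec)

class PrototypeHead:
    """Fast 'rule learning': store one prototype per class and classify by
    nearest prototype. A new class is added without touching the backbone."""
    def __init__(self):
        self.prototypes = {}

    def learn(self, label, examples):
        feats = np.stack([frozen_features(x) for x in examples])
        self.prototypes[label] = feats.mean(axis=0)

    def classify(self, x):
        f = frozen_features(x)
        return min(self.prototypes,
                   key=lambda lbl: np.linalg.norm(f - self.prototypes[lbl]))

head = PrototypeHead()
head.learn("valve_A", [np.array([0.9, 0.1]), np.array([0.8, 0.2])])
head.learn("valve_B", [np.array([0.1, 0.9]), np.array([0.2, 0.8])])
print(head.classify(np.array([0.85, 0.15])))  # → valve_A
```

Because only the small prototype table changes when production switches parts, retraining is fast and needs very little new data.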

This means that manufacturers only need a small set of images of good valves for the system to learn a set of rules for what a good valve is. L-DNNs, explained Gorchet, can learn on a single presentation of a small dataset using only good data and then advise the user when an atypical product is encountered. “A training regimen of an L-DNN can go over a set of tens of images, build a prototypical understanding of the object, and be ready to be deployed and reconfigured if and when production changes,” he said.
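The good-data-only workflow can be sketched as follows: build a prototype from a small set of good examples, then flag any product whose features fall too far from that prototype. The feature vectors and the threshold rule are hypothetical illustrations, not Neurala's actual method.

```python
import numpy as np

good_samples = np.array([
    [0.90, 0.10], [0.88, 0.12], [0.92, 0.09], [0.91, 0.11],
])  # feature vectors of known-good valves (tens of images in practice)

# "Prototypical understanding" of the object: the mean of the good set.
prototype = good_samples.mean(axis=0)

# Tolerance derived from variation within the good set, plus a margin.
threshold = 3.0 * np.linalg.norm(good_samples - prototype, axis=1).max()

def is_atypical(sample):
    """Advise the user when a product does not resemble the good prototype."""
    return np.linalg.norm(sample - prototype) > threshold

print(is_atypical(np.array([0.89, 0.11])))  # typical good valve → False
print(is_atypical(np.array([0.20, 0.70])))  # defect-like sample  → True
```

Note that no defective examples were needed at any point: anything sufficiently unlike the good prototype is flagged, which is why this style of system can be retrained so quickly when production changes.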