This article originally appeared on June 5, 2019.
As manufacturers across the globe increasingly automate and connect their production operations, quality has become a key differentiator among products and a cornerstone of companies’ brand reputations. This reality is leading more companies to explore advanced technologies to improve their quality processes. The technology most prevalent in this area today is machine learning.
We recently covered Frito-Lay’s use of machine learning to test the quality of its chips and streamline its potato-weighing processes. Now we’ve learned that Motorola is applying the technology to its mobile phone production operations.
Motorola is using machine learning software from Instrumental to discover design and production issues faster, strengthen quality control on the line, and streamline its issue response to deliver new products on demanding schedules.
The role of vision
In January 2018, Motorola began working with Instrumental by first identifying a handful of mobile phone assembly states that highlighted all of the key components of the phone as it was built. With this information in hand, Instrumental built and deployed inspection stations consisting of cameras, tunable lighting, and customized fixtures in less than three weeks.
Explaining the use of cameras in these inspection stations, Anna-Katrina Shedletsky, CEO of Instrumental, said, “Vision in industry is used very specifically, for example, to measure a gap; but this is a different use of vision. Because we were going where there wasn’t any [pre-existing] vision system, we built a low-cost station using a 20-megapixel FLIR camera with no IP [intellectual property] in it and integrated it with our software to work as a test station on the line. We use the cameras to scan the bar code and take pictures while the software does the analysis in real time to give a pass/fail result.”
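Instrumental’s actual analysis is proprietary, but the station’s role—score an image in real time and return pass/fail—can be sketched in a few lines. In this hypothetical example, each image arrives as a flat list of grayscale pixel values and is compared against a known-good “golden” image; the function names and threshold are illustrative, not Instrumental’s API.

```python
# Minimal sketch of a real-time pass/fail image check (hypothetical).
# Each image is a flat list of grayscale pixel values (0-255).

def mean_abs_diff(image, golden):
    """Average per-pixel deviation from a known-good 'golden' image."""
    return sum(abs(a - b) for a, b in zip(image, golden)) / len(golden)

def inspect(image, golden, threshold=10.0):
    """Return 'pass' or 'fail' for one unit, keyed to its scanned bar code."""
    return "pass" if mean_abs_diff(image, golden) <= threshold else "fail"
```

A unit whose image deviates only slightly from the golden reference passes; a unit with gross visual differences fails and can be flagged on the line.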
Another distinguishing aspect of Instrumental’s use of vision technology is that, as soon as images were collected, they were uploaded to Instrumental’s database and made available in the Instrumental web application to Motorola engineers around the world. According to Instrumental, this complete data record is a key differentiator between Instrumental and traditional industrial vision systems, in which applications must be incredibly specific and the data remains trapped on the local machine in the factory, unavailable to the team.
Dark blotches on the PCB
One of the initial production operations to which Motorola applied Instrumental’s software was inspection of the mobile phones’ printed circuit boards (PCBs). The industry standard for PCB inspection is automatic optical inspection (AOI) systems, which compare an image of a circuit board to the digital CAD file to make sure that each part is present. Shedletsky said one limitation of AOI is that it does not find new defects or damage and cannot analyze PCBs that have undergone additional assembly steps, something that is incredibly common in modern miniaturized devices.
In this first application of Instrumental’s machine learning algorithms, dark blotches were detected on empty areas of a subset of the PCBs. When engineers examined these blotches, they determined that they aligned with buried vias (areas that connect two or more inner layers of the PCB) that make up the board circuitry itself. Armed with this insight, the engineers investigated further and found that the boards were thicker than specified—which would have created serious problems with their critical tolerance stack-ups (calculations of the maximum and minimum distance between two features or parts) and could have been very difficult to track down as a source of the problem.
“These PCBs had gone through AOI and had passed—even though the blotches were clearly visible,” said Shedletsky. “Using Instrumental data, it was easy to see that the defective PCBs were all from one vendor, enabling the Motorola team to work with their supplier to correct the issue quickly.”
Shedletsky said that once the first 30 units from the build were completed, Motorola engineers could use Instrumental’s machine-learning algorithms to find new defects they weren’t previously aware of. Once a defect is found, every subsequent unit can be set up to test for the same failure mode. Failures are then automatically sorted into collections, where defect rates and trends are calculated in real time.
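The sorting-and-tracking step described above can be sketched as a simple tally: each inspected unit is logged, failures are grouped into named collections by failure mode, and a running defect rate is computed per mode. This is a hypothetical illustration—the class, field names, and schema are assumptions, not Instrumental’s actual data model.

```python
from collections import defaultdict

# Hypothetical sketch: sort failed units into named defect collections
# and track a running defect rate per failure mode.

class DefectTracker:
    def __init__(self):
        self.total_units = 0
        self.collections = defaultdict(list)  # failure mode -> unit serials

    def record(self, serial, failure_mode=None):
        """Log one inspected unit; failure_mode is None when it passes."""
        self.total_units += 1
        if failure_mode is not None:
            self.collections[failure_mode].append(serial)

    def defect_rate(self, failure_mode):
        """Fraction of all inspected units showing this failure mode."""
        return len(self.collections[failure_mode]) / self.total_units
```

Because the rate is recomputed from the running totals on every query, trends can be read off the line as units flow through, which is the “real time” behavior the article describes.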
“The machine-learning methods that drive Instrumental technology also enable each Monitor [Instrumental’s software app] to learn the difference between a typical and an anomalous unit. This makes it possible to set up tests that can find unforeseen defects automatically—something traditional industrial vision systems cannot do,” she added.
Explaining Monitor, Shedletsky described it as a software application that lets an engineer set up a test that initially runs only in the cloud. “When Monitor validates that it’s catching the things the engineers want to identify, we have a drop-down selection that will push the self-programmed algorithm into a product’s recipe at the edge,” she said. “Training of the algorithm takes place in the cloud, with compute done at the edge on a normal graphics card in an off-the-shelf computer.”
The self-programmed algorithm capability is a key feature of Instrumental’s machine learning technology: it removes the need for a company to employ a data scientist to apply machine learning to its production operations.
“We’ve designed the software so that an engineer provides the expertise—looking at a part to determine what is defective. The engineer then visits a web app to view images and sort or filter them by key parameters or places along the line,” Shedletsky said. “Engineers tell the system where to look and then the algorithms run and return a stack rank from most anomalous to least anomalous. Users of the software can then draw a threshold between what’s bad or good and give it a name; for example, shifted part, tilted switch, etc.”
She added that Instrumental’s algorithms can find known issues as well as unknown ones. “We don’t need a failing example to set up a test—you can build a test based on all good products, so that result becomes the baseline for what to look for,” Shedletsky explained. “We can start with as few as 30 images, whereas most machine-learning systems usually require thousands or tens of thousands of images.”
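The good-units-only workflow Shedletsky describes can be sketched end to end: build a baseline from a small set of known-good images, score each new unit by its deviation from that baseline, and return a stack rank from most to least anomalous for an engineer to threshold and name. Real systems operate on learned image features; in this hypothetical sketch, raw pixel lists and a per-pixel mean stand in for them.

```python
from statistics import mean

# Hypothetical sketch of anomaly ranking from good examples only.

def build_baseline(good_images):
    """Per-pixel mean over a small set (e.g., ~30) of known-good images."""
    return [mean(pixels) for pixels in zip(*good_images)]

def anomaly_score(image, baseline):
    """Average absolute deviation from the good-unit baseline."""
    return sum(abs(p - b) for p, b in zip(image, baseline)) / len(baseline)

def stack_rank(units, baseline):
    """Sort (serial, image) pairs from most anomalous to least anomalous."""
    scored = [(serial, anomaly_score(img, baseline)) for serial, img in units]
    return sorted(scored, key=lambda pair: pair[1], reverse=True)
```

An engineer reviewing the ranked list can then draw a threshold over the scores and give everything above it a name (“shifted part,” “tilted switch”), mirroring the workflow described earlier in the article.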