The application of artificial intelligence (AI) in automation technologies already spans a wide scope, even though the technology is still very much in its early stages. From guiding autonomous mobile robots and dramatically improving quality inspection to control logic, food safety, and predictive maintenance, it’s clear that AI will be playing a critical role in automation for the foreseeable future.
If you haven’t started applying AI to any of your operations, smart cameras can be a good way to begin using AI-enabled automation. Chia-Wei Yang, director of the Business and Product Center, IoT Solution and Technology Business Unit at AdLink, a supplier of automation technologies ranging from edge computing and industrial PCs to machine vision and network appliances, suggests three key applications for AI-enabled smart cameras: safety, operator efficiency, and quality.
When it comes to worker safety in industrial environments, light curtains are among the most frequently used technologies. These devices protect personnel from injury by creating a sensing screen that guards machine access points and perimeters.
“However, they occupy a lot of floor space, can be difficult to deploy, and lack flexibility,” said Yang. “And, in some instances, the safety light curtain’s limited response time may create additional issues.”
Conventional machine vision systems using IP cameras and AI modules also often have considerable latency issues, rendering them less than ideal in applications requiring an immediate response.
AdLink's Neon-2000 series AI smart camera has been designed to address this latency problem. “It captures images and performs all AI-related operations before sending results and instructions to related equipment, such as a robotic arm,” said Yang. “The real-time machine vision AI of the Neon-2000 series offers additional benefits to augment worker safety by alerting users if they enter an unsafe zone and logging that information for retraining purposes. For example, if a worker approaches a hazardous area, instead of the robotic arm shutting down completely, it could go into a functional safety process loop. Routines such as these not only improve worker safety but also increase the factory's operating efficiency.”
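The graceful-degradation behavior Yang describes can be sketched in a few lines. The zone coordinates, class label, and state names below are hypothetical stand-ins, not AdLink’s actual API; the point is that an intrusion detection switches the arm into a reduced-speed safety loop and logs the event for retraining, rather than triggering a full stop.

```python
from dataclasses import dataclass
from enum import Enum
import time

class RobotState(Enum):
    RUN = "run"
    SAFE_SLOW = "safe_slow"  # functional safety loop at reduced speed
    STOP = "stop"

@dataclass
class Detection:
    label: str
    bbox: tuple  # (x1, y1, x2, y2) in pixels

# Hypothetical hazard region near the robot arm, in camera coordinates
HAZARD_ZONE = (400, 0, 640, 480)

def overlaps(bbox, zone):
    """Axis-aligned rectangle intersection test."""
    x1, y1, x2, y2 = bbox
    zx1, zy1, zx2, zy2 = zone
    return x1 < zx2 and zx1 < x2 and y1 < zy2 and zy1 < y2

def update_state(detections, intrusion_log):
    """Pick this frame's robot state; log intrusions for model retraining."""
    for d in detections:
        if d.label == "person" and overlaps(d.bbox, HAZARD_ZONE):
            intrusion_log.append({"time": time.time(), "bbox": d.bbox})
            return RobotState.SAFE_SLOW  # degrade gracefully, don't halt
    return RobotState.RUN
```

Because the camera runs inference on-device, a loop like this can react within a frame time instead of waiting on a round trip to a remote AI module.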
In manufacturing, cycle time is a key measure of production efficiency, as it represents the time required to produce an item, from start until the product is ready for shipment.
Yang said that using AI smart camera technology to monitor employee behavior and position helps enforce standard operating procedures and improve worker efficiency, thereby reducing cycle time. Often referred to as “pose tracking” or “pose detection,” this technique annotates a body’s position and movement with a set of skeletal landmark points, such as the hand, elbow, or shoulder.
“Pose detection from live video enables the overlay of digital content and information on top of the analog world,” said Yang. “AI machine vision enables factory operators and workers to focus on how physical positions affect their work. Pose data is a great training tool for guidance on where operators should place their arms and hands to work more ergonomically and efficiently.”
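Once a pose model has produced those skeletal landmark points, ergonomic guidance reduces to simple geometry on the landmarks. A minimal sketch, assuming 2D landmark coordinates from any pose model and a hypothetical comfort band for the elbow angle:

```python
import math

def joint_angle(a, b, c):
    """Angle in degrees at joint b, formed by landmarks a-b-c
    (e.g. shoulder-elbow-wrist), each an (x, y) point."""
    v1 = (a[0] - b[0], a[1] - b[1])
    v2 = (c[0] - b[0], c[1] - b[1])
    dot = v1[0] * v2[0] + v1[1] * v2[1]
    return math.degrees(math.acos(dot / (math.hypot(*v1) * math.hypot(*v2))))

def ergonomic_flag(shoulder, elbow, wrist, lo=60.0, hi=120.0):
    """Return (is_ok, angle). The 60-120 degree band is an assumed
    comfort range for illustration, not an ergonomics standard."""
    angle = joint_angle(shoulder, elbow, wrist)
    return lo <= angle <= hi, angle
```

The same landmark stream can drive on-screen overlays showing where an operator’s arms should be for a given task.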
He added that tracking whether an operator is present at their workstation on the production line also automates and verifies timesheets. “Monitoring that they are actively following the standard operating procedures ensures quality control and line balancing,” Yang noted.
Manual product quality inspection is time-consuming, often inconsistent, and can ultimately create bottlenecks in the production line. Conventional automated optical inspection (AOI) machine vision can detect easy-to-find defects faster than humans, but when a fault is difficult to detect—such as a flaw on a contact lens—these machine vision systems reach their limits in terms of accuracy and consistency, said Yang.
“Because contact lenses are transparent, implementing machine vision-based detection has historically been a challenge for the industry,” he explained. “Conventional AOI relies on fixed geometric algorithms to discover defects, but acquiring quality images from transparent objects is challenging, which results in unacceptable detection performance. Collecting data using AI smart cameras to train the AI algorithms and iterate on inspection performance gains is a better approach. The AI smart system can identify the most common defects, including burrs, bubbles, edges, particles, scratches, and more, as well as maintain inspection logs for customer reference.”
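The inspect-and-log workflow Yang outlines can be sketched as a per-frame inference loop. The `classify_fn` below is a stand-in for the trained model’s inference call (the defect class names come from the article; everything else is assumed for illustration):

```python
import csv
import time

# Defect classes named in the article, plus an "ok" pass result
DEFECT_CLASSES = ["ok", "burr", "bubble", "edge", "particle", "scratch"]

def inspect(frames, classify_fn, log_path="inspection_log.csv"):
    """Run inference on each lens image, write an inspection log for
    customer reference, and return the IDs of rejected frames."""
    rejects = []
    with open(log_path, "w", newline="") as f:
        writer = csv.writer(f)
        writer.writerow(["timestamp", "frame_id", "result"])
        for frame_id, frame in enumerate(frames):
            result = classify_fn(frame)  # stand-in for the CNN inference
            writer.writerow([time.time(), frame_id, result])
            if result != "ok":
                rejects.append(frame_id)
    return rejects
```

In a real deployment the classifier would be retrained on the logged images, which is the iteration loop Yang describes for improving inspection performance over time.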
Yang noted that AI smart cameras can inspect 50x more lenses than manual visual inspection, with accuracy improvements ranging from 30% to 95%.
According to Yang, the AI machine vision applications he noted above require AI algorithms for deep learning. “The software experts that develop AI algorithms need a smart, reliable platform for executing AI model inferencing,” he said. “AI smart cameras with pre-installed edge vision analytics (EVA) software address many issues common to conventional AI vision systems, improve compatibility, speed up installation, and minimize maintenance issues.”
He added that EVA software also helps shorten smart camera deployment time.
“It may take engineers as long as 12 weeks to conduct a proof of concept (PoC) for an AI vision project, because it takes considerable time to overcome the learning curve of choosing optimized cameras and the AI inference engine to be used, retraining AI models, and optimizing video streams,” he said. “However, EVA software simplifies these steps with its pipeline structure and shortens the PoC time by up to 2 weeks.”
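The pipeline structure Yang credits for the shorter PoC time can be illustrated with a toy stage-chaining abstraction. This is not EVA’s actual API—just a sketch of why the pattern speeds iteration: each stage (camera, preprocessing, inference engine) can be swapped without touching the others.

```python
from typing import Any, Callable

class Pipeline:
    """Minimal stage-based vision pipeline, in the spirit of a
    capture -> preprocess -> infer -> publish chain."""
    def __init__(self):
        self.stages: list = []

    def add(self, stage: Callable[[Any], Any]) -> "Pipeline":
        self.stages.append(stage)
        return self  # return self to allow fluent chaining

    def run(self, frame: Any) -> Any:
        for stage in self.stages:
            frame = stage(frame)
        return frame

# Hypothetical stand-in stages; a PoC would replace either one
# (say, to try a different inference engine) without rewriting the rest.
p = (Pipeline()
     .add(lambda img: img.lower())        # stand-in for preprocessing
     .add(lambda img: f"result:{img}"))   # stand-in for inference
```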