Using AI for Vision-Guided Robot Deployments

Jan. 3, 2024
Apera AI focuses on using artificial intelligence to provide control over robotic gripping, pick points and simulation during commissioning.

While discussions over the value of large language model artificial intelligence (AI) technologies are ongoing, one area where AI has been delivering significant improvements in productivity and ease of use is vision-guided robotics.

Apera AI, a supplier of 4D vision technology for robots, says its customers want more control over how AI is implemented in their robotic cells. In response, Apera has added user control over pick points, gripping strategy and simulation tools in an update to its Vue software, making vision-guided robotics easier to use.

The Vue robotic vision AI software trains robots on the parts to be handled using 1 million simulated cycles to reach 99.99% reliability. Apera says the AI in Vue learns to understand the parts completely, so the robot can take the fastest, safest and most reliable path in and out of each movement to handle the parts while avoiding collisions within the operating area.
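Apera does not describe how those simulated cycles are generated or scored. The sketch below is only a generic illustration of how a large number of simulated pick attempts can be used to estimate a reliability figure such as 99.99%; the success probability and the simulation itself are assumptions, not Apera's method.

```python
import random

def simulate_pick_cycle() -> bool:
    """Placeholder for one simulated pick attempt.

    Apera does not publish its simulation internals; here a pick
    simply succeeds with an assumed probability for illustration.
    """
    return random.random() < 0.9999

def estimate_reliability(cycles: int = 1_000_000) -> float:
    """Estimate pick reliability from many simulated cycles."""
    successes = sum(simulate_pick_cycle() for _ in range(cycles))
    return successes / cycles

if __name__ == "__main__":
    # At 99.99% reliability, 1 million cycles corresponds to roughly
    # 100 failed picks.
    print(f"Estimated reliability: {estimate_reliability():.4%}")
```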

The underlying technology of the Vue software is Apera AI’s proprietary artificial intelligence, called 4D Vision, which is applied to the company's computer vision technology. 4D Vision can identify objects in 3D space and provide robotic guidance, even when objects are highly disordered. According to Apera AI, 4D Vision is “human-like sight for manufacturing. The general rule is this: If a human can see and determine an object's positioning in comparison to the environment and other objects, so can an Apera vision system.”

Standard 2D cameras are used to capture the working area of the robot. This means continual improvements can be deployed via software, rather than being locked into the design of specialized 3D cameras.
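Apera does not disclose how Vue derives 3D information from those 2D images, but classical stereo triangulation illustrates the general principle: given two calibrated cameras and a matched image point in each view, the point's 3D position can be recovered. The minimal sketch below uses OpenCV's cv2.triangulatePoints; the intrinsics, camera poses and pixel coordinates are invented for illustration and do not describe any real Apera setup.

```python
import numpy as np
import cv2

# Illustrative camera intrinsics (focal length and principal point in pixels).
K = np.array([[800.0,   0.0, 320.0],
              [  0.0, 800.0, 240.0],
              [  0.0,   0.0,   1.0]])

# Camera 1 at the origin, camera 2 translated 0.1 m along x (assumed baseline).
P1 = K @ np.hstack([np.eye(3), np.zeros((3, 1))])
P2 = K @ np.hstack([np.eye(3), np.array([[-0.1], [0.0], [0.0]])])

# The same part feature observed in both images, as 2xN pixel arrays.
pts1 = np.array([[340.0], [250.0]])
pts2 = np.array([[300.0], [250.0]])

# Triangulate to homogeneous coordinates, then normalize to get x, y, z.
point_h = cv2.triangulatePoints(P1, P2, pts1, pts2)
point_3d = (point_h[:3] / point_h[3]).ravel()
print("Estimated 3D position (m):", point_3d)
```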

According to Apera AI, robotic cells using AI-powered vision are now “hitting human-like levels of object recognition and productivity” because the software can recognize objects in randomized bins and provide complete path planning to the robot at the same speed at which a person can make the decision: 0.3 seconds.

“We have made significant leaps in the past six months that make AI-powered vision easier to use,” said Sina Afrooze, CEO and co-founder of Apera AI. “Engineers can choose pick points on the part and define clearances around the robot and gripper. Custom end-of-arm tools can be incorporated into the vision program. And these adjustments can now be tested in a complete simulation using the fully built cell before being moved into production.”
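Vue's actual interface is not public, so the sketch below is purely hypothetical: the class and field names (PickPoint, EndOfArmTool, CellDefinition, simulate_cell) are invented to illustrate the kind of cell definition Afrooze describes, with engineer-chosen pick points, clearances around the gripper, a custom end-of-arm tool and a simulation pass before production.

```python
from dataclasses import dataclass, field

# All names below are hypothetical illustrations of the workflow described
# in the article; they are not Apera's Vue API.

@dataclass
class PickPoint:
    name: str
    xyz_mm: tuple[float, float, float]            # position on the part
    approach_vector: tuple[float, float, float]   # approach direction

@dataclass
class EndOfArmTool:
    name: str
    mesh_file: str        # CAD model of the custom gripper
    clearance_mm: float   # keep-out distance around tool and robot

@dataclass
class CellDefinition:
    part_model: str
    pick_points: list[PickPoint] = field(default_factory=list)
    tool: EndOfArmTool | None = None

def simulate_cell(cell: CellDefinition, cycles: int = 1000) -> bool:
    """Stand-in for a full-cell simulation run before production.

    A real system would check reachability and collisions for every
    pick point; here we only verify the cell is fully specified.
    """
    return bool(cell.pick_points) and cell.tool is not None and cycles > 0

cell = CellDefinition(
    part_model="bracket_v2.step",
    pick_points=[PickPoint("rim", (42.0, 10.0, 5.0), (0.0, 0.0, 1.0))],
    tool=EndOfArmTool("two_finger_gripper", "gripper.stl", clearance_mm=15.0),
)
print("Ready for production:", simulate_cell(cell))
```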
