How AI Is Making Robots More Autonomous and Adaptable

May 14, 2024
Teradyne Robotics’ work with Nvidia is expected to bring about an inflection point in technology that will enable robots to operate more precisely in unstructured and changing environments and scale new robotic applications at companies of all sizes.

Two things that have been hard to miss lately in industrial automation are artificial intelligence (AI) and Nvidia. Not surprisingly, the two are closely connected: Nvidia has become tightly linked with emerging AI technologies as it grows beyond its GPU (graphics processing unit) roots and extends that GPU expertise into the digital twin space.

Automation World recently reported on Nvidia’s collaboration with several industrial automation companies such as Aveva, Rockwell Automation, Siemens and Schneider Electric, as well as Teradyne Robotics and its MiR and Universal Robots companies. At the Automate 2024 event in Chicago, we had the opportunity to meet with Ujjwal Kumar, president of Teradyne Robotics, and Gerard Andrews, director of product marketing—robotics at Nvidia, to get insights into Nvidia’s AI-based mobile robot stack, how it’s being applied on MiR’s new MiR1200 Pallet Jack and how Nvidia’s Jetson edge AI module is used with Universal Robots’ cobots and vision systems for quality inspections.

Dealing with differentiation

Speaking as the leader of an advanced robotics company with one of the largest installed bases of collaborative and autonomous mobile robots, Kumar said a long list of industry problems remains that Teradyne is working with Nvidia to help solve. He pointed to autonomous pallet jacks as an example.

Pallets in industrial use have a lot of variation. They have paint and stickers on them as well as chipped or broken wood in places. Testing of automated pallet jacks, however, is typically done on new, near-perfect pallets, which doesn’t reflect the reality of most plant floor conditions. Kumar said industry has largely accepted this, relying on human workers to deal with the pallets that automated pallet jacks haven’t been able to handle.

“But we didn’t want to launch just another autonomous pallet jack,” said Kumar. “We wanted to bring a fully autonomous solution to the customer. But to do that the robot needed advanced cognitive abilities—that's what we work with Nvidia on and that’s how this AI-based pallet detection system allows us to deliver this high level of pallet detection and safety based on how it detects, responds and moves.”

Kumar explained that, before the introduction of Nvidia’s AI capabilities into pallet detection applications, industry had autonomous pallet jack capabilities “with asterisks,” in that it was autonomous “as long as you had perfect conditions for the robot to work in. Now we can say it is autonomous for the real world. We know that pallets will come from around the world and they will be broken, scraped and have many imperfections. But our robots no longer look for the perfect scenario. They will work in imperfect scenarios and in unstructured environments with a lot more variability than what typical robotics solutions are capable of.”

Software stacks

Nvidia’s Andrews explained that Nvidia approaches industry’s automation challenges with three types of computing approaches. “The first is the Edge AI computer which runs on Nvidia’s Jetson platform along with a full stack of software,” he said. “Then we have our big AI in the cloud—our training computer—which most people know Nvidia for. And the third big computing contribution is our simulation computer, which allows for robot performance to be simulated in detail before it’s deployed. We have a saying that a robot lives a thousand lives in simulation before it ever sees the real world, and we believe that's the way for people to have trust in robotic solutions when they’re finally deployed.”

Nvidia also recently announced its Isaac Manipulator software stack for robot arms and Isaac Perceptor software stack for robot 3D vision. Andrews added that Nvidia has launched a project group focused on a multi-modal AI model that takes multi-modal input and generates robot actions for humanoid robots.

The reason Nvidia developed a full software stack for these different kinds of applications is that “we want to understand where the constraints are,” said Andrews. “That allows our customers to take as much or as little of the stack as they need. That’s why we are building things like our AI-based robot manipulator stack and our AI-based mobile robot stack. At the heart of this is the reality that you don't want your solution to work only when everything is perfect. You want it to handle the reasonable variations that you see in real world environments.”

An inflection point

Noting how Nvidia’s simulation platform is changing the speed of robotic application development, Kumar said Teradyne’s investors often ask him when the inflection point will come in the advanced robotics and AI space; that is, when AI will move beyond hype to deliver the real-world capabilities industry needs.

Kumar said he believes this point “will come faster than all the technology adoptions I have seen in two decades of my manufacturing life. The reason for this is that, in the past, any new technology required dedicating a part of the factory—the least risky part—to try out a new technology while being careful not to break anything. Now, most of our customers who are piloting these AI algorithms are doing it in a digital twin in the cloud to do all kinds of testing across millions of different scenarios, which will greatly accelerate adoption. So, this inflection point, as I see it, will come way faster than what the industrial world is used to.”

He pointed to recent examples of applications involving Teradyne and Nvidia that were developed by smaller manufacturers in North Carolina and Missouri and which are now being used in multiple countries. “In the past, only a big company would be able to scale up so quickly. This kind of scaling is what we are enabling.”

In the video below, Andrew Pether of Universal Robots explains how a Universal Robots UR5e cobot, outfitted with a camera and connected to Nvidia’s Jetson edge AI module, can be used to conduct quality inspections on a gearbox assembly.

About the Author

David Greenfield, Editor in Chief

David Greenfield joined Automation World in June 2011, bringing a wealth of industry knowledge and media experience to his position. His contributions can be found in AW’s print and online editions and custom projects. Earlier in his career, David was Editorial Director of Design News at UBM Electronics, and prior to joining UBM, he was Editorial Director of Control Engineering at Reed Business Information, where he also served as Publisher of Manufacturing Business Technology.
