Artificial Intelligence Powers Robotic Inspections

Sept. 11, 2017
Avitas Systems uses AI expertise and the computational power of Nvidia DGX systems to make hazardous industrial inspection safer for the oil and gas, transportation and energy industries.

Finding cracks and corrosion on an oil pipeline or refinery flare stack is no game, and yet it might make sense to turn for help to a graphics processor company that made its name in PC gaming. With artificial intelligence (AI) advances from GPU maker Nvidia, Avitas Systems can make industrial inspections at power plants and refineries considerably safer, not to mention faster and cheaper.

Avitas, a GE Venture focused on inspection services across oil and gas, transportation and energy sectors, is partnering with Nvidia to create inspection services powered by AI-trained drones and robots. Using some of the latest advances in AI computing to optimize the use of robotics, Avitas estimates that the technology could reduce industrial inspection costs by as much as 25 percent, with increased safety and reduced turnaround times.

Nvidia’s latest AI supercomputers are being used to train a variety of robots, including drones, crawlers and autonomous underwater vehicles (AUVs)—all of which can inspect areas that are difficult for humans to access, such as underwater pipelines, smokestacks and power lines. Many of these inspection jobs are also high-risk and expensive for humans; a flare stack, for example, must be shut down for days so it can cool enough to be inspected. A robot, on the other hand, can inspect those flare stacks while they’re still in operation.

The same technology that gives video games their realistic, immersive visuals also supplies what AI needs: the computational power to process huge amounts of data. The AI creates 3D models of an asset that are used to build repeatable inspection paths, enabling the robots to collect still and video images from the same angles and locations on every pass. That repeatability means a wide variety of images can be captured over time and fed into Avitas’ cloud-based platform, where advanced image analytics detect changes and precisely measure defects on an industrial asset, such as cracks and corrosion. The platform can also rate the severity of defects, which are often not visible to the human eye, allowing customers to determine when equipment needs to be replaced and enabling earlier resolution of potential issues.
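
To illustrate the change-detection idea, here is a minimal sketch that compares two inspection images captured from the same pose on different passes. It assumes Python with OpenCV 4.x; the file names, threshold value and detect_changes helper are illustrative stand-ins, not Avitas’ actual analytics pipeline.

```python
# Minimal sketch of viewpoint-aligned change detection between two inspection
# passes. Assumes OpenCV 4.x; names and values are illustrative only.
import cv2


def detect_changes(baseline_path: str, current_path: str, threshold: int = 40):
    """Compare two grayscale images taken from the same pose and return
    bounding boxes of regions that changed between passes."""
    baseline = cv2.imread(baseline_path, cv2.IMREAD_GRAYSCALE)
    current = cv2.imread(current_path, cv2.IMREAD_GRAYSCALE)

    # Light blur suppresses sensor noise before differencing.
    baseline = cv2.GaussianBlur(baseline, (5, 5), 0)
    current = cv2.GaussianBlur(current, (5, 5), 0)

    # Pixel-wise absolute difference highlights areas that changed.
    diff = cv2.absdiff(baseline, current)
    _, mask = cv2.threshold(diff, threshold, 255, cv2.THRESH_BINARY)

    # Group changed pixels into candidate defect regions.
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    return [cv2.boundingRect(c) for c in contours if cv2.contourArea(c) > 50]


# Example: flag regions that differ between last month's and today's pass
# (hypothetical file names).
# regions = detect_changes("flare_stack_2017-08.png", "flare_stack_2017-09.png")
```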

Nvidia DGX-1 and DGX Station systems provide the immense computational power and AI training used for automated defect recognition. Avitas data scientists build convolutional neural networks for image classification and generative adversarial networks (GANs) to minimize the work involved in labeling captured images. The Nvidia systems let Avitas train its software on many different images, across a variety of models, and determine when it is ready to identify defects.
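
As a rough illustration of the image-classification side, the sketch below defines a tiny convolutional network in PyTorch and runs a single training step on random stand-in data. The three defect classes, layer sizes and training step are assumptions for demonstration only and do not reflect Avitas’ actual models.

```python
# Tiny convolutional classifier for defect image patches. PyTorch sketch with
# illustrative classes ('no defect', 'crack', 'corrosion'); not Avitas' models.
import torch
import torch.nn as nn


class DefectClassifier(nn.Module):
    def __init__(self, num_classes: int = 3):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        self.classifier = nn.Sequential(
            nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(32, num_classes)
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.classifier(self.features(x))


# Training would run on a DGX-class GPU; one step shown here with random data.
device = "cuda" if torch.cuda.is_available() else "cpu"
model = DefectClassifier().to(device)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

images = torch.randn(8, 3, 224, 224, device=device)   # stand-in image batch
labels = torch.randint(0, 3, (8,), device=device)      # stand-in defect labels
loss = loss_fn(model(images), labels)
loss.backward()
optimizer.step()
```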

Avitas stores deep learning models in an AI Workbench, an innovation that can process inspection data in real time and retrain the models to adapt to new use cases. “Working with Nvidia allows us to fully commercialize our cutting-edge, self-service AI Workbench, and we look forward to expanding its capabilities using the new Nvidia DGX Stations with Volta,” said Alex Tepper, founder and head of corporate and business development at Avitas. “With our workbench, our engineers can easily create and access new deep learning models that train the software deployed to recognize defects automatically at inspection sites.”

“Avitas Systems is breaking new ground by bringing Nvidia DGX Station beyond the deskside and into the field for the first time,” said Jim McHugh, general manager of DGX systems for Nvidia. “Using our latest DGX systems to help train robots and better predict industrial defects increases worker safety, protects the environment, and leads to substantial cost savings for companies.”

Read more about the AI technology from Nvidia and see an infographic that explains how Avitas is working with the technology.

About the Author

Aaron Hand | Editor-in-Chief, ProFood World

Aaron Hand has three decades of experience in B-to-B publishing with a particular focus on technology. He has been with PMMI Media Group since 2013, much of that time as Executive Editor for Automation World, where he focused on continuous process industries. Prior to joining ProFood World full time in late 2020, Aaron worked as Editor at Large for PMMI Media Group, reporting for all publications on a wide variety of industry developments, including advancements in packaging for consumer products and pharmaceuticals, food and beverage processing, and industrial automation. He took over as Editor-in-Chief of ProFood World in 2021. Aaron holds a B.A. in Journalism from Indiana University and an M.S. in Journalism from the University of Illinois.
