Can Artificial Intelligence Explain Itself?

Jan. 21, 2021
As an end user, you may never have to work with artificial intelligence from a developer’s perspective, but knowing how it works is key to ensuring its successful use. Fortunately, the technology can explain itself.

Though the use of artificial intelligence (AI) is increasing across the industrial automation spectrum, many in industry remain unclear about its applications and benefits. This is not surprising, considering that AI is not something most end users will ever knowingly interact with directly. AI tends to work behind the scenes, processing inputs and actions to streamline the functions of the systems that employ it.

Even though you may not need to learn how to interact with AI, having a basic understanding of how it works will likely be as important as understanding how to set up a Wi-Fi network in your home. We’ve all learned so much about Wi-Fi networks because of how much we depend on them. The same will likely be said of AI in the near future.

A few months ago, Automation World connected with Anatoli Gorchet, co-founder and chief technology officer at Neurala (a supplier of AI vision software), to better understand how AI works in industrial inspection processes using machine vision. Following the publication of Gorchet’s insights, we learned about another AI term that was new to us: explainability.

This term refers to explainable AI, a set of techniques (including software code and a user interface) that creates a human-readable path from a given piece of data to a specific decision. “In essence, these techniques bottle that intuitive understanding that an AI PhD student develops, into an intelligible, replicable process that can be delivered to the end user,” said Max Versace, Neurala’s CEO and co-founder.
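In practice, that human-readable path is often rendered as a heatmap over the input image, showing which regions drove the decision. As a rough sketch of the idea (a generic occlusion-sensitivity technique, not Neurala’s actual implementation), the Python snippet below measures how much a classifier’s confidence drops when each region of an image is masked out:

```python
# Minimal occlusion-sensitivity sketch. This is a generic explainability
# technique offered for illustration, not Neurala's implementation.
import numpy as np

def occlusion_map(image, classify, patch=8):
    """Slide a neutral-gray patch across the image and record how much
    the classifier's confidence drops at each location. Large drops mark
    the regions the model relies on for its decision."""
    base_score = classify(image)
    h, w = image.shape[:2]
    heatmap = np.zeros((h // patch, w // patch))
    for i in range(0, h - patch + 1, patch):
        for j in range(0, w - patch + 1, patch):
            occluded = image.copy()
            occluded[i:i + patch, j:j + patch] = 0.5  # mask with gray
            heatmap[i // patch, j // patch] = base_score - classify(occluded)
    return heatmap
```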

According to Versace, explainability is “critical for debugging. No matter how explainable AI turns out to be, nobody will ever deploy a solution that makes tons of mistakes. You need to be able to see when AI fails and why it fails. And explainable AI techniques can help you determine whether the AI is focusing on the wrong things.”

As an example, Versace said to consider a deep learning network deployed on industrial cameras to provide quality assurance in a manufacturing setting. “This AI could be fooled into classifying some products as normal when, in fact, they are defective. Without knowing which part of the image the AI system is relying on to decide ‘good product’ vs. ‘bad product,’ a machine operator might unintentionally bias the system,” he said. “If they consistently show the ‘good product’ on a red background and the ‘bad product’ on a yellow one, the AI will classify anything on a yellow background as a ‘bad product.’ However, an explainable AI system would immediately communicate to the operator that it is using the yellow background as the feature most indicative of a defect. The operator could use this intel to adjust the settings so both objects appear on a similar background. This results in better AI and prevention of a possibly disastrous AI deployment.”
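To make that failure mode concrete, the toy continuation below (reusing the occlusion_map sketch above) scores “defect” by simply counting yellow pixels. The shortcut_classifier function is a hypothetical stand-in for a trained network that has latched onto the background rather than the part:

```python
# Toy demonstration of the background-color shortcut Versace describes.
# shortcut_classifier is hypothetical; it stands in for a network that
# keys on background color instead of the part.
import numpy as np

def shortcut_classifier(image):
    # "Defect score": the fraction of yellow pixels (high R and G, low B).
    yellow = (image[..., 0] > 0.7) & (image[..., 1] > 0.7) & (image[..., 2] < 0.3)
    return yellow.mean()

# 32x32 RGB test image: a gray part centered on a yellow background.
img = np.empty((32, 32, 3))
img[...] = [1.0, 1.0, 0.0]           # yellow background
img[8:16, 8:16] = [0.5, 0.5, 0.5]    # the part itself

print(np.round(occlusion_map(img, shortcut_classifier, patch=8), 3))
# Every background cell shows a confidence drop; the cell covering the
# part shows none. The heatmap exposes that the model is "looking at"
# the background, which is exactly the signal the operator needs in
# order to fix the camera setup.
```

Here the explanation is not a post-hoc rationalization but a measurement: masking the part changes nothing, while masking the background changes everything.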

Beyond this kind of application, explainability also enables accountability and auditability, said Versace. It can help answer who designed the system as well as how it was built and trained.

“At the end of the day, humans are offloading key decisions to AI,” said Versace. “And when it comes to assessing trust, they take an approach similar to that of assessing whether or not to trust a human co-worker. Humans develop trust in their co-workers when at least two conditions are satisfied: their performance is fantastic and they can articulate in an intelligible way how they obtained that outcome. For AI, the same combination of precision and intelligibility will pave the way for wider adoption.”

About the Author

David Greenfield, Editor in Chief

David Greenfield joined Automation World in June 2011. He brings a wealth of industry knowledge and media experience to the position, and his contributions can be found in AW’s print and online editions and custom projects. Earlier in his career, David was Editorial Director of Design News at UBM Electronics, and prior to joining UBM, he was Editorial Director of Control Engineering at Reed Business Information, where he also worked on Manufacturing Business Technology as Publisher.
