While wide use of artificial intelligence (AI) and machine learning in manufacturing may still be several years off, both technologies are beginning to make their way onto the plant floor and beyond. Potential applications for these technologies run the gamut as unprecedented amounts of data delivered via connected sensors and devices enable properly trained AI algorithms to help optimize production processes.
Still, because AI is a relatively new technology, standards for it are currently lacking, which could hinder further application in industry. In particular, the lack of standards could make implementation difficult for operators and lead to a lack of interoperability with other systems, poor knowledge of best practices, and even potential cybersecurity vulnerabilities.
That’s why the development of standards often signals a new technology beginning to mature. Standards don’t only help suppliers by speeding innovation—they also signal to end-users that a technology has been determined to be effective based on criteria agreed upon by numerous participants in the standard’s development process. Simply put, standards cut costs, communicate vital information, and increase reliability.
The ETSI Securing Artificial Intelligence Industry Specification Group (SAI ISG), currently the first standardization group dedicated to securing AI, recently released a report describing the primary challenges of securing AI, with a focus on machine learning and the threats to confidentiality, integrity, and availability at each stage of the technology's lifecycle. The report also examines broader challenges facing AI, such as bias, ethics, and the potential for misuse.
“There are a lot of discussions around AI ethics but none on standards around securing AI. Yet, they are becoming critical to ensure security of AI-based automated networks,” said Alex Leadbeater, chair of ETSI SAI ISG. “This first ETSI report is meant to come up with a comprehensive definition of the challenges faced when securing AI. In parallel, we are working on a threat ontology, on how to secure an AI data supply chain, and how to test it.”
Within the report, the machine learning lifecycle is broken down into eight stages, each of which comes with its own unique risks: data acquisition, data curation, model design, software build, training, testing, deployment, and updates.
In the data acquisition and curation stages, the predominant issue is integrity. In other words, when data is integrated from multiple sources or in multiple formats, incongruities in measurement parameters or data structure could produce maladaptive machine learning algorithms, resulting in poor or even dangerous decision-making. The report also considers the possibility that a malicious actor might intentionally poison data to sow chaos within an operation.
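To make the poisoning risk concrete, consider a toy anomaly detector of the kind a plant might use on a sensor feed. The data, thresholds, and detector below are invented for illustration and are not drawn from the ETSI report; the sketch only shows the general mechanism by which a few malicious training records can blind a model.

```python
# Toy illustration of training-data poisoning (all values hypothetical).
# The "model" flags any reading more than 3 standard deviations from
# the mean it learned during training.
import statistics

def fit(readings):
    """Learn the 'normal' operating range from training data."""
    return statistics.mean(readings), statistics.stdev(readings)

def is_anomaly(value, mean, stdev, k=3.0):
    """Flag readings far outside the learned range."""
    return abs(value - mean) > k * stdev

# Clean training data: a sensor that normally reads about 100 +/- 0.3.
clean = [99.5, 100.2, 100.1, 99.8, 100.0, 99.9, 100.3, 99.7]

# Poisoned training data: an attacker slips in a few extreme readings,
# inflating the variance the model learns.
poisoned = clean + [140.0, 60.0, 150.0]

mean_c, sd_c = fit(clean)
mean_p, sd_p = fit(poisoned)

# A genuinely dangerous reading of 120 is caught by the clean model...
print(is_anomaly(120.0, mean_c, sd_c))   # True
# ...but slips past the poisoned one, because its tolerance ballooned.
print(is_anomaly(120.0, mean_p, sd_p))   # False
```

The point is not the statistics but the failure mode: nothing in the deployed model looks broken, yet three bad training rows silently widened what it considers "normal."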
While the model design and software build stages are identified as being relatively safe, the report finds that similar—or even more severe—security issues could present themselves in the training stage of a machine learning algorithm.
For instance, the confidentiality of a training dataset could be compromised if an attacker were to augment it with synthetic input data designed to trick the algorithm into outputting labels containing information about the original training data. This type of data leak could entail business-related information such as intellectual property or sensitive personnel information.
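A stripped-down sketch of how query access can leak training data follows. The data, labels, and 1-nearest-neighbour "model" are invented for the example (this is not the attack construction described in the report); the model memorizes its training set, so an attacker probing it with synthetic inputs can tell which candidate records were used to train it.

```python
# Toy membership-leak illustration (all records hypothetical).
# A 1-NN classifier effectively memorises its training data: an exact
# match to a training point returns confidence 1.0, which an attacker
# can use to confirm that a candidate record was in the training set.

def predict(train, query):
    """Return (label, confidence) from a 1-nearest-neighbour model.

    Confidence is 1 / (1 + distance), so querying an exact training
    value yields confidence 1.0.
    """
    label, dist = min(
        ((lbl, abs(x - query)) for x, lbl in train),
        key=lambda pair: pair[1],
    )
    return label, 1.0 / (1.0 + dist)

# Sensitive training data: (salary, department) for real employees.
train = [(52000, "engineering"), (87000, "management"),
         (61000, "engineering")]

# The attacker never sees `train`; they only query the model with
# synthetic candidate salaries and watch the confidence scores.
candidates = [50000, 52000, 61000, 90000]
leaked = [c for c in candidates if predict(train, c)[1] == 1.0]
print(leaked)  # [52000, 61000]
```

Real models leak more subtly than this deliberately extreme example, but the report's concern is the same: outputs shaped by training data can be mined to recover that data.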
Currently, no definitive standards to address these issues have been put forward, but by clarifying the primary concerns surrounding AI and machine learning, the SAI ISG hopes the process can begin.