Why Bring-Your-Own-Agent Changes Industrial Automation
Key Highlights
- Three paths exist for AI integration: Built-in vendor agents (limited but convenient), open APIs (flexible but token-heavy), or hybrid systems where your agent communicates with vendor software optimized for AI consumption.
- Remote monitoring can become AI fuel when systems capture cycle anomalies, sensor data, downtime windows and HMI interactions in formats that LLMs can consume and diagnose instantly, without lengthy training programs.
- Plant managers can choose their preferred LLM, manage AI costs directly and integrate operational data with ERP systems without vendor lock-in or third-party data risks.
For years, AI for industrial automation has been framed as something vendors deliver to you in the form of proprietary assistants or chatbots. But that framing is already starting to collapse, and a more practical idea is emerging: You won’t buy AI to help you monitor and operate robots. You’ll bring your own.
Large language models (LLMs) aren’t “intelligence” in any meaningful metaphysical sense. They’re not ghosts in the machine, or proto-AGI (artificial general intelligence). They’re compute. Another way of arranging silicon. Asking whether an LLM is “intelligent” is about as meaningful as asking whether a computer chip is “intelligent.”
And once you start viewing LLMs as compute, the implications are clear. No one buys software just because it runs on a particular CPU. No one buys an ERP system because it’s powered by electricity from ACME Energy Company. The compute is simply an ingredient the customer brings — locally, from the cloud or through whatever infrastructure they already use.
In this scenario, plant operators will use whatever AI becomes available to them, just as they use smartphones or email today. And, instead of being locked into one vendor's AI implementation, their automation systems will provide properly structured data that any AI, whether it's Claude, ChatGPT or future tools, can consume and analyze.
At this point, the real question becomes: “What data do I need to feed my AI agent?”
Three paths ahead
Today, AI inside SaaS (software as a service) typically means every application bundling its own agent. It’s simple, but also rigid. A built-in agent can only know what the vendor knows. It has no visibility into your maintenance logs, your ERP, your real production KPIs or your actual business targets.
But more flexible models are emerging:
- Built-in agents everywhere. This is available today and while it’s convenient, it’s also limited. The agent is trapped inside a single piece of software.
- APIs designed to let your own agent interact with everything. These APIs are ultra-flexible, but potentially token-heavy and not very structured.
- A hybrid model where your agent talks to vendor systems and the vendor systems respond using internal agents structured specifically for AI consumption. Here, software becomes an optimized data feeder for whatever LLM the customer chooses.
This third path is where the momentum is heading. And it’s why the concept of “feeding your agent” is a metaphor worth considering.
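To make that hybrid path concrete, here is a minimal Python sketch of what an AI-optimized vendor response could look like next to a raw telemetry dump. The payload shapes, field names and numbers are illustrative assumptions, not any particular vendor’s API.

```python
import json

# Hypothetical raw telemetry dump an open API might return: verbose,
# unlabeled and expensive to push through an LLM token by token.
raw_dump = {
    "samples": [{"t": 1712000000 + i, "reg_4012": 0.92 + (i % 7) * 0.001} for i in range(500)],
    "io": {"DI_17": 1, "DI_18": 0, "DO_03": 1},
}

# Hypothetical response from a vendor system whose internal agent has
# already summarized and framed the same data for AI consumption.
ai_ready_response = {
    "question": "Why did cell 3 stop at 02:14?",
    "summary": "Cycle time drifted 8% above baseline before a gripper fault halted the cell.",
    "evidence": [
        {"signal": "cycle_time_s", "baseline": 31.2, "observed": 33.7, "window": "01:50-02:14"},
        {"signal": "gripper_vacuum_kpa", "baseline": -78, "observed": -52, "window": "02:10-02:14"},
    ],
    "suggested_checks": ["Inspect vacuum lines on gripper 2", "Review HMI changes made at 01:47"],
}

def to_prompt(payload: dict) -> str:
    """Turn a structured vendor response into a compact prompt for the customer's own LLM."""
    return (
        "You are assisting a plant technician.\n"
        f"Question: {payload['question']}\n"
        f"Vendor summary: {payload['summary']}\n"
        f"Evidence: {json.dumps(payload['evidence'])}\n"
        "Confirm or challenge the suggested checks and list next steps."
    )

print(f"Raw dump: ~{len(json.dumps(raw_dump))} characters to push through the model")
print(f"AI-ready prompt: ~{len(to_prompt(ai_ready_response))} characters")
```

The expensive framing happens once, close to the data; the customer’s own LLM only ever sees a compact, well-labeled prompt.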
Data matters more than AI features
AI agents are only as useful as the data they receive. Feed your agent a bloated diet of unstructured data and you can expect subpar results. Feed it wisely and you can unlock the true value of AI to automation operations.
Industrial automation is full of dense, time-sensitive telemetry, including PLC signals, cycle breakdowns, faults, sensor behavior, video streams, uptime logs and HMI interactions.
A human technician can interpret all this data given enough time. An AI agent can interpret this data too, but only if the data is delivered to it in a format it can understand.
Modern systems, like Olis Robotics’ remote monitoring and operating software, are being designed to structure and serve machine data in a way that LLMs can consume without confusion. It’s not a data dump; it’s data accompanied by semantic framing, contextual cues and basic interpretation.
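As a rough illustration of the difference, assuming hypothetical field names and thresholds, the sketch below wraps a raw fault record in units, normal ranges and a plain-language interpretation before it ever reaches the model.

```python
from datetime import datetime, timezone

# Hypothetical raw record as it might come off a controller: terse and context-free.
raw_fault = {"code": "E-4417", "axis": 4, "val": 83.2, "ts": 1717430115}

def frame_fault(record: dict) -> dict:
    """Attach the semantic framing a monitoring layer can add: units, normal ranges,
    recent history and a one-line interpretation the LLM can build on."""
    return {
        "event": "servo_overtemperature_warning",
        "source": f"robot axis {record['axis']}",
        "timestamp_utc": datetime.fromtimestamp(record["ts"], tz=timezone.utc).isoformat(),
        "measurement": {"name": "motor_temperature", "value": record["val"], "unit": "degC"},
        "normal_range": {"min": 25.0, "max": 70.0, "unit": "degC"},
        "context": "Third excursion above 80 degC this shift; ambient temperature unchanged.",
        "interpretation": "Likely degraded cooling or excessive duty cycle on axis 4.",
        "vendor_code": record["code"],
    }

framed = frame_fault(raw_fault)
print(framed["interpretation"])
```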
To truly unlock the potential of AI in automation, from configuring a cell by voice to adding or removing buttons from your HMI, your systems must talk to your AI agent in a language it understands. Remember, an LLM is the underlying AI-compute engine, and an agent is an LLM that has been configured to perform specific tasks autonomously. How well the agent performs is determined, to a great extent, by the data it is fed and the tools it has access to.
You don’t have to embark on a long training program for your AI agent in such scenarios, because you’re feeding it data that it can easily digest and act on.
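That division of labor can be sketched in a few lines. In the example below, a stubbed choose_tool function stands in for whichever LLM provider you use, and the tool names (get_fault_log, get_cycle_stats) are hypothetical examples of what an automation system could expose.

```python
# Hypothetical tools an automation system might expose to an agent.
def get_fault_log(cell_id: str) -> list[dict]:
    return [{"cell": cell_id, "code": "E-4417", "minutes_ago": 12}]

def get_cycle_stats(cell_id: str) -> dict:
    return {"cell": cell_id, "avg_cycle_s": 33.7, "baseline_s": 31.2}

TOOLS = {"get_fault_log": get_fault_log, "get_cycle_stats": get_cycle_stats}

def choose_tool(question: str) -> str:
    """Placeholder for the LLM call: in practice the model picks a tool
    based on the question and the tool descriptions it has been given."""
    return "get_fault_log" if "fault" in question.lower() else "get_cycle_stats"

def agent(question: str, cell_id: str) -> dict:
    """An 'agent' is just the LLM plus the tools and data it is allowed to use."""
    tool_name = choose_tool(question)
    result = TOOLS[tool_name](cell_id)
    return {"question": question, "tool_used": tool_name, "data": result}

print(agent("What fault stopped cell 3?", "cell-3"))
```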
Remote monitoring becomes AI nutrition
Today’s remote-monitoring tools record exactly the kind of information AI agents thrive on, including cycle anomalies, sensor flickers, downtime windows, logic misfires, HMI changes and environmental irregularities.
In practice, this means a system that captures everything happening on the robot or in an automation cell. And instead of streaming that raw data to a remote engineer or giving someone VPN access, you simply point your own LLM at the data source.
The agent then diagnoses the problem, flags anomalies, suggests likely root causes and even cross-references the result with your internal KPIs or business goals.
To illustrate how this works, consider a typical data flow using the Olis remote monitoring software to feed structured data such as PLC signals, fault codes, cycle times and video feeds to the LLM. The LLM powers the AI agent to perform specific tasks, such as diagnosing downtime, predicting failures and generating reports. And the AI agent delivers analytics and insights back to human operators.
Automation systems, LLMs and AI agents are each distinct layers, but they work together in sequence. However, without properly structured data from automation systems, the LLM has nothing meaningful to work with, which is why "speaking fluent AI" (providing clean, semantic data) matters.
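A minimal sketch of that sequence might look like the following, with a stubbed run_llm standing in for whichever model you point at the data; the snapshot fields are illustrative, not an actual Olis payload.

```python
import json

def fetch_snapshot(cell_id: str) -> dict:
    """Stage 1: the monitoring layer serves a structured snapshot (illustrative fields)."""
    return {
        "cell": cell_id,
        "fault_codes": ["E-4417"],
        "cycle_time_s": {"baseline": 31.2, "last_hour_avg": 33.7},
        "downtime_min_today": 42,
        "hmi_changes": ["feed override lowered to 85% at 01:47"],
    }

def run_llm(prompt: str) -> str:
    """Stage 2: placeholder for the customer's chosen LLM (Claude, ChatGPT, a local model)."""
    return ("Downtime driven by axis 4 overtemperature (E-4417); cycle-time drift began "
            "after the 01:47 HMI override. Check axis 4 cooling and revert the override.")

def diagnose(cell_id: str) -> dict:
    """Stage 3: the agent turns the snapshot into a diagnosis and report for operators."""
    snapshot = fetch_snapshot(cell_id)
    prompt = "Diagnose this cell from its structured snapshot:\n" + json.dumps(snapshot, indent=2)
    return {"cell": cell_id, "diagnosis": run_llm(prompt), "downtime_min": snapshot["downtime_min_today"]}

print(diagnose("cell-3")["diagnosis"])
```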
Giving manufacturers more control
The beauty of bringing your own agent is that it restores control to automation end-users. You can choose the LLM. You can control the depth of analysis and manage your AI costs directly. Moreover, you’re not locked into a vendor’s AI roadmap.
When AI is an input that you control, the possibilities grow enormously. For example, you could tie your automation systems to your ERP to combine operational data with business context — all without the risks associated with entrusting such data to a third party.
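As a hedged example of what such a tie-in could look like, the sketch below joins downtime records from the automation side with order data exported from an ERP, entirely on your own infrastructure. The record and field names are invented for illustration.

```python
# Hypothetical downtime records from the automation side.
downtime = [
    {"cell": "cell-3", "date": "2024-06-03", "minutes": 42, "cause": "E-4417"},
    {"cell": "cell-5", "date": "2024-06-03", "minutes": 7, "cause": "material starvation"},
]

# Hypothetical order context exported from the ERP.
erp_orders = [
    {"order": "SO-10231", "cell": "cell-3", "due": "2024-06-05", "parts_remaining": 1800, "parts_per_hour": 110},
    {"order": "SO-10244", "cell": "cell-5", "due": "2024-06-09", "parts_remaining": 400, "parts_per_hour": 95},
]

def orders_at_risk(downtime: list[dict], orders: list[dict]) -> list[dict]:
    """Combine operational downtime with business context: which orders does today's downtime threaten?"""
    lost_minutes = {}
    for d in downtime:
        lost_minutes[d["cell"]] = lost_minutes.get(d["cell"], 0) + d["minutes"]
    at_risk = []
    for o in orders:
        lost_parts = lost_minutes.get(o["cell"], 0) / 60 * o["parts_per_hour"]
        if lost_parts > 0:
            at_risk.append({"order": o["order"], "cell": o["cell"], "estimated_parts_lost": round(lost_parts)})
    return at_risk

print(orders_at_risk(downtime, erp_orders))
```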
For plant managers, this is a genuine “take back control” moment. For years they’ve been told that AI must be purchased as a product. But, as discussed earlier, AI is just compute. This means that what matters is less who provides the model and more who controls the data and the workflow.
Moving forward with this mindset requires you to keep two things in mind: First, you’re not supposed to buy AI. You’re supposed to buy software that AI can understand and use without your help.
Second, AI agents aren’t the brain of your factory. They’re compute power that you bring, just as you bring electricity to power your automation and water to wash your hands.
Instead of dealing with traditional chatbots, automation end users should embrace systems that are built to feed AI agents clean, structured, semantically rich data. In this way, any LLM from any vendor can diagnose equipment, generate reports, propose actions and integrate with the rest of your digital ecosystem.
Video: Configuring an HMI with an AI Agent
In this video, Ryan Cox, vice president of engineering at Olis Robotics, demonstrates how to configure an HMI by talking to an AI agent.
About the Author

Fredrik Ryden
Fredrik Ryden is CEO of Olis Robotics.

