Machine Learning at the Edge

How FogHorn Systems’ updated Lightning software platform promises to deliver machine learning to Industrial Internet of Things edge and cloud computing systems.

In October 2016, I wrote about FogHorn Systems’ Lightning software platform for real-time analytics and its support from industrial companies such as GE, Bosch and Yokogawa. According to FogHorn Systems, the newest version extends Lightning’s analytics capabilities with integrated machine learning and compatibility across all major Industrial Internet of Things (IIoT) edge systems.

Edge computing appears to be the most likely technology through which manufacturers will connect devices and systems for IIoT initiatives, largely because it allows manufacturing data to be analyzed on premise, mitigating the bandwidth and cost issues associated with cloud-based analysis. It can also limit access to that data, provided cybersecurity technologies and practices are properly installed and followed.
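To make the bandwidth argument concrete, here is a minimal sketch (not FogHorn code) of edge-side aggregation: raw sensor samples are reduced on premise to a compact summary, so only a few values per batch travel to the cloud. The function name, batch contents and fields are hypothetical.

```python
# Hypothetical edge-side aggregation: send summaries upstream, not raw samples.
from statistics import mean, pstdev

def summarize_batch(samples):
    """Reduce a batch of raw readings to a compact summary for the cloud."""
    return {
        "count": len(samples),
        "mean": mean(samples),
        "stdev": pstdev(samples),
        "max": max(samples),
    }

raw = [20.1, 20.3, 19.9, 20.0, 35.7, 20.2]  # one raw reading per second
summary = summarize_batch(raw)              # one small record per batch
```

A batch of six readings becomes a four-field record, and the same idea scales to thousands of sensors: the cloud sees aggregates and exceptions rather than every sample.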

According to FogHorn Systems, Lightning ML brings machine learning to the edge in three ways:

1. It leverages existing models and algorithms. Users can plug in and execute proprietary algorithms and machine learning models on live data streams produced by their physical assets and industrial control systems.

2. It makes machine learning OT-accessible. Non-technical personnel can use FogHorn’s tools to generate machine learning insights without relying on in-house or third-party data scientists.

3. It runs on a tiny software footprint. With Lightning ML, complex machine learning models can run on highly constrained compute devices such as PLCs, Raspberry Pi systems and tiny ruggedized IIoT gateways, as well as more powerful industrial PCs and servers. This is possible because the Lightning ML platform requires a memory footprint of less than 256 MB.
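The first point, executing an existing model on a live data stream, can be sketched as follows. This is illustrative only, not the Lightning ML API; `predict` stands in for any proprietary pre-trained model, and the events and thresholds are invented.

```python
# Illustrative sketch: apply an existing, pre-trained model to a live
# stream of sensor events. `predict` is a hypothetical stand-in model.
def predict(features):
    # Hypothetical rule: flag a bearing as at risk when vibration and
    # temperature are both high.
    vibration, temperature = features
    return "at-risk" if vibration > 0.8 and temperature > 70.0 else "ok"

def score_stream(events):
    """Run the model on each event as it arrives; yield (event_id, label)."""
    for event_id, features in events:
        yield event_id, predict(features)

stream = [(1, (0.2, 65.0)), (2, (0.9, 75.0)), (3, (0.85, 60.0))]
labels = list(score_stream(stream))  # [(1, 'ok'), (2, 'at-risk'), (3, 'ok')]
```

The point is the shape of the integration: the model is a black box scored event by event at the edge, rather than on batches shipped to a data center.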

It was that last point, the 256 MB memory footprint, that caught my attention. I wondered how the platform could do what it claims with so little memory.

“One of the key components of FogHorn's unique edge processing architecture that enables such tremendous memory efficiency is our homegrown, patent-pending CEP (complex event processing) engine,” said FogHorn CEO David C. King. “This CEP engine accomplishes in a few MB of memory what heretofore required tens or hundreds of gigabytes of memory footprint running in a cloud or data center environment.”

King added that FogHorn’s CEP engine, trademarked as VEL (short for velocity), is able to do this with just 256 MB of memory because the “entire Lightning software platform is written in highly-efficient—low memory consumption—programming languages and the core VEL CEP engine has numerous native capabilities, including asynchronous streaming and cross-stream semantics, windowing and pattern matching operations, and hundreds of built-in math, statistics and physics functions. VEL also cleans, filters, normalizes and aligns streaming data to allow any machine learning or artificial intelligence model to be executed on the real-time processed metadata. All of this allows for tremendous runtime performance and ultra-low latency, enabling customers to get the benefit of both advanced analytics and powerful machine learning capabilities in a very small memory footprint.”
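VEL itself is proprietary, but the windowing and pattern-matching operations King describes are standard CEP ideas. A minimal sketch, with an invented window size and rule, looks like this: a fixed-size sliding window moves over the stream, and a pattern (here, monotonically rising readings) fires an alert the moment it matches.

```python
# Generic CEP illustration (not VEL): sliding window plus pattern matching.
from collections import deque

def detect_rising_pressure(stream, window_size=3):
    """Emit the stream index whenever `window_size` consecutive readings rise."""
    window = deque(maxlen=window_size)  # sliding window over the stream
    alerts = []
    for i, reading in enumerate(stream):
        window.append(reading)
        if len(window) == window_size and all(
            a < b for a, b in zip(window, list(window)[1:])
        ):
            alerts.append(i)  # pattern matched ending at this position
    return alerts

readings = [3.0, 3.1, 3.0, 3.2, 3.5, 3.9, 3.8]
alerts = detect_rising_pressure(readings)  # fires at indices 4 and 5
```

Because the window holds only the last few readings, memory use is constant regardless of how long the stream runs, which is the same property that lets a CEP engine fit in a few megabytes.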

Accessibility is another key feature of Lightning ML, according to FogHorn CTO Sastry Malladi, who noted that the software’s drag-and-drop authoring tool allows operators to focus on translating their domain expertise into meaningful analytics and machine learning insights.

“OT (operations technology) staff are domain experts in their respective industrial environments, but not necessarily experts in edge computing and advanced IT,” Malladi said. “By giving them intuitive tools to automate, monitor and take action on their industrial data in real-time, operators can enhance situational awareness, prevent process failures and identify new efficiencies that lead to huge business benefits. This is a very different approach from other IT-centric solutions that fail to leverage the tribal knowledge of key OT experts.”

Lightning ML’s support for ARM32 is key to its ability to run on typical plant floor systems. The first Lightning release supported x86-based IIoT gateways and OT systems; adding ARM32, one of the more widely used processor architectures in OT control systems, makes Lightning ML viable for use with PLCs and DCSs as well as Raspberry Pi IIoT gateways.

Casey Taniguchi, general manager and head of business development center at Yokogawa Electric Corporation, a supplier of process and industrial automation systems, said Lightning ML’s “support for ARM32 processors, advanced data pre-processing capabilities and streaming analytics accomplished in a tiny footprint represents a major step forward in speeding the adoption of FogHorn's technology in a wide variety of IIoT markets and industrial use cases. We look forward to working closely with FogHorn to incorporate all of these groundbreaking technologies into Yokogawa's family of advanced industrial automation solutions.”

According to FogHorn Systems, the Lightning ML software platform can run entirely on premise or connect to any private or public cloud environment, giving users flexibility to select the best deployment model in terms of IT infrastructure, security policy and cost.
