Though the digital transformation of industry is still in its earliest stages, one thing is clear: data is the factor around which everything pivots. In the manufacturing industries, generating data for use in Industrial Internet of Things or Industry 4.0 initiatives is the easy part. After all, every device—from high-level systems like robotics down to the simplest sensor—creates a steady stream of data when in use.
Analytics initiatives across industry need all of this data to deliver the insights that operations and business leaders expect from their Industry 4.0 investments. But raw machine data is rarely palatable to analytics software. The main reasons are data inconsistencies across machinery, a lack of contextualization and normalization, limited data accessibility for both operations and IT, and the inherent difficulty of managing and securing data flows.
Among these reasons, contextualization is one of the biggest issues confounding data analytics software. Think of it this way: if you send a sensor's data feed named F8:4 with a value of 52.2 into an analytics software package, how does the software know whether this is a temperature value, whether it is in Celsius or Fahrenheit, where it comes from, and what operating limits it correlates to (i.e., is 52.2 an acceptable reading or an indication of a problem)?
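To make the problem concrete, here is a minimal sketch of what "contextualizing" such a tag could look like in practice. The registry, field names, and asset details are hypothetical illustrations of the idea, not HighByte's actual data model or API:

```python
# Hypothetical sketch of contextualizing a raw tag/value pair.
# The registry and field names below are illustrative assumptions,
# not HighByte's actual data model.

def contextualize(tag: str, value: float, tag_registry: dict) -> dict:
    """Wrap a raw tag/value pair in the metadata analytics tools need."""
    meta = tag_registry[tag]
    low, high = meta["operating_limits"]
    return {
        "asset": meta["asset"],              # where the reading came from
        "measurement": meta["measurement"],  # what is being measured
        "unit": meta["unit"],                # unit of measure
        "value": value,
        "in_range": low <= value <= high,    # acceptable, or a problem?
    }

# A registry mapping an opaque tag name to its meaning (an assumption
# about how such metadata might be maintained).
registry = {
    "F8:4": {
        "asset": "Line 2 / Pump 8",
        "measurement": "bearing temperature",
        "unit": "degC",
        "operating_limits": (10.0, 60.0),
    }
}

print(contextualize("F8:4", 52.2, registry))
```

With this wrapper, the downstream analytics package receives not just `52.2` but the asset, measurement type, unit, and whether the reading falls inside its operating limits.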
To address this issue, Tony Paine, John Harrington, and Torey Penrod-Cambra founded HighByte in 2018 to create a method for contextualizing and standardizing industrial data at the edge and managing its flow to various consuming applications. If the founders’ names sound familiar, it’s because they all previously worked at Kepware—which gives them all a deep background in industrial device communications.
HighByte refers to its data handling technology as DataOps software. Harrington says, “To get the full value from analytics, data needs to be analyzed across machinery, processes, and products. To handle the scale of hundreds of machines and controllers—and tens of thousands of data points—a set of standard models must be established within the DataOps solution. The models correlate the data by machinery, process, and products and present it to the consuming applications. A DataOps solution must be able to integrate seamlessly with devices and data sources at the operations layer by leveraging industry standards, while providing value to business applications that conform to today’s IT best practices.”
HighByte’s DataOps product is called the Intelligence Hub. Harrington says it is “the only solution on the market that combines contextualized and standardized data models with connections to industrial and IT systems, manages the flow of information, is scalable and secure, and has been developed with an edge-native approach.”
Capabilities of the HighByte Intelligence Hub include:
- Data modeling — Contextualizes process data by documenting it with metadata, standardizing data attributes, and normalizing units of measure. Models can be reused within a single hub and shared across hubs.
- Connection flows — Transmit raw data or modeled information at any frequency or condition. These flows can be managed to identify, enable, or disable the flow of information to applications.
- Integration — The hub supports collection and delivery of data over OPC UA and MQTT; it also provides for the configuration and management of connections and their respective inputs and outputs.
- Security — The hub exchanges data using the built-in security of OPC UA and MQTT. By identifying outputs by connection, administrators can implement higher-level management and security than that offered by typical pub/sub broker architectures and open, unmanaged API access.
- Edge native — HighByte Intelligence Hub can run on hardware platforms including the Raspberry Pi and other single-board computers, and industrial switches, as well as on Linux, Windows 10, and Windows Server platforms.