Edge Gateways vs. Historians: Choosing the Right Option for Your Factory Floor

Understand the critical differences between edge gateways and historians to make informed decisions about collecting, preserving and utilizing manufacturing data effectively.
Dec. 10, 2025

Key Highlights

  • Edge gateways prioritize real-time data routing by collecting, normalizing, and transmitting factory floor data to enterprise systems, focusing on uptime and speed rather than long-term storage. 
  • Historians are designed to collect and store massive volumes of time-series data in databases for long-term reporting, with emphasis on ingestion frequency and compression capabilities. 
  • Both require multi-protocol support, flexible preprocessing and scalable integration, but historians need additional features like precise timestamping and high-capacity storage.

The world lost a vast treasure when the Library of Alexandria was destroyed. Founded in the third century B.C., the library housed more than 400,000 scrolls describing everything from medicine to architecture to culture. That trove of knowledge, meticulously recorded by generations of ancient scholars, is gone forever.

Today’s data is just as transitory. Data that is not collected, preserved, protected and utilized is at best an inconvenience and at worst a costly debacle.

The difference between the ancient world and today’s manufacturing environment is that we have two competing methods for collecting, preserving and distributing factory floor data: edge gateways and historians. These devices can be used to manage PLC data, diagnostic data from switches, energy data from drives, operational data from pneumatic and hydraulic systems, as well as power quality and environmental data from a vast array of sensors.

Because edge gateways and historians have similar features, deployment methods and results, it’s not always clear when to use one or the other. In fact, it’s easy to get this decision wrong because these devices do not play identical roles and their differences are crucial.

Edge gateways: Key features

An edge gateway collects, normalizes and routes data from PLCs, HMIs, drives and sensors to enterprise and cloud applications for use in analytics, visibility and decision-making. Edge gateways prioritize uptime and speed over data retention. Most support store-and-forward operation, but that storage is limited, as data retention is not the primary purpose of an edge gateway.
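To make store-and-forward concrete, here is a minimal Python sketch, not taken from any vendor’s firmware, of a bounded buffer that drops its oldest samples when full and drains to an uplink when one is available. The class, tag path and capacity are illustrative assumptions.

```python
# Minimal store-and-forward sketch: buffer samples locally, forward when the
# uplink is available, and let the oldest data fall off when the bounded
# buffer fills -- reflecting the limited retention of a typical edge gateway.
from collections import deque
from dataclasses import dataclass
import time

@dataclass
class Sample:
    tag: str
    value: float
    ts: float  # epoch seconds

class EdgeBuffer:
    def __init__(self, capacity: int = 10_000):
        # deque(maxlen=...) silently discards the oldest entry when full
        self.queue = deque(maxlen=capacity)

    def ingest(self, tag: str, value: float) -> None:
        self.queue.append(Sample(tag, value, time.time()))

    def forward(self, publish, link_up: bool) -> int:
        """Drain buffered samples through `publish` while the uplink is up."""
        sent = 0
        while link_up and self.queue:
            publish(self.queue.popleft())
            sent += 1
        return sent

buf = EdgeBuffer(capacity=5)
for i in range(8):                               # more samples than the buffer holds
    buf.ingest("Line1/Motor1/Current", 3.2 + i)
buf.forward(print, link_up=True)                 # only the 5 newest samples survive
```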

These gateways also come with a variety of features. Some contain analytics engines to process data locally. Some are vendor-specific, designed to support a particular architecture, such as Siemens or Rockwell devices. Others provide scripting engines enabling users to customize how data is formatted and combined.

When evaluating edge gateways, there are six essential features to look for:

Support for a variety of data ingestion sources. Nothing matters unless the edge gateway can collect the data needed by your quality team, analytics or certification systems. Multi-protocol ingestion, multiple-PLC support, a variety of network interfaces and legacy device support expand the universe of potential applications.

Flexible local preprocessing and filtering. These capabilities are important because factory floor systems are diverse, complex and often old, meaning that edge gateways often encounter a variety of data types, formats, units and groupings. Raw data is seldom standardized, even from devices from the same vendor.
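As a rough illustration of that kind of normalization, the sketch below scales hypothetical raw register values into engineering units and applies a simple change-based filter before anything leaves the edge. The tag names, scale factors and deadband are assumptions, not values from any particular device.

```python
from typing import Optional

# Hypothetical per-tag conversion table: raw PLC registers arrive in mixed
# scales and units, so they are converted to engineering units at the edge.
RAW_CONFIG = {
    "Press1/Temp":  {"scale": 0.1,   "offset": 0.0, "unit": "degC"},  # raw = tenths of a degree
    "Press1/Force": {"scale": 4.448, "offset": 0.0, "unit": "N"},     # raw = lbf, converted to newtons
}

def normalize(tag: str, raw: int) -> dict:
    cfg = RAW_CONFIG[tag]
    return {"tag": tag, "value": raw * cfg["scale"] + cfg["offset"], "unit": cfg["unit"]}

def changed_enough(prev: Optional[float], new: float, band: float) -> bool:
    """Simple filter: only forward values that moved by at least `band`."""
    return prev is None or abs(new - prev) >= band

print(normalize("Press1/Temp", 725))  # {'tag': 'Press1/Temp', 'value': 72.5, 'unit': 'degC'}
```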

Custom and standardized data modeling. Once data is collected, normalized and scaled, it must be packaged in a recognizable form. Modeling is not just about organizing and aggregating data from multiple sources in a standard form but also adding the metadata that makes it understandable to a receiver.
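A hypothetical payload along those lines might look like the sketch below, where readings from one asset are grouped with the units, quality and source metadata a receiving system needs to interpret them. Every name and value here is illustrative.

```python
import json
import time

# One self-describing payload: metrics plus the metadata that explains them.
model = {
    "asset": "Packaging/Line1/Filler",
    "timestamp": time.strftime("%Y-%m-%dT%H:%M:%SZ", time.gmtime()),
    "metrics": [
        {"name": "MotorCurrent", "value": 3.4, "unit": "A",   "quality": "GOOD"},
        {"name": "HeadPressure", "value": 6.1, "unit": "bar", "quality": "GOOD"},
    ],
    "source": {"device": "PLC-12", "protocol": "EtherNet/IP"},
}
print(json.dumps(model, indent=2))
```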

Scalable integration with enterprise and cloud applications. The ability to publish data models to enterprise or cloud systems is another key feature. A variety of protocols and transports are available for this; the most common are MQTT, MQTT Sparkplug, REST and OPC UA. More flexible systems also publish data directly to SQL databases and unified namespaces (UNS), and many provide interfaces to well-known MES and ERP systems.
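As one hedged example of such publishing, the sketch below pushes a small JSON payload over plain MQTT using the open-source paho-mqtt package (assumed installed via pip install paho-mqtt). The broker hostname and UNS-style topic are placeholders; Sparkplug, OPC UA or REST would each require their own client libraries and payload conventions.

```python
# pip install paho-mqtt
import json
import paho.mqtt.publish as publish

payload = json.dumps({"asset": "Line1/Filler", "MotorCurrent": 3.4, "unit": "A"})

publish.single(
    topic="plant1/packaging/line1/filler/telemetry",  # UNS-style topic path (placeholder)
    payload=payload,
    hostname="broker.example.local",                  # placeholder broker address
    port=1883,
)
```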

Simple deployment and configuration. Fast deployment is a key feature of the more popular edge gateways. Rapid commissioning includes web-based or no-code configuration, plug-and-play device discovery and protocol auto-mapping, all designed to reduce engineering time from weeks to minutes, which is ideal for OEMs and system integrators.

Cybersecurity. In today’s environment, this should be an obvious requirement, but not all edge gateway devices are cybersecure, and those that are offer varying levels of protection. Choosing an edge gateway with the correct level of security for your plant means understanding the threat vectors in the location where you will deploy the device. If the gateway sits deep inside a manufacturing system behind firewalls, no special security may be warranted. If it sits closer to IT systems connected to the internet, you may want a device with advanced firewalls and other protection features.

Historians: Key features

Historians satisfy a different need than edge gateways. Where an edge gateway collects and organizes data to publish it to other systems, historians focus on collecting and retaining data in time-series databases for long-term analysis and reporting.

There are two key design considerations for historians: first, how rapidly they can collect time-series data and, second, how much storage they have to retain that data. Unlike edge gateways, historians prioritize data collection and retention, with the exact requirements depending on the specific application. Collection intervals can range from sub-millisecond sampling to once per shift, job or month. Storage requirements can range from kilobytes to terabytes.
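A quick back-of-the-envelope calculation shows how those two considerations interact. All of the figures below are assumptions chosen only to illustrate the arithmetic.

```python
# Rough historian sizing: collection rate x tag count x retention drives storage.
tags             = 2_000      # assumed tag count
samples_per_sec  = 1          # assumed 1 Hz sampling per tag
bytes_per_sample = 16         # assumed timestamp + value + quality, uncompressed
retention_days   = 365

raw_bytes = tags * samples_per_sec * bytes_per_sample * 86_400 * retention_days
print(f"Uncompressed: {raw_bytes / 1e12:.2f} TB per year")               # ~1.01 TB
print(f"With 10:1 compression: {raw_bytes / 10 / 1e9:.0f} GB per year")  # ~101 GB
```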

The key features identified above for edge gateways also apply to historians. But historians require four additional, indispensable features:

Ingestion frequency. Rapid data ingestion is more essential to a historian than an edge gateway. Historians are optimized for performance, timestamp accuracy and storage of large tag sets. This is distinctly different from edge gateways, which optimize data quality, semantic tagging and efficient transmission, not bulk storage or time-series fidelity.

Compression. When an application requires massive storage of time-series data, the ability to automatically compress data is vital. Compression schemes and speeds vary. Know your application to choose a product that meets your requirements.
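For a sense of how this works in practice, here is a sketch of a simple deadband (report-by-exception) filter. Commercial historians typically use more sophisticated schemes such as swinging-door trending, so treat this only as an illustration of the idea; the sample values and band are made up.

```python
def compress(samples, band):
    """Yield (index, value) pairs that survive a simple deadband filter."""
    last = None
    for i, v in enumerate(samples):
        if last is None or abs(v - last) >= band:
            last = v
            yield i, v

raw = [20.0, 20.01, 20.02, 20.5, 20.51, 21.2, 21.21, 21.2]
kept = list(compress(raw, band=0.25))
print(kept)                                  # [(0, 20.0), (3, 20.5), (5, 21.2)]
print(f"{len(kept)} of {len(raw)} samples stored")
```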

Timestamping. Knowing the moment when time-series data arrived is critical to data analytics. This requires the historian to know the correct time. Mechanisms historians use to maintain accurate time include real-time clocks, the Network Time Protocol (NTP) and the Precision Time Protocol (PTP). Note that some of these options require access to an appropriate time server on your network.
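A minimal sketch of the record-keeping side is shown below: each sample is stamped in UTC with explicit millisecond resolution. The accuracy of those stamps still depends on the host clock being disciplined by NTP or PTP, and the tag name is hypothetical.

```python
from datetime import datetime, timezone

def stamp(tag: str, value: float) -> dict:
    """Attach an ISO 8601 UTC timestamp to a sample at millisecond resolution."""
    now = datetime.now(timezone.utc)
    return {"tag": tag, "value": value, "ts": now.isoformat(timespec="milliseconds")}

print(stamp("Line1/Flow", 42.7))  # e.g. {'tag': 'Line1/Flow', 'value': 42.7, 'ts': '...T14:03:22.118+00:00'}
```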

Publishing. The ultimate destination for your data is a crucial consideration when selecting a historian. In addition to supporting the publishing features of edge gateways, historians should also support sending data to Excel; data is often captured in a CSV file and transferred over FTP or via USB sticks. Some historians can also publish data by transferring it directly to larger databases through those databases’ APIs.
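The sketch below shows the simplest end of that spectrum: writing samples to a CSV file that Excel can open directly. The file name and rows are placeholders, and moving the file by FTP or USB is left to whatever transfer mechanism your site allows.

```python
import csv

# Placeholder samples; a real export would pull these from the historian's store.
rows = [
    ("2025-12-10T14:00:00Z", "Line1/Flow", 42.7),
    ("2025-12-10T14:00:01Z", "Line1/Flow", 42.9),
]

with open("line1_flow.csv", "w", newline="") as f:
    writer = csv.writer(f)
    writer.writerow(["timestamp", "tag", "value"])  # header row Excel can read
    writer.writerows(rows)
```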

An additional point to be mindful of about historians is that their applications fall into two main categories: enterprise-level and cell-level. Enterprise-level historians are typically deployed in a virtual machine or in the cloud and are designed for ingestion and organization of massive volumes of data. Cell-level historians are typically used for real-time process monitoring or performance optimization. These are a good fit for applications that don’t require the volume or speed of enterprise-level historians.

The edge gateway/historian alternative

For cell-level applications, an alternative to a standalone historian or edge gateway is the lightweight, embedded historian from Real Time Automation. This device combines the critical features of a historian with the additional features found in edge gateways.

With the RTA PLC Historian, tags from multiple PLCs can be captured, normalized and saved. User-defined models are filled and published on demand, without subscriptions, licensing constraints or reliance on third-party middleware. It also has configurable storage of up to 1 TB, a suite of publishing protocols (SQL, HTTP, FTP, WebSockets, USB, MQTT and email) and direct integration with InfluxDB for visualization and analytics.
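As a generic illustration of what direct InfluxDB integration enables downstream, and not a description of the RTA product’s own mechanism, the sketch below writes one time-series point using the open-source influxdb-client package for InfluxDB 2.x. The URL, token, org, bucket and measurement names are all placeholders.

```python
# pip install influxdb-client  (InfluxDB 2.x client library)
from influxdb_client import InfluxDBClient, Point
from influxdb_client.client.write_api import SYNCHRONOUS

# Placeholder connection details -- substitute your own server, token and org.
client = InfluxDBClient(url="http://localhost:8086", token="my-token", org="my-org")
write_api = client.write_api(write_options=SYNCHRONOUS)

# One sample: measurement name, tag and field are illustrative.
point = Point("motor_current").tag("line", "1").field("amps", 3.4)
write_api.write(bucket="plant-history", record=point)

client.close()
```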

About the Author

John Rinaldi

John Rinaldi is chief strategist and director of creating WOW! for Real Time Automation (RTA) in Pewaukee, Wis. John is not only a recognized expert in industrial networks and an automation strategist but also a speaker, blogger and the author of more than 100 articles on industrial networking and six books, including Industrial Ethernet, OPC UA: The Basics, Modbus, OPC UA - The Everyman's Guide and Ethernet/IP.
