Deploying the Edge for Real-Time Analytics

Nov. 2, 2018
Though edge computing's relative newness means users must navigate evolving definitions of the technology, it can be deployed easily and deliver immediate benefits if you focus on answering the right questions.

By now, we all know that cloud computing is an excellent asset for storing data that doesn’t need to be analyzed or accessed immediately. But what is the solution for data that does require real-time processing?

That’s where edge computing comes into play.

Considering that edge computing is still a relatively new technology, you might be asking yourself questions such as:

  • What exactly is edge computing?
  • What are the benefits of processing data at the edge?
  • How is the edge different from the cloud?

These are questions that industrial and manufacturing companies are contemplating as they think about how to incorporate edge computing into their operations. Because edge computing is such a new concept, there isn’t one definition that can answer these questions for every user. That doesn’t mean, however, that the questions can’t be answered in a way that helps you make an effective decision about this new area of technology.

Dueling definitions

At present, there are several definitions of edge computing:

  • Gartner says the “edge” is the physical location where things and people connect with the networked digital world.
  • The OpenFog Consortium defines edge computing as the process of placing data and data-intensive applications at the edge (i.e., on premise) to reduce the volume and distance that data must be moved.
  • The Linux Foundation defines edge computing as a tool to improve the performance, operating cost and reliability of applications and services. The foundation goes on to explain that edge computing shortens the distance between devices and the cloud, thereby mitigating the latency and bandwidth constraints of today’s Internet and enabling the development of new applications.

Taking these three definitions into account, we can arrive at this general concept: Edge computing enables data and analytics gathering at the source, and involves pushing computing applications, data and services away from centralized locations to the “edge” of the network.
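To make that concept concrete, here is a minimal, generic sketch (in Python, not tied to any particular vendor’s product) of an edge node that summarizes raw sensor readings locally and forwards only the small summary to a central endpoint. The endpoint URL, field names and sample batch are assumptions for illustration only.

```python
"""Hypothetical sketch: process raw readings at the edge, push only a summary upstream."""
import json
import statistics
import urllib.request

CLOUD_ENDPOINT = "https://example.com/ingest"  # assumed central/cloud endpoint


def summarize_locally(readings: list[float]) -> dict:
    """Reduce a batch of raw sensor readings to a compact summary at the edge."""
    return {
        "count": len(readings),
        "mean": statistics.mean(readings),
        "min": min(readings),
        "max": max(readings),
    }


def push_summary(summary: dict) -> None:
    """Send only the summary upstream instead of every raw reading."""
    payload = json.dumps(summary).encode("utf-8")
    request = urllib.request.Request(
        CLOUD_ENDPOINT,
        data=payload,
        headers={"Content-Type": "application/json"},
    )
    urllib.request.urlopen(request, timeout=5)


if __name__ == "__main__":
    raw_batch = [20.1, 20.4, 19.8, 35.2, 20.0]  # e.g., local temperature samples
    summary = summarize_locally(raw_batch)
    print(summary)           # processed at the edge, in real time
    # push_summary(summary)  # uncomment with a real endpoint to forward upstream
```

The point of the pattern is simply that the raw readings never leave the site; only the compact summary travels to the centralized location.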

This seems straightforward enough, but it gets a bit more complicated when delving into the approach and purpose of specific technology deployments. This is especially true if you’re thinking about deploying an edge device to help with your operation’s real-time analytics capabilities.

If that’s the case, here are three questions to keep in mind.

How much data do you have and where is it stored?

With edge computing, companies benefit from real-time processing capabilities, decreased latency and reduced costs. When considering how to deploy edge computing, knowing the amount of data that your operations will be processing and storing at the edge will ultimately help you determine the best course of action.

Given the broad range of industries and processes that could benefit from edge computing, it’s impossible to predict how much data individual industrial and manufacturing companies will actually push to the edge in the long run. What we can be confident of is that edge computing needs will only increase. New research from Gartner estimates that, by 2022, 50 percent of data will be created and processed at the edge.

How connected is your facility?

Most edge definitions presume that high levels of connectivity are required for edge devices. However, many industries have been deploying systems that would now be considered “edge” using minimal connectivity to the outside world. For example, the oil and gas industry has been using edge computing to monitor conditions on remote rigs located hundreds of miles away from the nearest data center. In these scenarios, the edge computing systems share only a subset of the most important data with core systems at headquarters or in regional data centers.
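As a rough illustration of that filter-and-forward approach (a hypothetical sketch, not drawn from any specific oil and gas deployment), the code below evaluates each reading locally in real time and queues only out-of-range readings to send to core systems when a link is available. The threshold, data fields and transport are assumptions.

```python
"""Hypothetical filter-and-forward sketch for a bandwidth-constrained edge site."""
from collections import deque
from dataclasses import dataclass

PRESSURE_LIMIT_PSI = 5000.0  # assumed alarm threshold, for illustration only


@dataclass
class Reading:
    sensor_id: str
    pressure_psi: float
    timestamp: float


class EdgeFilter:
    def __init__(self) -> None:
        # Local buffer holding only the readings worth sending upstream.
        self.outbound = deque(maxlen=10_000)

    def ingest(self, reading: Reading) -> None:
        """Process locally in real time; keep only what core systems need."""
        if reading.pressure_psi > PRESSURE_LIMIT_PSI:
            self.outbound.append(reading)  # anomaly: forward to headquarters
        # Normal readings stay at the edge (e.g., in local rollups or logs).

    def flush(self, send) -> int:
        """When connectivity is available, drain the buffer through `send`."""
        sent = 0
        while self.outbound:
            send(self.outbound.popleft())
            sent += 1
        return sent


if __name__ == "__main__":
    edge = EdgeFilter()
    edge.ingest(Reading("rig-7/pump-2", 5120.5, 1541116800.0))  # out of range
    edge.ingest(Reading("rig-7/pump-2", 4980.0, 1541116805.0))  # normal, stays local
    print(edge.flush(send=lambda r: print("forwarding", r)))
```

The design choice mirrors the remote-rig scenario: every reading is handled on site, but only the subset that matters crosses the limited link to headquarters or a regional data center.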

How secure are your operations?

A key characteristic of many edge environments is that there tend to be fewer humans on site to manage the hardware and software. In the past, limited or no connectivity often meant that these systems or sites were largely ignored. However, as these remote sites and systems become more connected, a higher level of security is needed. In short, you will need an edge security strategy for these environments, and it is best to treat them as having unique requirements rather than simply viewing them as an extension of your existing security measures.

When thinking about how to deploy edge devices, consider your operation’s data storage, connectivity levels and security requirements to determine the right course of action. It’s also important to keep in mind how new the edge concept still is. As more use cases emerge across different operations, edge devices will become even more varied in how they are used.

For more information, visit Stratus Technologies at www.stratus.com.
