Companies in the utilities industry face many challenges today. They are responsible for meeting government and private renewable-energy goals and priorities, which vary state by state. They need to incorporate new technologies outside their traditional domains and absorb losses in revenue while still offering the same level of service. Most challenging of all, they must do all of this while meeting the regulatory requirement of providing highly reliable and affordable energy to the ratepayer.
To meet these challenges, utilities need technologies that allow them to respond rapidly and easily; virtualization is one of those technologies. Here, we discuss the core attributes of a virtualized environment, along with best-practice techniques that can help minimize the pain points of virtualizing a system.
Why virtualization matters
An increasing percentage of the critical and non-critical equipment used in the modern grid already comes equipped to leverage the benefits of virtualization. Proven use cases include remote accessibility that decreases O&M costs and “fleet” management of assets for rapid installation of software patches and upgrades.
When virtualization is fully adopted, a digital twin of the physical system can be created. The digital twin can assist grid planners and operators in testing various grid events without affecting real-time operations: What happens when 15 residential solar arrays are added to a single feeder? How will territorial demand fluctuate between a sunny day and a cloudy day? When do grid operators have to balance the behavior of assets the utility does not own but that are connected to its territory? This benefit helps organizations not only answer these types of questions with confidence but also build the corresponding contingency plans.
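The kinds of what-if questions a digital twin answers can be sketched as a toy calculation. Below is a minimal Python illustration of the solar-array scenario; the feeder capacity, baseline load, per-array output, and cloud factor are all assumed numbers for illustration, not utility data.

```python
# Toy digital-twin "what-if": net load on a single feeder after adding
# rooftop solar. All constants below are illustrative assumptions.

FEEDER_CAPACITY_KW = 5000.0   # assumed thermal limit of the feeder
BASELINE_LOAD_KW = 3200.0     # assumed midday demand on the feeder
ARRAY_PEAK_KW = 8.0           # assumed peak output of one residential array

def net_load_kw(num_arrays: int, irradiance: float) -> float:
    """Net feeder load after subtracting solar generation.

    irradiance is a 0..1 scale factor (1.0 = clear sky, ~0.3 = overcast).
    A negative result would mean the feeder is back-feeding the substation.
    """
    generation = num_arrays * ARRAY_PEAK_KW * irradiance
    return BASELINE_LOAD_KW - generation

# Compare the two scenarios from the text: 15 new arrays, sunny vs. cloudy.
for label, irradiance in [("sunny", 1.0), ("cloudy", 0.3)]:
    load = net_load_kw(15, irradiance)
    headroom = FEEDER_CAPACITY_KW - load
    print(f"{label}: net load {load:.0f} kW, headroom {headroom:.0f} kW")
```

A production digital twin would replace these constants with telemetry streamed from the virtualized assets themselves, but the planning logic is the same: perturb the model, not the grid.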
What’s required for virtualization?
In a virtualized grid, most assets, if not all, would be controlled by specific lightweight software functions inside a software element called a container. These assets feed data upstream into an edge computing platform, known as a node, running a large number of containers, which in turn feed further upstream to decentralized edge servers that access shared storage technology to provide control of the complete grid (see Figure 1). From there, the data can flow into enterprise systems.
Figure 1: Virtualization involves networking the system, from the sensors to the actual assets. Data is fed through decentralized edge servers to support a range of use cases, including predictive maintenance, business intelligence, and visualization.
This schema requires three key elements: the network, the physical layer (the hardware being virtualized), and the software architecture. The challenge is assembling a system in which these elements interoperate.
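The container-to-node-to-server flow described above can be sketched in a few lines. The following is a minimal Python illustration; the class names, asset IDs, and stub readings are hypothetical, not a vendor API.

```python
# Minimal sketch of the three-tier flow: per-asset container functions
# feed an edge node, which aggregates readings and forwards them to a
# decentralized edge server backed by shared storage.

from dataclasses import dataclass, field

@dataclass
class AssetContainer:
    """Lightweight software function controlling one grid asset."""
    asset_id: str

    def read(self) -> dict:
        # A real deployment would poll the asset over its native protocol
        # (e.g., DNP3, Modbus, IEC 61850); here we return a stub reading.
        return {"asset": self.asset_id, "voltage_v": 120.0}

@dataclass
class EdgeNode:
    """Edge computing platform running many asset containers."""
    containers: list

    def collect(self) -> list:
        return [c.read() for c in self.containers]

@dataclass
class EdgeServer:
    """Decentralized edge server with shared storage for the territory."""
    storage: dict = field(default_factory=dict)

    def ingest(self, node_id: str, readings: list) -> None:
        self.storage.setdefault(node_id, []).extend(readings)

# Two hypothetical assets on one node, ingested by one edge server.
node = EdgeNode([AssetContainer("recloser-7"), AssetContainer("capbank-3")])
server = EdgeServer()
server.ingest("substation-A", node.collect())
```

The point of the sketch is the interoperability boundary at each tier: as long as containers emit a common reading format, nodes and servers can aggregate assets from different vendors.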
Below, we list key tips and best practices an organization can follow to position itself for success.
- Choose an appropriate pilot project
Companies should start with a pilot project that targets low-hanging fruit and identifies specific goals; it’s important to know what a successful outcome looks like.
- Work to remove organizational barriers
IT and OT departments have very different priorities and views. OT might be eager to explore the benefits of virtualization, while IT will likely focus on security risks and network quality matters. The goal is to shrink the gap between the two. Try engaging both teams to brainstorm how to incorporate a pilot solution with the lowest TCO possible.
- Be skeptical of vendor-agnostic solutions
Building home-grown solutions from best-in-class components may seem like a good idea, but coordinating software vendors, hardware vendors, virtualization layers, and security vendors without domain experts can lead to major challenges. Everyone likes to talk about vendor-agnostic systems, but what they really want is technology-agnostic ones.
- Rely on domain experts to provide the solution
Virtualizing a substation involves more than plugging in cables or loading VMware; it’s a complex process with multiple layers and many nuances. Without staff experienced enough to work around issues, these challenges can lead to budget and schedule overruns and result in systems that fail to operate as specified. Ultimately, it comes down to what a utility wants to be good at: managing power delivery or integrating a virtualized grid territory?
At a minimum, internal teams need to have enough knowledge and experience to properly assess the project at hand. Better yet, they should work with advisors who can provide needed guidance. Ideally, they would take advantage of ecosystem partners so they don’t just purchase a component, they install an interoperable system.
- Embrace ecosystem partners
No organization can be good at everything. Ecosystem partners can provide crucial assistance to companies throughout the migration. The resulting solution is easier to specify, install, and maintain.
- Share roadmap development with eco-partners
Industrial systems should never be designed in a vacuum and scalability should always be at the forefront of any architecture. Companies should choose eco-partners who have proven track records in their respective domains and a wish to share in roadmap development. Then, they can leverage those eco-partners to construct an integration roadmap for the entire territory. In-house teams should never feel like they have to integrate on their own.
- Have a hardware and software support agreement with vendors
It’s important to establish and fully understand service agreements with hardware and software vendors. Eco-partners can provide essential support for system assets in terms of selection, commissioning, and troubleshooting. They bring experience and deep domain expertise to a project, which will enable organizations to spend less time trying to solve problems and more time working on internal development processes.
- Invest in training
It’s essential to give in-house teams the tools they need to effectively operate and maintain the equipment. Does the IT shop need more training on virtualization? Would the OT team benefit from a better understanding of network connectivity and third-party asset integration? Investment in the short term will pay dividends over the long haul.
Virtualization is a powerful tool that can enable a utility company to not just survive but thrive in this ever-evolving market. Today, it may be an emerging technology that delivers a competitive advantage. Tomorrow, it will be a requirement for doing business. Forward-looking organizations are already starting their journey. As with many disruptive digital technologies, virtualization requires upfront investment and effort. Following the tips above, particularly working in a collaborative environment, is the fastest way to achieve value.
Stephen MacDonald is energy and environment sector head, Advantech Industrial IoT Group.