Server Technologies Gain Virtual Power

Nov. 1, 2008
Servers continue to evolve, handling greater loads and delivering more efficiency to end users. One key technology trend concerns power.

“Two or three years ago, a server had one core in it. But products now are shipped with four cores,” states Dara Ambrose, director of software and hardware engineering for Maynard, Mass.-based Stratus Technologies (www.stratus.com), a supplier of fault-tolerant, continuously available systems. He predicts that “within 12 months,” eight-core processors will be shipped.

This may be a double-edged sword, though. “If you just increase the speed of a single core, most software can handle that,” Ambrose explains. But with these newer multi-core units, some software can’t exploit the configuration because it “is written linearly.” This concern drives manufacturers to “look at programming languages to see if they can modify them.”
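To make the “written linearly” point concrete, consider the minimal Python sketch below (ours, not Stratus’s). The serial loop can never occupy more than one core, no matter how many the server ships with, while the process-pool version spreads the same stand-in workload across every core the machine offers.

    # Minimal sketch of serial vs. multi-core execution (illustrative only).
    import time
    from concurrent.futures import ProcessPoolExecutor

    def busy_work(n):
        # Stand-in CPU-bound task: sum of squares up to n.
        return sum(i * i for i in range(n))

    def run_serial(tasks):
        # "Written linearly": one core, one task at a time.
        return [busy_work(n) for n in tasks]

    def run_parallel(tasks):
        # Separate processes sidestep Python's global interpreter lock,
        # so each task can occupy its own core.
        with ProcessPoolExecutor() as pool:
            return list(pool.map(busy_work, tasks))

    if __name__ == "__main__":
        tasks = [2_000_000] * 8
        start = time.perf_counter()
        run_serial(tasks)
        print("serial:   %.2fs" % (time.perf_counter() - start))
        start = time.perf_counter()
        run_parallel(tasks)
        print("parallel: %.2fs" % (time.perf_counter() - start))

On a four-core machine the parallel run finishes in roughly a quarter of the serial time; software that stays in the serial shape sees no benefit from the extra cores at all.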

Another hot topic in manufacturing is virtualization. Specifically, Ambrose cites virtual-machine (VM) technology, such as that from Redmond, Wash.-based Microsoft Corp., which can put multiple cores to work. With more applications per server, there is reduced server sprawl, he notes. That ties into another hot topic, the “whole green (environmental) agenda: less servers, less power,” Ambrose says. At a very high level, the model for Linux and Microsoft Windows was one server, one application and one operating system, a model that was “quite inefficient,” he explains. Virtualization was one way to tap into that unused power, he remarks.

Now, end-users try to virtualize critical applications. One solution is the Xen hypervisor (www.xen.org), an open-source industry standard for virtualization. “It’s been getting a lot of momentum, especially with Linux. It allows Windows and Linux [to run] side-by-side with multiple copies of applications,” Ambrose states, adding that Microsoft has just launched what he considers to be its “first serious” virtualization product, Hyper-V.
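Xen’s classic xm toolstack, for instance, reads guest definitions written as simple Python-style assignments. The hypothetical file below (every name, path and size is our own illustration, not a recommendation) defines a Windows guest that could run alongside Linux guests on the same physical host:

    # Hypothetical Xen guest definition (classic xm config syntax).
    # All names, device paths and sizes here are illustrative.
    builder = 'hvm'                        # full virtualization, as a Windows guest needs
    name    = 'win-app01'                  # hypothetical guest name
    memory  = 2048                         # megabytes of RAM for this guest
    vcpus   = 2                            # virtual CPUs drawn from the host's cores
    disk    = ['phy:/dev/vg0/win01,hda,w'] # hypothetical LVM-backed virtual disk
    vif     = ['bridge=xenbr0']            # attach to the host's network bridge

A Linux guest on the same host would use a similar file, typically with paravirtualized kernel settings in place of builder = 'hvm'.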

What is virtualization’s main selling point? “You can have operational flexibility, as well as cost savings,” he says. How? “The cost of electrical power and cooling for a server is larger than the up-front cost. Just by turning them (servers) off, you can save money.”
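A back-of-the-envelope calculation shows how that can happen. All figures in the sketch below are our own illustrative assumptions, not numbers from Stratus:

    # Illustrative lifetime power-vs-purchase comparison (assumed figures).
    purchase_price = 2500.0  # assumed up-front cost of a commodity server, dollars
    server_draw_kw = 0.4     # assumed average electrical draw, kilowatts
    cooling_factor = 2.0     # assume cooling roughly doubles the power bill
    rate_per_kwh   = 0.10    # assumed electricity price, dollars per kWh
    service_years  = 4       # assumed service life

    hours = 24 * 365 * service_years
    energy_cost = server_draw_kw * cooling_factor * hours * rate_per_kwh
    print("lifetime power and cooling: $%.0f" % energy_cost)   # about $2,803
    print("up-front purchase price:    $%.0f" % purchase_price)

Under those assumptions, the electricity bill edges past the purchase price over the server’s life, so every mostly-idle server that virtualization allows to be switched off saves real money.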

To Reed Mullen, “virtualization technology has now become perhaps the hottest place of innovation in industry.” One principal reason is “industry recognizes that discrete distributed servers are not economical,” explains Mullen, System z virtualization technology product manager with IBM Corp. (www.ibm.com), in Endicott, N.Y. Cost factors include floor space and the energy costs of scaling up. “There’s also the system-management people cost of managing the assets that are deployed. And there’s software licensing.”

Virtualization’s efficiencies come from tapping into underutilized servers. “Many distributed servers are relatively idle. The utilization? Maybe 6 percent to 12 percent for a typical x86 server,” Mullen estimates. That translates into paying for central-processing-unit (CPU) capacity that isn’t being used, he states. And that equals wasted money.
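Those utilization figures translate directly into a consolidation ratio. In the sketch below, the fleet size and the 60 percent target (in line with the virtualized-x86 figure Mullen cites below) are our own illustrative assumptions:

    # Illustrative consolidation arithmetic (fleet size is assumed).
    servers         = 100    # assumed fleet of discrete x86 servers
    avg_utilization = 0.10   # within Mullen's 6-to-12 percent range
    target          = 0.60   # assumed utilization target for virtualized hosts

    hosts_needed = servers * avg_utilization / target
    print("about %.0f virtualized hosts replace %d servers" % (hosts_needed, servers))
    # -> about 17 virtualized hosts replace 100 servers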

Higher CPU utilization

In the last couple of years, Mullen has seen virtualization increasing in the x86 space. This “extreme virtualization” or “deep virtualization” equals “an incredibly high level of hardware-resource utilization,” Mullen remarks. “It is not uncommon for a client to run an applications environment and achieve up to or more than 90 percent utilization of the CPUs.” He notes that, typically, users have been reporting about 60 percent utilization on x86.

IBM’s z/VM hypervisor technology, which focuses on hosting Linux servers, allows hundreds of virtual machines to simultaneously share the same pools of physical resources. Technologies such as this could find more use these days, as skilled human resources get scarcer.

“If you’re a manufacturing company and have several locations, virtualization would allow consolidation of 10 to 12 applications from all plants,” suggests Ambrose. “You could manage them from a central site and might need only one or two servers in total.” He notes that some local servers might still be needed for applications requiring faster response times. Either way, fewer servers handle more loads and improve efficiency. That’s being well-served.

C. Kenna Amos, [email protected], is an Automation World Contributing Editor.
