5 ways you waste money on virtualization
- 01 April, 2011 05:23
More than three quarters of U.S. companies virtualize at least some of their x86-based servers, but few get their full money's worth out of virtualization efforts -- due to management blunders, analysts say.
The biggest misconceptions center on three issues: how closely to manage virtual machines, how to plan the capacity and workload of the virtual infrastructure, and how to go beyond technical configuration to keep operational costs from running out of control, according to analysts.
Here is some food for thought on the top five money-gobbling mistakes, spanning technical and operational, management and planning, and budget issues.
1. Underutilization of Physical Servers
The most direct reason companies fail to get the best return on their virtual infrastructures is that they don't run enough virtual machines on every physical server, according to Galen Schreck, principal analyst at Forrester Research, who specializes in management and implementation tools for virtual architectures.
"For a long time people kept the ratio of VMs per machine low to avoid degradation of performance," he says. "They didn't want the systems to balk, so they decided they'd be satisfied with the savings they got running at 50 percent utilization, or only putting 10 VMs on average on a server."
In 2009 or early 2010, that was a reasonable approach, because performance-management tools gave a poor picture of how well VMs were running inside a physical server, says Dan Olds, principal of Gabriel Consulting Group, who has been doing annual surveys of Windows and Unix-server users for more than five years.
The percentage of virtualized servers within corporations continues to grow, but the level of satisfaction among the companies using them has been flat for several years, indicating they're not getting all they hoped from the new technology, he says.
Being satisfied with a set utilization percentage or level of consolidation of physical servers to virtual "leaves money on the table," Schreck says. "A lot of companies seem like they're just being cautious not to risk pushing servers to the point performance might not support an SLA during a spike, so they're not raising utilization high enough."
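The money left on the table by a cautious consolidation ratio is easy to see in back-of-the-envelope terms. The sketch below is purely illustrative; the server cost and VM counts are invented assumptions, not figures from the analysts quoted here.

```python
# Hypothetical illustration: cost per VM at different consolidation ratios.
# SERVER_COST and the ratios are made-up assumptions for illustration only.

SERVER_COST = 12_000  # assumed annual cost (hardware, power, licensing) per physical host

def cost_per_vm(vms_per_host: int) -> float:
    """Annual cost attributed to each VM at a given consolidation ratio."""
    return SERVER_COST / vms_per_host

for ratio in (10, 20, 30):
    print(f"{ratio} VMs/host -> ${cost_per_vm(ratio):,.0f} per VM per year")
```

Tripling the ratio from 10 to 30 VMs per host cuts the attributed cost per VM to a third, which is the arithmetic behind Schreck's "leaves money on the table" point.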
2. Failing To Push VM Management Tools Harder
It's easy enough to cram more VMs on a single physical server and get a higher return-on-investment for virtualization spending, Schreck says.
That doesn't really solve the problem, though.
Current performance-management tools such as Microsoft's System Center Virtual Machine Manager and VMware's vCenter Server have capabilities that are light years ahead of tools from two or three years ago, but key metrics such as whether the new infrastructure is easier to manage than the old one have not changed, Olds says.
"It's not clear how many people are actually using the tools," Olds says.
Almost any company with a virtual infrastructure of any significant size is going to have VM-specific tools to manage them, Schreck says.
"What's not clear is whether they use them for more than just checking to make sure the VMs are still running," he says. "You need to be more aggressive with the tools. Rather than manage an environment scientifically with tools that tell what your VMs are doing and about your performance, people are satisfied with sticking to a specific utilization capacity. You end up having to vastly overprovision your environment and have a cost per virtual machine that could be multiple times what it has to be."
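The "scientific" approach Schreck describes amounts to acting on the metrics the management tools already collect, rather than capping every host at a fixed ratio. A minimal sketch of that idea, with invented sample data (the VM names and utilization figures are not from any real tool):

```python
# Hypothetical sketch: use collected utilization metrics to find
# overprovisioned VMs instead of applying one blanket host ratio.
# The sample data below is invented for illustration.

# (vm_name, allocated_vcpus, average_cpu_utilization)
metrics = [
    ("web-01", 8, 0.12),
    ("db-01", 8, 0.71),
    ("batch-01", 4, 0.05),
]

# Flag VMs using under 15% of their allocation as candidates to shrink,
# freeing capacity for more VMs per host.
overprovisioned = [name for name, _, util in metrics if util < 0.15]
print(overprovisioned)  # ['web-01', 'batch-01']
```

Reclaiming allocations like these is what lets utilization rise without breaching an SLA during a spike.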
3. Failure to Think Broadly About Planning
The overall change is to think about the whole environment when planning capacity, rather than looking at the requirements of just one set of servers or applications, according to James Staten, VP and principal analyst at Forrester who focuses on data center architecture.
"In traditional capacity planning an application gets twice the resources it would normally consume, twice that for when it gets busy and twice that again for headroom so it won't outgrow the servers," Staten says.
"In the virtual world one app doesn't need anywhere near that degree of headroom. You look at one app according to how it contributes to demand of the whole environment, because you can pool all your virtualized resources and apply them where they're needed. Your real target should be to raise sustained utilization for the whole environment to 60 percent or higher, and as close as possible to 100 percent for your peak."
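Staten's arithmetic is worth making explicit: doubling three times reserves eight times the steady-state demand for one application, while pooled planning sizes shared capacity against a utilization target. The workload figures below are invented for illustration.

```python
# Hypothetical sketch of the capacity math Staten describes.
# The 4-core steady-state demand is an invented example figure.

normal_load = 4  # assumed steady-state demand of one app, in CPU cores

# Traditional per-server planning: double the normal consumption, double
# again for busy periods, double once more for growth headroom -> 8x.
traditional = normal_load * 2 * 2 * 2

# Pooled virtual planning: size the shared pool so sustained utilization
# sits around Staten's 60 percent target.
target_utilization = 0.60
pooled = normal_load / target_utilization

print(traditional)        # 32 cores reserved for one 4-core workload
print(round(pooled, 1))   # ~6.7 cores of shared pool capacity
```

The same workload ties up roughly a fifth of the capacity when it draws from a pool sized for 60 percent sustained utilization instead of a dedicated, triple-doubled server.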
Architecturally it's more efficient to split the data, database, server and front-end software into different layers to which you can allocate more resources when they're needed, according to Patrick Kuo, a Washington, D.C.-area consultant who has helped build Web and virtual-server infrastructures at Dow Jones, the U.S. Supreme Court, the Defense Information Systems Agency and, most recently, D.C. political-news site The Daily Caller.
That's a big departure from the traditional way of thinking of server-based applications as a single unit of application/server/database and assigning resources that way, which doesn't scale nearly as efficiently as a broader n-tier approach, he says.
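The scaling difference Kuo describes can be sketched in a few lines. The tier names and core counts below are invented assumptions; the point is only that a monolithic unit must be replicated whole, while an n-tier design grows only the tier under load.

```python
# Hypothetical sketch of n-tier vs monolithic scaling.
# Tier names and resource figures are invented for illustration.

# Monolithic sizing: app/server/database scale as one unit, so absorbing
# a front-end spike means replicating the whole stack.
monolithic_unit = {"web": 2, "app": 2, "db": 2}  # cores per tier in one unit
spike_units = 3  # replicate the whole stack 3x for a web-tier spike
monolithic_cores = spike_units * sum(monolithic_unit.values())

# N-tier sizing: only the tier under load gets more resources.
tiers = {"web": 2, "app": 2, "db": 2}
tiers["web"] *= 3  # triple only the front end
ntier_cores = sum(tiers.values())

print(monolithic_cores, ntier_cores)  # 18 vs 10
```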
4. Botching Lifecycle Management
The key to keeping a virtual infrastructure from drowning in its own sprawl of VMs is to set and enforce lifecycle policies on individual applications and business units -- but that rarely happens, Staten says.
"The typical way to handle lifecycle is to set up a server and, when it falls over dead and no one notices, it's at the end of its lifecycle," he says.
"In the virtual world you have to proactively manage the lifecycles and the changes that happen within virtual machines," Staten says. "That means setting policies for provisioning, but also automating provisioning, patching, change management, end-of-life management and all the other processes you'd have to do by hand otherwise."
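One of the simplest lifecycle policies Staten alludes to is a lease on every VM, reviewed automatically at expiry instead of waiting for a server to "fall over dead." A minimal sketch, in which the 90-day lease, the VM names and the dates are all invented assumptions:

```python
# Hypothetical sketch of an end-of-life lease check, one lifecycle policy
# of the kind Staten describes. The lease length and inventory are assumptions.

from datetime import date, timedelta

LEASE = timedelta(days=90)  # assumed default VM lease before review

def is_expired(provisioned_on: date, today: date) -> bool:
    """Flag VMs whose lease has lapsed for reclamation or renewal."""
    return today - provisioned_on > LEASE

vms = {"build-agent-7": date(2010, 12, 1), "staging-web": date(2011, 3, 20)}
expired = [name for name, d in vms.items() if is_expired(d, date(2011, 4, 1))]
print(expired)  # ['build-agent-7']
```

Running a check like this on a schedule, and wiring its output into reclamation or renewal workflows, is the automation the quote calls for.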
The key difference between a physical IT environment and a virtualized one is the volume and frequency of change within the virtual infrastructure, which is not only too much work to keep up with efficiently by hand, but also runs counter to the way data center managers traditionally view the systems they manage, according to Rob Smoot, director of product marketing for VMware's vCenter management products.
"The traditional view is to set something up and then lock it down to avoid change that might break it," he says. "In virtualized infrastructures there is constant change at the infrastructure layer as VMs move from server to server or resources are reallocated. The technology has to understand that pooled infrastructure and respond effectively to it."
Tools such as vCenter and Microsoft's System Center are a lot better at that level of management than they were a year or two ago, but are still focused too much on one vendor's products, and too much on the virtual rather than the physical, to be as much use as they should be, Staten says.
Within their own arenas, they are much more effective than tools that haven't evolved specifically to manage virtual machines, however, he says.
5. Giving Up on Chargeback
One of the most effective tools to prevent sprawl and make cost-justification easier is to put in systems for chargeback -- calculating and assigning the cost for the IT resources each business unit uses, rather than letting all the costs fall into one big pot, Smoot says.
"In the physical world a lot of companies could use the process of procurement to keep control of the environment because it took time to get the approvals and get the hardware and do configuration," Smoot says. "Requesting a virtual machine is very easy, so if you're not relying on a level of process maturity that takes into account being able to efficiently monitor capacity and usage of pooled resources, often you might end up with these kinds of sprawl."
That's probably true, Olds says, but very few companies actually follow through. More than three quarters of companies responding to his research said payback on virtualization was important -- but only about half kept specific track of cost/benefit data, and only about one in five reported those figures to upper management.
"We didn't see a lot of chargeback going on," Olds says.
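The chargeback calculation Smoot describes can be as simple as apportioning the cost of the pooled cluster by measured usage. All the figures below (pool cost, business units, VM-hours) are invented for illustration.

```python
# Hypothetical chargeback sketch: split the cost of a pooled cluster
# across business units by their VM-hours. All numbers are invented.

POOL_COST = 30_000  # assumed monthly cost of the shared cluster

usage = {"finance": 4_000, "marketing": 1_000, "engineering": 5_000}  # VM-hours

total = sum(usage.values())
charges = {unit: POOL_COST * hours / total for unit, hours in usage.items()}
print(charges)  # {'finance': 12000.0, 'marketing': 3000.0, 'engineering': 15000.0}
```

Even a crude allocation like this gives each business unit a bill to answer for, which is what discourages the frictionless over-requesting that drives sprawl.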