- 17 June, 2008 10:32
Virtualization is a superhero among technologies, transforming static, brittle data centers into dynamic, flexible resource pools and giving IT an easy way to cut costs, improve services and expand operations beyond the limits of the physical world. With great power, however, comes great responsibility.
Unchecked, those virtualized pools can turn into unruly blobs that spiral out of control and ultimately wreak havoc in the environments they were meant to save. If you can't contain virtualization, you can't manage the virtual infrastructure, and optimizing it becomes a serious challenge.
"Making the jump from physical to virtual requires capacity planning and management, and a lot more thought [about] the requirements around monitoring a mixed virtual environment," says Jake Seitz, enterprise architect at The First American Corp.
First American's complex environment - comprising 2,800 HP servers and 700 VMware virtual machines - demands a new approach to managing and optimizing server, storage and desktop resources, Seitz says. His group uses VMware tools to monitor the environment in what he calls a reactive manner; he now is considering third-party options. What he wants is "a proactive standpoint that provides accountability for every virtual machine," he says.
A step behind
Unfortunately, management- and automation-tool vendors are not keeping up with the virtualization technologies proliferating across IT silos, industry watchers say. Today's popularity of x86 server virtualization via VMware does not indicate that homogeneous environments will be the norm in the future. Enterprise IT managers trying to optimize resource use will create mixed virtual-server and multifunction virtualization environments. These, in turn, will demand heterogeneous orchestration, management and automation to achieve optimized performance.
"There has been so much emphasis on x86 virtual machines that, when you start talking about other types of virtualization, no one knows really what to do. There is simply much less knowledge," says Jasmine Noel, principal analyst at Ptak, Noel & Associates.
It makes a lot of sense to virtualize storage in concert with virtual servers, then automatically provision from resource pools to meet application demand. It also makes sense to virtualize user desktops. The number of people, processes and tools needed to orchestrate such an environment, however, might outweigh the value virtualization can deliver.
At First American, for example, Seitz says storage and desktop virtualization certainly will play future roles in the enterprise. Plus, he adds, the company will still have legacy environments with which to contend. "I don't think we'll want 17 different tools."
Tools that can automate across the infrastructure will be critical, says Cameron Haight, research vice president at Gartner.
"It's important to look at virtualization in a holistic fashion [because] poor design in one IT silo can impact the overall performance. It's important to have management visibility across these technology components to help us rapidly diagnose potential performance and availability problems," Haight says. "Automation technology will be the key to address the scale, mobility and other attributes that virtualization brings to the IT infrastructure."
VMware and Microsoft and Citrix . . .
The management industry in general has embraced platform-agnostic monitoring, but mostly for the physical world. If an enterprise uses VMware plus virtualization technologies from IBM, Microsoft and Sun, it's going to need virtualization-management tools from VMware, IBM, Microsoft and Sun as well.
"The reality is, no management vendor does it all yet," says Andi Mann, research director with Enterprise Management Associates, noting that CA is out in front.
Management-software makers typically don't add support for multiple platforms until customers demand it, and with VMware dominating enterprise production servers, the majority of commercial management tools focus on that environment, industry watchers say. Microsoft, however, followed its entry into the hypervisor market by announcing heterogeneous virtual-server-management software, dubbed Virtual Machine Manager. In addition, such third-party software vendors as eG Innovations are beginning to add support for multiple virtual-server environments. Start-ups such as Fortisphere are building their new businesses on the value proposition that their technology can manage across virtual platforms.
"When Microsoft Hyper-V and Citrix XenServer are in production, established management vendors will start to recognize heterogeneity as part of a requirement, and that is mandatory for any start-up in the market, too," says Stephen Elliot, research director for IDC's enterprise systems. "But the bigger picture, and further complicating things beyond multiple server platforms, will be storage, desktop or other virtualization implementations. Again, you won't see the management vendors take this on until the technologies are in production environments."
Out of the silo, into the fire
The first group to experience the challenges will be enterprise IT managers who virtualize their storage along with server resources. To optimize such a virtualized environment, they would need management software that can spot when storage is at the root of server performance problems.
"With multiple layers, virtualization can obfuscate the real sources of a problem," says Jonathan Bryce, co-founder of Mosso, a Rackspace US company that provides cloud hosting and services.
Mosso has hundreds of multicore, multiprocessor HP servers and VMware virtual servers in production - and other hypervisors in various test environments - plus virtual disks running on Network Appliance storage. The virtual disks connect to a specific set of network interface cards on the back-end HP servers. The network and storage are all shared, Bryce says.
Bryce recalls an incident in which a Linux server running virtual machines started to perform very slowly and experience a high load and increased traffic. "It took us days to figure out we had overrun the I/O limits on the back-end storage and . . . that that had manifested into a slow-running Linux server," he says.
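The check Bryce's team eventually did by hand can be automated: flag when sampled IOPS against the shared back end sits at its ceiling long enough that guest slowness is plausibly storage-bound. A hedged sketch, with the ceiling, window and sample values invented for illustration:

```python
# Hypothetical sketch: detect sustained saturation of a shared storage
# back end from per-interval IOPS samples. All numbers are invented.

def saturated_windows(iops_samples, ceiling, window=3, util=0.95):
    """Return start indexes of windows where IOPS stays >= util * ceiling."""
    hits = []
    for i in range(len(iops_samples) - window + 1):
        if all(s >= util * ceiling for s in iops_samples[i:i + window]):
            hits.append(i)
    return hits

samples = [4000, 4200, 7900, 8000, 7980, 8000, 5000]  # per-interval IOPS
print(saturated_windows(samples, ceiling=8000))  # [2, 3]
```

An alert on such windows would have pointed at the back-end storage on day one rather than after days of chasing a "slow Linux server."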
Using Hyperic HQ software, he can see into the storage level of his virtual environment and understand which servers are slow, how the I/O is performing and whether physical hosts are on target, Bryce says. "Hyperic doesn't help eliminate server sprawl, but it gives us a view into the layers of the environment, which in the past made performance diagnosis so much more difficult."
Steve Perkins, director of infrastructure at Colorado Housing and Finance Authority, uses Akorri's BalancePoint software to see inside the virtual server and storage resources in his environment. The two go hand in hand, he says, and vendors need to manage at least those two layers to help customers optimize performance across virtual environments.
"We hit a wall on performance last fall. Users were experiencing horrendous performance, our servers were being maxed out; fingers were pointing at everyone," Perkins says.
After hiring a consulting firm that used Akorri software in assessing the environment, Perkins quickly saw that problems in the storage environment were behind the poor server performance. "It showed our [storage-area network] at 100 per cent capacity, and servers with I/O threshold against our SAN were getting hit with the poor performance," he says.
Perkins also is working with Akorri to handle the virtualized storage environment he plans to implement. He wants to see the vendor delve deeper into the layers of virtual storage environments. "Akorri looks at the physical storage, and instead of just seeing those layers, we want to see where the data lies in the virtual space and provide us with the same images as the physical space from the [logical unit number] level," he says.
Automation: the secret sauce
To enable a truly fluid and optimized virtual environment, management-software makers not only have to expand their reach into multiple virtual domains but also integrate extensive automation technologies.
Ed Traylor, senior director of IT and technical operations at US-based Care2, an online community for green living and social change, would like to share storage resources across virtual instances. In essence, he wants to create a virtual SAN by orchestrating the connection between virtual machines and local disks. The ability to provision virtual machines intelligently across multiple physical hosts would require a heterogeneous virtual-management system, he adds.
Traylor has a NetApp fabric-attached storage (FAS) system and employs iSCSI to port Web-server virtual machines to the physical hosts. Should a physical host fail, his team can quickly resurrect a virtual machine on any given blade server. Care2 runs Fibre Channel via a redundant mesh to IBM BladeCenter servers that host the virtual database servers. Traylor uses IBM's Director systems-management software to provide predictive failure analysis, data collection and automated deployment updates. Work still needs to be done to fully manage and ultimately optimize such environments, he says.
"If we're being hypothetical, then a fair assessment would be that [virtual machines] would operate in scalable, self-aware clusters that provision themselves for specific applications based on demand - and all of this would happen completely without human interaction," Traylor says. "Tasks such as provisioning, load balancing and fault tolerance would be handled by the virtual machines' artificial intelligence. From an operational or engineering perspective, all you provide is bandwidth, content and electricity."