The A-Z of server virtualization

Dividing physical servers into virtual servers is one way to restore sanity and keep IT expenditures under control

In today's complex IT environments, server virtualization simply makes sense. Redundant server hardware can rapidly fill enterprise datacentres to capacity; each new purchase drives up power and cooling costs even as it saps the bottom line. Carving those physical servers into virtual ones is one way to restore sanity and keep IT expenditures under control.

With virtualization, you can dynamically fire up and take down virtual servers (also known as virtual machines), each of which basically fools an operating system (and any applications that run on top of it) into thinking the virtual machine is actual hardware. Running multiple virtual machines can fully exploit a physical server's compute potential -- and provide a rapid response to shifting datacentre demands.
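To see what firing up and taking down virtual machines looks like in practice, here is a minimal sketch using the libvirt Python bindings, which can manage KVM, Xen, and several other hypervisors. The connection URI and the domain name "web01" are assumptions for illustration, not details from any particular product.

# Minimal VM lifecycle sketch with the libvirt Python bindings.
# Assumes a local QEMU/KVM host and an already-defined, currently
# stopped domain named "web01" (both hypothetical).
import libvirt

conn = libvirt.open("qemu:///system")    # connect to the local hypervisor

dom = conn.lookupByName("web01")         # find the defined virtual server
if not dom.isActive():
    dom.create()                         # fire up the virtual machine

# List every guest currently running on this physical host.
for d in conn.listAllDomains(libvirt.VIR_CONNECT_LIST_DOMAINS_ACTIVE):
    print(d.name(), d.maxMemory(), "KiB")

dom.shutdown()                           # ask the guest OS to power down
conn.close()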

The concept of virtualization is not new. Mainframe computers were running multiple independent instances of an operating system simultaneously as far back as the 1970s. It's only recently, however, that software and hardware advances have made virtualization possible on industry-standard, commodity servers.

In fact, today's datacentre managers have a dizzying array of virtualization solutions to choose from. Some are proprietary, others are open source. For the most part, each is based on one of three fundamental technologies; which one produces the best results depends on the specific workloads to be virtualized and their operational priorities.

Full virtualization

The most popular method of virtualization uses software called a hypervisor to create a layer of abstraction between virtual servers and the underlying hardware. VMware and Microsoft Virtual PC are two commercial examples of this approach, whereas KVM (Kernel-based Virtual Machine) is an open source offering for Linux.

The hypervisor traps CPU instructions and mediates access to hardware controllers and peripherals. As a result, full virtualization allows practically any OS to be installed on a virtual server without modification, and without being aware that it is running in a virtualized environment. The main drawback is the processor overhead imposed by the hypervisor, which is small but significant.
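The trap-and-emulate cycle behind that overhead can be sketched in toy form. The Python below is purely conceptual; the instruction names and structures are invented for illustration and do not come from any real hypervisor.

# Toy model of trap-and-emulate in full virtualization.
PRIVILEGED = {"OUT", "HLT", "LGDT"}      # ops a guest must not run on real hardware

class Guest:
    def __init__(self, name):
        self.name = name
        self.virtual_device_log = []     # stands in for emulated hardware state

def execute_natively(instr):
    # Unprivileged instructions run directly on the CPU at full speed.
    print(f"  CPU runs {instr} natively")

def hypervisor_emulate(guest, instr):
    # The hypervisor applies the effect to the guest's virtual hardware,
    # keeping guests isolated from the real controllers and peripherals.
    guest.virtual_device_log.append(instr)
    print(f"  hypervisor emulates {instr} for {guest.name}")

def run_instruction(guest, instr):
    # Dispatch one guest instruction: trap if privileged, else run it.
    # Each trap costs extra cycles, which is where the overhead comes from.
    if instr in PRIVILEGED:
        hypervisor_emulate(guest, instr)
    else:
        execute_natively(instr)

vm = Guest("guest-1")
for op in ["ADD", "OUT", "MOV", "HLT"]:
    run_instruction(vm, op)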

In a fully virtualized environment, the hypervisor runs on the bare hardware and serves as the host OS. Virtual servers that are managed by the hypervisor are said to be running guest OSes.

Para-virtualization

Full virtualization is processor-intensive because of the demands placed on the hypervisor to manage the various virtual servers and keep them independent of one another. One way to reduce this burden is to modify each guest OS so that it is aware it is running in a virtualized environment and can cooperate with the hypervisor. This approach is known as para-virtualization.
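In the same toy terms, the difference is that a modified guest kernel calls the hypervisor explicitly rather than waiting to be trapped. As before, every name below is invented for illustration.

# Toy contrast: a para-virtualized guest kernel issues hypercalls directly
# instead of executing privileged instructions and being trapped.
def hypercall(guest_name, operation, handlers):
    # An explicit, cooperative request from guest kernel to hypervisor.
    return handlers[operation](guest_name)

# Hypervisor-side handlers for services a cooperating guest may request.
HANDLERS = {
    "set_timer": lambda g: f"timer armed for {g}",
    "flush_tlb": lambda g: f"TLB flushed for {g}",
}

# No trapping and no instruction inspection -- this is where
# para-virtualization recovers most of its performance.
print(hypercall("guest-1", "set_timer", HANDLERS))
print(hypercall("guest-1", "flush_tlb", HANDLERS))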

Xen is one example of an open source para-virtualization technology. Before an OS can run as a virtual server on the Xen hypervisor, it must incorporate specific changes at the kernel level. Because of this, Xen works well for BSD, Linux, Solaris, and other open source operating systems, but is unsuitable for virtualizing proprietary systems, such as Windows, which cannot be modified.

The advantage of para-virtualization is performance. Para-virtualized servers, working in conjunction with the hypervisor, are nearly as responsive as unvirtualized servers. The gains over full virtualization are attractive enough that both Microsoft and VMware are working on para-virtualization technologies to complement their offerings.
