VMware, now owned by EMC, created its ESX Server virtualization product for businesses that need truly enterprise-class virtualization. ESX Server 2.1.1 delivers the consolidation, dynamic provisioning, resource pooling, and all-bases-covered availability assurance normally associated with expensive system and storage hardware. But ESX Server does it with ordinary servers, modular SANs, and vanilla operating systems.
I started testing with a pair of dual-processor rack servers -- one Opteron and one pre-Nocona Xeon -- but then moved to a single Opteron DP server and a pair of stand-alone Athlon 64 FX (single-processor desktop Opteron) systems to get a better feel for ESX Server's approach to distributed management.
My expectation going into this review was that ESX Server would perform similarly to VMware's lower-end GSX Server product, just scaled for higher-volume environments. It will serve that purpose, but limiting it to the typical consolidation/isolation role strikes me as a poor investment. What's revolutionary about this product is that it creates a fabric of physical servers, VMs, and networked storage volumes that connect in any-to-any, many-to-many fashion.
VMware strongly advised me to use a heterogeneous SAN for my tests. I put an Apple Computer Inc./LSI Logic dual-port Fibre Channel adapter in each server and used an Emulex 355 storage switch to link the servers to a pair of Apple Xserve RAID disk arrays. In practice, setting up the SAN took longer than installing ESX Server and the guest operating systems, but I can't overstate ESX Server's brilliant use of networked storage. It implements its own SAN file system, replete with leading-edge features such as read/write volume sharing, file-level locking, and multipathing for transparent fail-over and volume spanning. ESX Server's virtualization layer delivers all this SAN goodness even to operating systems that don't have Fibre Channel drivers; to each guest OS the SAN looks like a simple SCSI adapter.
ESX Server handles the LAN transparently, too. When it routes around network traffic jams and card failures or relocates VMs from place to place, the guest OS is clueless. It sees the same set of network cards and the same fixed IP addresses.
VMware licenses ESX Server on a per-CPU basis. For the sake of stability, its host core is a custom Linux kernel with a limited set of bulletproof device drivers. The hardware compatibility list for ESX Server is thus very short, but all my dual-processor Opteron and Xeon systems proved compatible without alteration.
Although they are not a specific focus of this review, I used three optional VMware products in my testing: VirtualCenter, a scalable provisioning solution; Virtual SMP, which creates dual-processor VMs (a significant advance); and VMotion, which allows you to move a running VM from one physical server to another without interrupting its execution.
VMotion, in particular, serves a compellingly practical purpose. In a service-oriented environment, it can reprovision services and all their dependencies, from databases to IP addresses, grabbing and releasing resources from a pool. For example, a service that's managing a large quantity of XML data needs a fast path to storage. VMotion can move that service to a system that's loaded with Fibre Channel ports. When the service's needs return to a nominal level, VMotion can move the service back to the pool. No connections are broken, nor are any IP addresses reassigned.
Using my setup, ESX Server's SQL Server database performance was what I'd expect from a dedicated server with a slower CPU but fabulous I/O. In fact, after my research, I'd be less likely to run multiple instances of SQL Server or Oracle on one physical machine than to run one instance each in multiple VMs.
When considering ESX Server, it's vital not to lose sight of one inescapable reality: PC servers are not designed for virtualization or hardware partitioning. Although VMware ESX Server brings to x86 systems capabilities that come strikingly close to those of bigger iron, the performance overhead of doing all the virtualization work in software is substantial. Also, keep in mind that even on 64-bit hardware, ESX Server creates virtual 32-bit x86 systems, limiting the workload that each VM can take on. And ESX Server's network interconnects can't match the compute cycle aggregation offered by monolithic multiprocessing servers and by blades with fast backplanes. But so much for the bad news.
For all the time I've spent with ESX Server, it will take a lot longer to uncover all of its complexities, but I know this much: There is nothing PC-like about x86 servers running this product. Those coming down to x86 from SPARC, Power, or PA-RISC hardware should consider no option other than ESX Server. And those running more than a rack's worth of x86 servers should think seriously about trading some raw performance, so often wasted, for the high-availability, ultimately reconfigurable server infrastructure that this product enables. It's remarkable -- even marvelous -- to see VMware carry IT so far with software that fits on two CDs.