Those of you who weren't able to attend the InfoWorld Virtualization Executive Forum in New York this week missed out on a fascinating show. A panel discussion I moderated on virtualization and Linux demonstrated that the open source community remains very much interested and engaged with this topic. But one thing that struck me and several of my colleagues, based on audience reaction to the various sessions, was just how early we still are in the lifecycle of virtualization technologies.
Case in point: the convergence of server virtualization and storage virtualization. Vendors and analysts seem to agree that, while these technologies aren't closely related under the hood, they do support and feed into each other because they serve similar goals. Both improve resource utilization and make IT management easier.
Nonetheless, customers tend to view server and storage virtualization very differently. In a lunchtime presentation at the show, InfoWorld's vice president of marketing, Paul Calento, shared the results of a recent survey of IT managers. The majority of participants in that survey responded that server virtualization could be achieved relatively easily and inexpensively, and that they planned to do it using best-of-breed products from a variety of mainstream vendors. By comparison, those same respondents felt that storage virtualization could only be achieved expensively and with difficulty, and that they expected to use a solution from a single vendor.
These results can hardly come as a shock to anyone who has ever managed enterprise storage, but they're interesting all the same. Clearly, open source plays a role in this. While there are plenty of server virtualization solutions and tools that support or are based on open source, there has been comparatively little movement on open source storage virtualization. InfoWorld's storage guru, Mario Apicella, pointed out some open source tools in this week's Storage Insider column, but even he admits that the words "open source" and "storage" don't often go together. While much of the software industry has been drifting steadily toward open source, storage remains staunchly proprietary.
So I took the opportunity at the Virtualization Executive Forum to sit down with Brian Stevens, the CTO of Red Hat, to learn if anything was being done about this. Like Novell, Red Hat is very involved in bringing Xen virtualization technology to its enterprise Linux distribution. I wanted to know if anything similar was planned for storage.
Unfortunately, Stevens' answer was no -- at least, not yet. While he pointed out that technology like Red Hat's open source GFS (Global File System) can help virtualized server environments make more effective use of storage, higher-level storage virtualization software remains beyond the reach of open source developers.
That could be changing, however. Creating open source storage software is difficult today because doing so requires access to a broad range of storage technologies, many of which are proprietary and undocumented. Independent developers just don't have the resources to tackle the problem. But the desire for cost savings is driving the industry toward a new set of technologies and standards -- including iSCSI and 10 Gigabit Ethernet -- that are considerably more open.
As we've seen in other segments of the software market, these open standards are the key. Once they're in place, developers can begin to build on their foundation, stacking blocks higher and higher up the software stack until they've reached the level of today's proprietary vendors. It will take time, certainly. But Stevens doesn't think storage will stay immune to the open source momentum forever, and neither do I.