When the idea of storage virtualization first moved up from the backwater of mainframe computers to debut in glitzy, modern IT networks, many heralded the concept as a Holy Grail of storage techniques.
In its purest form, virtualization allows users to add storage capacity using inexpensive, commodity disk and tape drives and to dynamically manage those storage resources as virtual storage pools with little regard for what physically resides on the back end.
But now virtualization has to prove it has staying power. As it gradually emerges as an enterprise storage technique in products from giants such as IBM, EMC, Hewlett-Packard, and Compaq Computer, it remains mired in hype and confusion generated by a multitude of smaller vendor offerings, some of which deliver only a component of the overall potential often associated with the term virtualization.
"There's still a little hype, but virtualization is about to become a very important market sector," explains Tony Prigmore, a senior analyst at Enterprise Storage Group Inc. in Milford, Mass. "It is definitely one of the five key elements that will become a part of any comprehensive enterprise storage network."
Those five key elements consist of storage resource management, storage network management, policy management, data management, and virtualization. For virtualization to come of age, "what needs to happen is education," says Prigmore, who believes enlightenment will ultimately come from the industry giants.
"Small companies with real products and real solutions created the initial energy around appliance-based virtualization. And shortly, the big companies with their virtualization offerings will help educate customers and help companies deploy virtualization," Prigmore says.
An array of hosts
For years vendors such as EMC, Network Appliance Inc., and others have offered storage virtualization at the disk array or hardware level, whereas software companies such as Veritas Software Corp. have offered virtualization at the host level.
More recently, a number of vendors such as StorageApps Inc., DataCore Software Corp., XIOtech Corp., FalconStor Software Inc., and StoreAge Networking Technologies Ltd. have arrived offering storage virtualization at the network level, many in the form of storage appliances. But opinions on the right way to virtualize at the network level have differed significantly among the newer players, which has frustrated end-users attracted to virtualization for its simplicity.
"Vendors don't like to play together," says John Blackman, a systems analyst for the enterprise emerging technologies and consulting division at Wells Fargo & Co. in Minneapolis. Despite the proliferation of vendors in the standards bodies, "no one truly wants to enable the other" for fear of losing their competitive edge, he says.
The SAN also rises
This is typical in the world of SANs (storage-area networks). SANs exist, Blackman says, but not the way they should. "[Network administrators] make SANs work because CEOs and CIOs have been sold on the idea that we can make them work," he says.
The SAN example speaks volumes. In a homogeneous, single-vendor SAN environment, techniques such as disk partitioning give IT managers some measure of virtualization, but scaling out means accepting vendor lock-in. In a heterogeneous, multivendor SAN environment, competing standards and incompatible technologies force IT managers to use virtualization as a way to view data abstractly across different storage systems. But that model makes it difficult to commoditize the hardware, which in turn would reduce the cost of storage devices and create a common platform to ease storage management.
Blackman's graphic illustration of this user dilemma: "Do I slit my throat or stab my gut?"
Supporting Blackman's view is Jon Toigo, an independent consultant and author well-known in the storage industry. Toigo believes Fibre Channel SAN's intrinsic nature as a switched, point-to-point connection makes the idea of a virtual network difficult. As a result, Toigo says that conceptualizing the role virtualization can play as a storage management tool in SANs is problematic. "Most of the [vendor] ideas right now are half-baked," he says.
Getting a real-time grip on a massive enterprise storage network to virtualize storage resources may simply be too much to ask right now, says Bruce Backa, CTO of emerging storage management company NTP Software in Manchester, N.H.
Backa believes that virtualization is an element of storage management that will ultimately be integrated into the entire IT infrastructure.
The definition of virtualization itself is a moving target, particularly because the vendors who are eager to deliver services to the customers inquiring about virtualization almost always find a way to do so.
"We have always used virtualization here, although we never called it out as the 'V' word," explains Rod Mathews, director of investor relations and former manager of technology and strategy at Network Appliance, an industry-leading NAS (network-attached storage) company in Sunnyvale, Calif.
"Our approach is when customers ask about virtualization, we typically turn that around and ask, 'What do you want to do? Why are you concerned about virtualization?' And typically the answer is around the line of, 'I don't want to be dependent on the physical infrastructure behind what I'm trying to do [with storage],' " Mathews says.
The way virtualized data is distributed, or striped, across multiple disks is another method subject to individual vendor definition. Here, two schools of thought collide.
"Our subsystem, the Magnitude, is a virtualized SAN," explains Richard Blaschke, executive vice president of marketing at XIOtech in Eden Prairie, Minn. "And what we do is take all of the physical storage devices and we create a transparent pool and allow the user to simply ask for the amount of storage he [or she] wants and we stripe the data across the physical disks in the pool, regardless of the size of the disks."
The difference between the XIOtech approach and most of the competition, says Blaschke, is that other vendors' virtualization technology is inhibited by the variety of disk sizes making up the storage network.
"If you look at disk drives as buckets of water, and you have a five-, a 10-, and a 20-gallon bucket, what [the competition does] is they put [the buckets inside] a big tub. What we do is we take those same buckets and we dump them [out into] the tub and have fluid capacity, and utilize all the available space on all the drives," Blaschke says.
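The "dump the buckets into the tub" idea can be sketched in a few lines of code. This is a minimal illustration under assumed names (the `Pool` and `Disk` classes and the 64MB extent size are hypothetical, not XIOtech's actual implementation): disks of different sizes contribute all of their capacity to one pool, and a virtual volume is allocated in fixed-size extents striped round-robin across every disk, so no bucket's capacity is stranded.

```python
# Sketch of pooled striping across heterogeneous disks. Hypothetical design:
# capacity is carved into fixed-size extents; a volume is allocated
# round-robin across all disks in the pool, regardless of disk size.

EXTENT_MB = 64  # allocation unit; real products pick their own granularity

class Disk:
    def __init__(self, name, size_mb):
        self.name = name
        self.free_extents = size_mb // EXTENT_MB

class Pool:
    def __init__(self, disks):
        self.disks = disks

    def free_mb(self):
        return sum(d.free_extents for d in self.disks) * EXTENT_MB

    def create_volume(self, size_mb):
        """Allocate extents round-robin across all disks with free space."""
        needed = -(-size_mb // EXTENT_MB)  # ceiling division
        if needed * EXTENT_MB > self.free_mb():
            raise ValueError("pool exhausted")
        layout = []
        while needed:
            for d in self.disks:
                if needed and d.free_extents:
                    d.free_extents -= 1
                    layout.append(d.name)
                    needed -= 1
        return layout  # ordered list of disks holding each extent

# Three "buckets": 5-, 10-, and 20-gallon, scaled here as MB capacities
pool = Pool([Disk("d5", 5_000), Disk("d10", 10_000), Disk("d20", 20_000)])
vol = pool.create_volume(1_024)  # the user asks for capacity, not for disks
print(len(vol), "extents spread across", sorted(set(vol)))
```

The point of the sketch is the last line: the requester names an amount of storage, and the extents land on all three disks, including the smallest, rather than the volume being confined to whichever single "bucket" happens to fit it.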
Then there is the question of in-band vs. out-of-band storage virtualization.
Out-of-band virtualization manages the data passing between the application and the switch or server on its way to a networked storage device, but it does so from outside the direct data path: the management tools connect directly to the switch or server that holds the communication drivers, so the virtualization layer never becomes a data "choke point."
In-band virtualization sits in the data path between the application and the switch.
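The architectural difference can be made concrete with a short sketch. The class names and mapping layout here are hypothetical, not any vendor's actual API: the in-band appliance handles every I/O itself, which is what lets it add value such as caching, while the out-of-band service only resolves virtual-to-physical mappings and the host then reads the disk directly.

```python
# Sketch of the two placements of a virtualization engine (hypothetical
# classes). In-band: every request flows through, and may be enhanced by,
# the engine. Out-of-band: the service only redirects; data never touches it.

class Disk:
    def __init__(self, blocks):
        self.blocks = blocks
    def read(self, lba):
        return self.blocks[lba]

class InBandAppliance:
    """Sits in the data path: sees, and may enhance, every request."""
    def __init__(self, mapping, disks):
        self.mapping, self.disks, self.cache = mapping, disks, {}
    def read(self, vlba):
        if vlba in self.cache:           # value-add only possible in-band
            return self.cache[vlba]
        disk, lba = self.mapping[vlba]
        data = self.disks[disk].read(lba)
        self.cache[vlba] = data
        return data

class OutOfBandService:
    """Outside the data path: hands back a mapping, never touches the data."""
    def __init__(self, mapping):
        self.mapping = mapping
    def resolve(self, vlba):
        return self.mapping[vlba]

disks = {"a": Disk(["x0", "x1"]), "b": Disk(["y0", "y1"])}
mapping = {0: ("a", 0), 1: ("b", 1)}

inband = InBandAppliance(mapping, disks)
print(inband.read(1))            # the appliance performs the read itself

oob = OutOfBandService(mapping)
disk, lba = oob.resolve(1)       # the host asks only for the map...
print(disks[disk].read(lba))     # ...then goes straight to the disk
```

This is also the crux of the vendor argument below: because the out-of-band service never sees the data, it can redirect I/O but cannot transform it, whereas the in-band appliance can cache, mirror, or otherwise "make the storage look better."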
Augie Gonzalez, director of product marketing at DataCore Software in Fort Lauderdale, Fla., an in-band virtualization company, believes in-band virtualization is the only way to go.
"The way we look at it is in-band is the only way you can enhance the value of the I/O stream," Gonzalez says. "You can make the storage look better than it did originally. And [in-band] is being done already by putting in virtualization nodes that become part of the intrinsic infrastructure. They fit between the storage client and the disk arrays. Most out-of-band virtualization tools, all they can do is redirect data. If it's still a slow disk, you can't do much about it."
The controversy surrounding in-band virtualization and out-of-band virtualization will intensify as larger companies including HP, IBM, Compaq, and Dell Computer Corp. go forward with storage virtualization.
HP recently acquired StorageApps, a company that uses in-band virtualization in its SANLink product. StorageApps' SANLink technology will likely become the backbone of HP's Federated Storage Area Management initiative, which is targeted at assisting companies in making the most of their heterogeneous storage environments.
According to an IDC report, few companies offer open-system functionality, despite the fact that many of them have products similar to StorageApps technology. In addition, one of StorageApps' largest customers is Dell, which will likely continue to use StorageApps' in-band technology in its SAN appliances.
Compaq, on the other hand, is currently developing its VersaStor product, out-of-band storage virtualization technology, which will compete with SANLink. IBM enters the picture in collaboration with Compaq, as the two companies recently signed a three-year cross-manufacturing agreement that includes the development of SAN virtualization software and dual support for Compaq VersaStor technology working with IBM hardware, according to IDC.
Add to the mix a potential merger in the works between HP and Compaq, and you understand the dilemma now facing IT executives such as Bruce Jacobs, director of datacenter operations at ChoicePoint Direct in Peoria, Ill., over which technology to choose.
Jacobs recognizes that HP now faces the choice of pursuing two different virtualization strategies with the impending Compaq merger -- in-band or out-of-band data management.
"It's a question of what does virtualization mean, and which method is going to rule," Jacobs says.