For years, server and storage management have been on parallel, but separate, technology development tracks. IT executives, confronted with quickly multiplying numbers of servers and storage arrays, put them there. They've needed to treat these platforms as distinct entities that require different networks, management strategies and even staffs to maintain efficiencies within the data center.
But as companies roll out data center architectures, these two islands need to blend. Convergence, needed to further simplify data center operations and improve efficiency, is becoming possible thanks to an array of new and emerging technologies. These include data center service management and automation tools, blade servers, utility and grid computing, storage-area networks (SAN), grid storage, information life-cycle management (ILM), policy-based management tools and the all-important virtualization.
Virtualization is not a new concept in the server or storage markets. Companies already are benefiting from the ability to create distinct server and storage resource pools, masking the physical components from users and applications. But integrated server and storage virtualization holds the key to true management convergence.
Where virtualization got its start
In the server market, virtualization surfaced initially for use with mainframes. In this environment, virtualization tools assisted in workload management and improved utilization.
In the late 1990s, virtualization tools emerged for Unix and Windows servers. These let multiple virtual operating systems run on one physical machine while remaining logically independent, each with a consistent hardware profile. Sometimes referred to as server resource management, these tools include partition managers, virtual machines, virtual partitions and logical partitions. Such tools have grown in importance as a means to improve server utilization rates, as well as to better align and manage application performance on different server platforms, ranging from blade servers to large symmetric multiprocessing systems.
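The core partitioning idea is easy to sketch: a physical host's resources are carved into logically independent virtual machines, and the virtualization layer refuses allocations that would overcommit the hardware. The classes and fields below are illustrative assumptions, not any vendor's actual API:

```python
# Illustrative sketch of server partitioning: a physical host's CPUs and
# memory are divided among logically independent virtual machines.
from dataclasses import dataclass, field

@dataclass
class PhysicalHost:
    cpus: int
    memory_gb: int
    vms: list = field(default_factory=list)

    def create_vm(self, name, cpus, memory_gb):
        # Reject any allocation that would exceed physical capacity.
        used_cpus = sum(vm["cpus"] for vm in self.vms)
        used_mem = sum(vm["memory_gb"] for vm in self.vms)
        if used_cpus + cpus > self.cpus or used_mem + memory_gb > self.memory_gb:
            raise RuntimeError("insufficient physical capacity")
        vm = {"name": name, "cpus": cpus, "memory_gb": memory_gb}
        self.vms.append(vm)
        return vm

    def utilization(self):
        # Fraction of physical CPUs committed to virtual machines --
        # the number a server resource manager tries to push higher.
        return sum(vm["cpus"] for vm in self.vms) / self.cpus

host = PhysicalHost(cpus=8, memory_gb=32)
host.create_vm("web01", cpus=2, memory_gb=4)
host.create_vm("db01", cpus=4, memory_gb=16)
print(host.utilization())  # 0.75
```

The utilization figure is exactly what these tools surface to administrators: consolidating two workloads onto one host raises the committed share of the hardware.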
In storage, the earliest use of virtualization emerged in the early 1990s with the first RAID subsystems, which essentially combined data protection with capacity aggregation. By the late '90s, storage virtualization appliances arrived, aimed at improving management and utilization. Since then, storage virtualization has evolved from a stand-alone technology to a feature of storage infrastructure management tools. This means it resides on host servers, on storage arrays or, increasingly, on intelligent switches in the storage network.
Storage virtualization also has enabled higher-level management functions. With a virtualization feature, data management tools can better handle snapshots, replication, capacity on demand and policy-based decisions. Volume management, also considered a form of virtualization, has become a mandatory part of most data centers with storage networks and large storage arrays. In the coming years, it increasingly will be a feature of entry-level storage arrays that target IP storage and entry-level storage networks.
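The volume-management idea above can be sketched in a few lines: a logical volume maps onto extents drawn from a pool of physical disks, and a snapshot is simply another logical view of the same mapping at a point in time. This is a minimal, assumed model for illustration, not any product's implementation:

```python
# Minimal sketch of a volume manager: logical volumes are mappings onto
# extents drawn from a pool of physical disks.
class StoragePool:
    def __init__(self, disks):
        # disks: dict of disk name -> free capacity in GB
        self.free = dict(disks)
        self.volumes = {}

    def create_volume(self, name, size_gb):
        # Satisfy the request from whichever disks have free space;
        # the caller never sees the physical layout.
        extents, needed = [], size_gb
        for disk in sorted(self.free):
            take = min(needed, self.free[disk])
            if take:
                extents.append((disk, take))
                self.free[disk] -= take
                needed -= take
            if needed == 0:
                break
        if needed:
            raise RuntimeError("pool exhausted")
        self.volumes[name] = extents
        return extents

    def snapshot(self, name, snap_name):
        # A snapshot here is a point-in-time copy of the
        # logical-to-physical mapping.
        self.volumes[snap_name] = list(self.volumes[name])

pool = StoragePool({"diskA": 100, "diskB": 100})
pool.create_volume("vol1", 150)   # spans both disks transparently
pool.snapshot("vol1", "vol1_snap")
print(pool.volumes["vol1_snap"])  # [('diskA', 100), ('diskB', 50)]
```

Because higher-level tools operate on the logical mapping rather than the physical disks, features such as snapshots, replication and capacity on demand become metadata operations layered on the same abstraction.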
Toward the fully virtualized data center
Such evolving server and storage virtualization capabilities have prompted IT executives to begin rethinking their traditional, regimented, device-driven, client/server data center architectures. Virtualization lets them consider a model in which they organize data center components as shared resources. This will culminate in an environment where all storage, server and network resources are virtualized into one pool.
The shift toward this ideal accelerates at each layer as new technologies take advantage of the growing computing and network power available to application sets. As IT executives reassess how to deploy and manage these technologies into a more service-driven utility data center architecture, large system vendors and start-ups will roll out technologies that will drive the evolution of the data center into a utility model.
Regarding the convergence of server and storage virtualization, management tools that tie together the provisioning and utilization of servers and storage in various ways will start emerging in the next several years. Most will come under the guise of the emerging data center automation market, which will grow to more than $1 billion in revenue by 2006 as more customers deploy blade servers, new generations of storage arrays and storage management tools, and larger storage networks.
Over the next five years, labor-intensive, manual tasks handled piecemeal today will migrate to automated and highly intelligent tasks. (It remains to be seen how multivendor and heterogeneous these approaches will be, as most virtualization tools available today are tied to specific hardware platforms or operating systems.)
Integrated server and storage virtualization will occur as IT executives change the way they deploy data center infrastructure. This means shifting operations, application services and hardware infrastructure into more of a service model, commonly referred to as utility computing (and marketed under a number of vendor-specific names). Industry drivers that will influence the integration of server and storage virtualization longer term include:
Data center service management: Many IT executives increasingly want to assign and maintain service levels at the application level, which will require better management of server and storage resources as groups instead of single entities.
Data center automation tools: These tools, which provision application, network and other resources, increasingly will take advantage of the availability, monitoring and utilization capabilities of server and storage virtualization collectively.
Blade servers: As more customers deploy blade servers, the need to virtualize the server hardware will increase to mask the physical number of servers working on a specific application. At the same time, these servers will need to integrate tightly with storage networks because of the reliance on network storage.
Network storage: Today, storage virtualization is actively used to manage SANs. As storage networks proliferate across Fibre Channel and IP, the need to integrate with server virtualization technologies will increase at the array, on the host and within the storage network itself.
Grid computing and storage: Grid computing and grid storage technologies rely on virtualization to develop a common pool of resources (servers and storage). As these models accelerate with more commercial deployments, corporate IT executives increasingly will want to view resources via a master management console that gauges their availability, performance and utilization.
ILM: This is a new storage deployment philosophy for managing the life cycle of data from its creation to deletion. As part of this environment, the need to maintain service levels in support of specific application services will require management tools that can tap into server and storage virtualization to monitor the environment.
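Several of the drivers above converge on the same requirement: a master management console that gauges availability, performance and utilization across pooled resources rather than single devices. A hedged sketch of that aggregation, with resource kinds and fields that are assumptions for illustration only:

```python
# Sketch of a "master console" view: aggregate utilization across
# virtualized server and storage pools instead of per-device.
def aggregate_utilization(resources):
    """resources: list of dicts with 'kind', 'capacity' and 'used' keys."""
    totals = {}
    for r in resources:
        cap, used = totals.get(r["kind"], (0, 0))
        totals[r["kind"]] = (cap + r["capacity"], used + r["used"])
    # One utilization figure per resource kind, not per device.
    return {kind: used / cap for kind, (cap, used) in totals.items()}

pool = [
    {"kind": "server", "capacity": 16, "used": 12},     # CPUs
    {"kind": "server", "capacity": 8,  "used": 2},
    {"kind": "storage", "capacity": 500, "used": 300},  # GB
    {"kind": "storage", "capacity": 500, "used": 100},
]
print(aggregate_utilization(pool))
```

Here the console reports roughly 58% server utilization (14 of 24 CPUs) and 40% storage utilization (400 of 1,000GB), which is the group-level view that data center service management requires.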
Two types of vendors will be actively involved in the convergence of server and storage virtualization in the coming years. The first group includes the system (storage and server) vendors, including Dell, EMC, HP, IBM, Network Appliance and Sun. Today, many provide management tools that are platform- or operating-system-specific for storage and server virtualization.
However, many of them clearly will be integrating server and storage management platforms to address their customers' long-term management needs. An example of the convergence is EMC's purchase of VMware. This convergence should result in a consolidated platform that integrates virtualization, volume management and other infrastructure management components from EMC's storage products with VMware's server virtualization products.
The second group includes management software vendors that are tackling growing layers of data center management from the application level through the back-end storage systems. This includes companies such as Veritas Software and Computer Associates. Veritas has picked up a number of companies over the past year to broaden its data center management strategy. It acquired Ejasent for application-level virtualization and availability, and Jareva for server provisioning. Veritas likely will integrate these new products with its volume management and other storage virtualization tools.
The convergence of server and storage virtualization will accelerate over the next year as vendors begin using virtualization technologies to differentiate themselves. The first integration wave will be product-specific - meaning vendors will tie functionality directly into their own server or storage management strategies. At the same time, server virtualization tools will continue to integrate more aggressively with broader policy-based management tools and frameworks over the next 24 months.
Integrated server and storage virtualization is predicted to arrive starting in early 2005, with full integration occurring during the next three years. By 2007, server virtualization will be a common way to manage server utilization, availability and provisioning, especially for industry-standard servers. At the same time, storage management tools will take advantage of storage virtualization as a feature that organizes storage capacity, either volumes or files. As a result, intelligent, policy-based storage management tools will be able to focus more on what the data actually represents and less on the actual location of the data. Server management tools, in turn, will leverage this same information to improve application performance, availability and server utilization.
A missing piece
The lack of a standard way for vendors' tools to communicate with each other presents a problem. And given heterogeneous enterprise server and storage environments, such a standard will be essential if integration is to work.
Today, the storage market has begun the shift to a standard way for device management. This standard, the Storage Management Initiative Standard, will give vendors a common way over time to perform storage virtualization. No similar standard exists in the server virtualization market, although many vendors have expressed interest in building an industry-standard API to allow different server management tools to communicate. A likely scenario is that the Distributed Management Task Force (DMTF) develops such a standard, which could take three to four years to complete. The DMTF has begun to take a strong role in defining how utility computing components will speak to each other, and standardizing the virtualization layers will be crucial to any standard in the utility computing market.
To be sure, the vision is attainable. One day, integrated server and storage virtualization will bridge the management islands that today divide data centers along hardware-platform, operating-environment and vendor lines. In the longer term, if an administrator brings a new server online, storage provisioning should happen automatically. In managing the environment, IT should clearly see the relationships between servers and the storage environment. This includes paths between servers and storage, awareness of which servers and storage are hosting application services, and integration with policy tools that manage thresholds, capacity and overall availability and performance.
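The automatic-provisioning scenario above can be sketched as an event handler: when a new server registers, storage is carved out of a shared pool according to a per-application-tier policy, and the server-to-storage relationship is recorded where IT can see it. The policy values and event model are illustrative assumptions:

```python
# Sketch of policy-driven provisioning: a new server coming online
# automatically triggers a storage allocation from a shared pool.
POLICY = {"web": 50, "database": 200}   # GB to provision per server tier

class DataCenter:
    def __init__(self, pool_gb):
        self.pool_gb = pool_gb
        self.assignments = {}   # server name -> provisioned GB

    def server_online(self, name, tier):
        # Look up the policy for this application tier and allocate.
        size = POLICY.get(tier)
        if size is None:
            raise ValueError(f"no policy for tier {tier!r}")
        if size > self.pool_gb:
            raise RuntimeError("storage pool exhausted")
        self.pool_gb -= size
        self.assignments[name] = size   # visible server-storage mapping
        return size

dc = DataCenter(pool_gb=500)
dc.server_online("db07", "database")
dc.server_online("web12", "web")
print(dc.assignments, dc.pool_gb)   # {'db07': 200, 'web12': 50} 250
```

The point of the sketch is the mapping: because provisioning flows through policy, the relationships between servers and storage are recorded as a side effect rather than reconstructed by hand.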
Lastly, an integrated server and storage virtualization strategy could make the concept of autonomic computing a reality: server and storage infrastructure that self-heals, dynamically adjusts as requirements increase or decrease, and transparently migrates applications across servers and storage systems.
- Gruener is the primary analyst focused on the server and storage markets for The Yankee Group. His coverage area includes storage management, storage best practices, storage systems, storage networking and server technologies. He can be reached at firstname.lastname@example.org.
An IT checklist
To prepare today for tomorrow's integrated server and storage management: