Product Guide: EMC, IBM, and HP in SAN-to-SAN combat

Midtier SAN solutions fall into arguably the most competitive and diversified segment of the storage market, suitable for companies that find million-dollar, top-tier solutions too expensive and yet cannot live with the limited performance of sub-US$100,000, entry-level SANs.

But there are factors beyond performance and sticker price that can lead companies to choose midtier SANs. For example, a corporate structure with distributed locations may need multiple, connected midtier SANs costing just as much as a top-tier solution. In that scenario, however, a single monolithic solution simply would not work.

Furthermore, a company can start with a single midtier SAN and add more connected units at the same location as the business grows. In both examples, "connected" is the key word, because this breed of SAN offers storage exactly where and when it's needed, while keeping the units in step through data replication and remote mirroring.

Operating-system support is another important distinction. If you run on mainframes, think top-tier, but companies that choose a midtier solution need support for Linux, Novell NetWare, major Unix flavors, and Windows, often in the same SAN.

In essence, any ideal SAN should offer good performance and reliability, easy management, and interoperability with a variety of environments. For the midtier, however, interoperability and management become even more important, because a customer is more likely to implement multiple solutions, perhaps from different vendors.

Sun and SAN in Hawaii

With our SAN criteria in mind, I set out to compare strengths and weaknesses of three major midtier SAN solutions from EMC Corp., Hewlett-Packard Co., and IBM Corp. at the University of Hawaii Advanced Network Computing Lab. We asked each vendor to provide a preconfigured solution that included at least one storage array, limited to two disk enclosures, two FC (Fibre Channel) switches, and two host computers. Vendors were also invited to provide all the ancillary equipment; in essence, everything needed for a turnkey system.

All three vendors responded by sending equipment well in excess of our minimum requirements, to the point that our lab space became a showcase of the best the companies had to offer.

I designed my tests to assess key SAN criteria: interoperability, management, performance, reliability, and value. Establishing general criteria to assess performance for complex systems such as these is difficult, and performance is subjective, based on customer needs. Nevertheless, I used Iometer scripts to simulate read/write traffic between servers and storage arrays, verifying that the test results were within the performance range published by each vendor.
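For readers who want to approximate that kind of load without Iometer, here is a rough Python sketch that issues a mix of random reads and writes against a test file. The path, block size, and 70/30 read/write split are illustrative assumptions, not the access specifications we used, and unlike Iometer it does not bypass the operating system's cache.

    # Illustrative only: a rough stand-in for an Iometer-style access pattern.
    # The target path, block size, and read/write mix are assumptions.
    import os, random, time

    TARGET = "/tmp/san_test.dat"    # hypothetical test file on a SAN-backed volume
    BLOCK = 8 * 1024                # 8KB transfers
    FILE_SIZE = 256 * 1024 * 1024   # 256MB working set
    DURATION = 30                   # seconds
    READ_RATIO = 0.7                # 70% reads, 30% writes

    # Pre-create the working set so every random offset is valid.
    with open(TARGET, "wb") as f:
        f.truncate(FILE_SIZE)

    ops = 0
    deadline = time.time() + DURATION
    with open(TARGET, "r+b") as f:
        while time.time() < deadline:
            f.seek(random.randrange(0, FILE_SIZE - BLOCK + 1, BLOCK))
            if random.random() < READ_RATIO:
                f.read(BLOCK)
            else:
                f.write(os.urandom(BLOCK))
            ops += 1

    print(f"{ops} I/Os in {DURATION}s, about {ops * BLOCK / DURATION / 1e6:.1f} MB/s")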

For management testing I observed the vendors' engineers executing a series of basic management tasks. I also ran these tests independently and, to test reliability features, in combination with load simulation on target hardware, noting the impact of a failure on active applications.

After these tests, I asked the vendors to consolidate the three SANs into a single fabric. The resulting multivendor storage network was also the test bed for a subsequent round of tests of storage management software, the results of which we'll publish in the near future.

All three SANs I reviewed are first-class, hard-to-break, easy-to-manage, powerful storage solutions, and they all performed similarly well in our tests. No company would go too wrong choosing any of these solutions. However, there are some differences.

IBM's TotalStorage SIS (SAN Integration Server), the new kid on the block, hit the ground running with an architecture that holds plenty of promise for easier future integration and for preserving hardware investments, while showing robustness, performance, and management tools that yield nothing to its competitors'. In addition, SIS is the least expensive solution in the group, which earned it the top ranking in our scoring.

The EMC Clariion CX400 sets a new mark in price per megabyte for midrange SANs: If you need a balance of fast and reliable disk space and plenty of capacity for reference data and online backups, the CX400 has no challengers. In addition, you can count on a large portfolio of management apps to get the most out of your investment.

The HP StorageWorks EVA 3000 should appeal to companies that favor easy, foolproof administration and value long-term savings on management costs over a more moderate upfront investment. Despite very good resilience and performance, and the easiest-to-use management tools in the group, the cost of the EVA 3000 was a surprise: it is priced noticeably higher than its rivals.

IBM TotalStorage SAN Integration Server

Big Blue entered the shootout with its new TotalStorage SAN Integration Server, which includes a virtualization engine, SVC (SAN Volume Controller); the Master Console for system administration; and the FAStT600 (Fibre Array Storage Technology T600) storage array with two disk enclosures, each mounting 14 36GB FC drives, for a total capacity just above one terabyte.

IBM's hardware also included the FAStT200 and the FAStT900 storage arrays, an LTO-2 (Linear Tape-Open)-based backup device, and a collection of various hosts including AIX, Linux, Sun Solaris, and Windows servers. However, all this hardware didn't generate messy cabling because the IBM rack has probably the best cable-management system in our group.

SIS, which began shipping at the end of July, combines a SAN centered around the FAStT600 with the SVC virtualization engine, which acts as an intermediary between storage arrays and application servers. More accurately, SVC behaves as a storage retailer, acquiring LUNs (logical unit numbers) in bulk from storage arrays, breaking that space into discrete segments, and assigning virtual disks, based on those segments, to application servers connected to the same fabric.
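As a rough illustration of that retailer model (my own sketch, not IBM's data structures), the Python snippet below pools back-end LUNs as fixed-size extents and builds virtual disks from them; the 16MB extent size and the names are assumptions.

    # Illustrative sketch of block virtualization in the SVC style: back-end
    # LUNs are pooled as fixed-size extents, and virtual disks are built from
    # them. Extent size, names, and classes are assumptions, not IBM's design.
    EXTENT_MB = 16

    class ManagedDisk:
        """A back-end LUN acquired 'in bulk' from a storage array."""
        def __init__(self, name, size_mb):
            self.name = name
            self.free_extents = [(name, i) for i in range(size_mb // EXTENT_MB)]

    class VirtualDisk:
        """What an application server actually sees."""
        def __init__(self, name, extents):
            self.name = name
            self.extents = extents   # ordered (mdisk_name, extent_index) pairs

    class Pool:
        def __init__(self, mdisks):
            self.mdisks = mdisks

        def create_vdisk(self, name, size_mb):
            needed = -(-size_mb // EXTENT_MB)   # round up to whole extents
            extents = []
            for mdisk in self.mdisks:           # fill from each back-end LUN in turn
                take = mdisk.free_extents[:needed - len(extents)]
                del mdisk.free_extents[:len(take)]
                extents.extend(take)
                if len(extents) == needed:
                    return VirtualDisk(name, extents)
            raise RuntimeError("pool exhausted")

    # The host gets a virtual disk; the arrays behind it stay hidden.
    pool = Pool([ManagedDisk("FAStT600_LUN0", 512), ManagedDisk("FAStT600_LUN1", 512)])
    vdisk = pool.create_vdisk("web_server_data", 200)
    print(vdisk.name, "spans", len(vdisk.extents), "extents")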

With SVC, IBM makes block virtualization and storage provisioning independent from basic array management, an architectural difference from the other solutions that should simplify adding support for other storage devices. Although SVC can currently handle only FAStT and ESS (Enterprise Storage Server) storage arrays, IBM plans to add support for storage devices from other vendors in future releases.

SVC is based on a redundant cluster of two or four Linux nodes, each offering a 2GB, automatically synchronized cache, protected from power failures by two UPS systems. Even in a two-node configuration, SVC offers a larger cache than its competitors, with obvious performance benefits. In fact, although I measured very good performance on all the arrays in the group, SIS results were consistently a notch better, probably because of the faster drives mounted in the FAStT600 and the larger cache between hosts and arrays.

To simplify setup and administration, SIS ships with FC switches preconfigured to accommodate currently installed devices and future updates. To add the FAStT900 array, I browsed the documentation for available array ports, and then ran the cables between device and switches without having to reconfigure zones. However, storage administrators who add SIS to a complex network can implement their own zoning, adhering to restrictions dictated by the SVC architecture. For example, hosts and arrays should reside in different zones.
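That restriction is easy to sanity-check; the snippet below is a hypothetical example (all zone and port names are invented) that flags any zone mixing host and array ports.

    # Hypothetical check of the SVC zoning rule described above: host ports
    # and array ports should not share a zone. All names here are invented.
    zones = {
        "host_zone_a":  ["aix_host_p0", "win_host_p0", "svc_node1_p0"],
        "array_zone_a": ["fastt600_ctrlA", "fastt900_ctrlA", "svc_node1_p1"],
        "bad_zone":     ["win_host_p1", "fastt600_ctrlB"],   # mixes the two
    }

    def port_kind(port):
        if port.startswith(("aix_", "win_", "linux_", "sun_")):
            return "host"
        if port.startswith(("fastt", "ess_")):
            return "array"
        return "svc"   # SVC node ports can appear in either zone type

    for name, members in zones.items():
        kinds = {port_kind(p) for p in members}
        if {"host", "array"} <= kinds:
            print(f"WARNING: zone '{name}' mixes host and array ports")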

The SIS is managed via the Master Console, a Windows 2000 server with preinstalled software for monitoring and diagnostics. The eponymous management application for SVC has a CLI and a browser-based GUI. Interestingly, access to the SVC management software from either the CLI or the GUI is restricted via public/private key authentication.

I found SVC easy to manage, but the GUI has some annoying cosmetic glitches, such as having to decline filtering for lists of managed objects or having to refresh the window to see the results of some commands. Nevertheless, SVC's comprehensive set of features is neatly arranged and, similar to NaviSphere, gives access to provisioning and volume-copy tools from the same window.

SVC doesn't offer the rich set of software available for the Clariion, but the two apps IBM included (FlashCopy, which creates replicas of a volume for use by other applications, and Peer to Peer Remote Copy, which maintains synchronized fail-over volumes locally or remotely) offer powerful options that should satisfy most requirements. For example, FlashCopy doesn't disrupt operation on the source volume and can copy consistent images of related volumes (typically, database files spread across multiple volumes) in a single operation.
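The value of that single-operation consistency is easier to see in code. The sketch below is my own illustration of the general technique, not IBM's FlashCopy internals: it freezes a set of related volumes, copies them as one point-in-time group, then resumes I/O.

    # Illustrative consistency-group copy in the spirit described above:
    # quiesce all related volumes, snapshot them as a set, then resume.
    # The Volume class and its mechanics are assumptions, not IBM's code.
    import time

    class Volume:
        """Toy volume: a dict of blocks plus a quiesce flag."""
        def __init__(self, name, blocks):
            self.name, self.blocks = name, blocks
            self.frozen = False

        def freeze(self):
            self.frozen = True        # a real engine would hold new writes here

        def thaw(self):
            self.frozen = False

        def snapshot(self):
            return dict(self.blocks)  # point-in-time image of the blocks

    def consistent_copy(volumes):
        # Freeze every volume first so all images share one point in time,
        # e.g. database data files and logs spread across several volumes.
        for v in volumes:
            v.freeze()
        try:
            stamp = time.time()
            return {v.name: (stamp, v.snapshot()) for v in volumes}
        finally:
            for v in volumes:
                v.thaw()

    db = [Volume("db_data", {0: "rows"}), Volume("db_logs", {0: "redo"})]
    images = consistent_copy(db)
    print(sorted(images))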

EMC Clariion CX400

EMC's competing array was the Clariion CX400, equipped with two full, 15-drive disk enclosures (one mounting Seagate 73GB FC drives, the other holding capacious Maxtor 250GB ATA drives) for a total capacity close to 5TB. In addition, the EMC equipment included a CX600, the higher end of the Clariion line, which was used as a "partner" array in some snapshot and copy operations. EMC had preinstalled NaviSphere plus other management applications on both the CX400 and the CX600.

With its two self-balancing, mutually-protecting disk controllers and full redundancy across its components, there's much to like about the Clariion CX400. In case of power failure, the data contained in the controller's 2GB cache is preserved by a UPS system and saved to the array's disks. You can stack up to four disk enclosures in its elegant rack for a total of 60 drives and still have room for expansion. In fact, the whole EMC equipment package, including the CX400 and the CX600, fit neatly in the same rack, with some space left over.

Further, the CX400 is the only array in our group that can mount ATA drives in addition to FC drives. The recently added ability to mount large ATA drives gives the CX400 exceptional capacity and flexibility at a moderate cost.

For example, you can dedicate an enclosure to the large 250GB Maxtor drives (which are less expensive than FC drives), creating a roomy storage pool for snapshots or volume copies for applications that don't require top-notch performance. Equally important, you can manage that large ATA repository using the same tools that control the array's FC storage (NaviSphere and its complementary applications), which simplifies management.

NaviSphere is one of the oldest storage-administration tools and has maintained its unique look and feel over the years and through many updates. Moreover, NaviSphere can control multiple Clariion arrays, such as the CX400 and the CX600 present on our test bed. In fact, I was able to log in to NaviSphere on either array and manage storage on both, which makes for very intuitive and flexible administration. NaviSphere does not, however, offer tools to control storage arrays from other vendors.

Although it does not have the most intuitive GUI in our group, NaviSphere is the single launch ramp for any management app you can purchase for the Clariion line. And you have plenty to choose from, including SnapView, for creating multiple, recurring point-in-time copies; MirrorView, for creating and maintaining up-to-date synchronous mirror images; and SAN Copy, for moving or copying entire volumes across the network.

The tight integration with the NaviSphere GUI is a significant benefit when using those applications. To define a mirror image with MirrorView, I simply selected the source LUN before launching the application wizard, which is invaluable in minimizing trivial errors when your SAN gets crowded with volumes. Moreover, each application is well designed to solve practical, daily data-management problems.

For example, using MirrorView, I was able to detach an image from its source LUN or to suspend the replication while copying the mirror image elsewhere, both very useful options if you want to freeze your data at a certain stage, say, at book closing. Appropriately, when I suspended mirroring, MirrorView kept track of the changes to the source LUN and immediately resynched the target volume when it became available again. Expect a similar behavior every time MirrorView is unable to communicate with the LUN.
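That behavior amounts to keeping a log of the regions changed while the mirror is unreachable and copying only those on reconnect. The sketch below models the general technique with a dirty-block set; it is an illustration, not EMC's MirrorView internals.

    # Illustrative change tracking for a suspended mirror, in the spirit of
    # the behavior described above. This models the general technique
    # (a dirty-block log), not EMC's actual implementation.
    class Mirror:
        def __init__(self, source, target):
            self.source, self.target = source, target
            self.suspended = False
            self.dirty = set()          # blocks changed while suspended

        def write(self, block, data):
            self.source[block] = data
            if self.suspended:
                self.dirty.add(block)   # remember the change for later
            else:
                self.target[block] = data

        def suspend(self):
            self.suspended = True

        def resume(self):
            # Copy only the blocks that changed, instead of the whole LUN.
            for block in sorted(self.dirty):
                self.target[block] = self.source[block]
            self.dirty.clear()
            self.suspended = False

    m = Mirror(source={0: "a", 1: "b"}, target={0: "a", 1: "b"})
    m.suspend()
    m.write(1, "b2")        # tracked while the target is unreachable
    m.resume()              # resync touches only block 1
    print(m.target)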

HP StorageWorks EVA 3000

HP entered the fray with its HP StorageWorks EVA 3000 array governing two disk enclosures, both mounting 146GB Seagate FC drives. Also part of HP's equipment, but not included in the evaluation, were its entry-level MSA1000 array with 14 Ultra320 146GB drives and the MSL6030 LTO Gen2 tape library. The management software included Command View EVA and Business Copy, both installed on the OpenView Storage Management Appliance.

When it comes to resilience, performance, and easy administration, you couldn't ask for more than the EVA 3000. However, the EVA line also includes larger models, some offering switched connections on the array backplane for improved performance and diagnostics. Moreover, dual controllers watching over traffic; hot-swappable, fully redundant components; and 2GB of cache with UPS backing give peace of mind for business data stored on this array.

Managing the EVA 3000 requires a separate administrative device, the Storage Management Appliance. To manage the EVA 3000, I first had to log in to the Storage Management Appliance and from there open Command View EVA, the Web interface that communicates with the array's controllers.

Having what is essentially a server dedicated to administrative tasks adds some cost to the HP solution but allows administrators to control up to 16 EVA arrays. Moreover, with the proper software, the HP Storage Management Appliance can administer storage arrays from other vendors. Unfortunately, the EMC and IBM arrays in our group are not on the list of solutions supported by the appliance.

The lack of support for large ATA drives left HP with a noticeably smaller capacity (2.9TB) than the EMC solution. Paradoxically, because HP bases its software pricing on capacity ranges, crossing the 2TB threshold (together with the cost of the management appliance) contributed to making the EVA 3000 the most expensive solution in our group.

By contrast, its Command View EVA is the easiest to use and the most intuitive administration tool in our group. Defining a new LUN couldn't be simpler: I chose the LUN name, the size, the RAID level, and then the Disk Group from which to carve it. Typically, a Disk Group is a pool of disk drives of the same size, and built-in algorithms automatically choose where and how to provision space for a new LUN within the group. As a result, you can achieve more efficient use of the available storage and simplify administration, which should offset the larger initial investment to purchase the EVA 3000.
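One way to picture that provisioning logic: carve the new LUN into chunks and spread them round-robin over every drive in the group, so no single spindle becomes a hot spot. The sketch below does exactly that; the 8MB chunk size and drive names are assumptions, not HP's actual algorithm.

    # Illustrative provisioning across a disk group, in the spirit of the
    # behavior described above: a new LUN's chunks are spread over every
    # drive in the group. Chunk size and names are assumptions, not HP's code.
    CHUNK_MB = 8

    def provision(lun_name, size_mb, disk_group):
        """Return a placement map: drive -> number of chunks for this LUN."""
        chunks = -(-size_mb // CHUNK_MB)             # round up
        placement = {drive: 0 for drive in disk_group}
        for i in range(chunks):
            drive = disk_group[i % len(disk_group)]  # round-robin over the group
            placement[drive] += 1
        return placement

    group = [f"disk{n:02d}_146GB" for n in range(8)]   # one group of same-size drives
    print(provision("exchange_store", 500, group))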

Business Copy is another example of HP's attention to easing storage management. As the name suggests, Business Copy is an optional tool for the management appliance that simplifies scripting activities such as copying a volume from one host to another, with the additional twist that a host agent can automatically launch an application. Working with the Business Copy wizards from the management console, I easily put together a procedure that created a new LUN for host B, copied a movie clip to it from a volume on host A, and launched the movie viewer on the target computer, all without touching either host.
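In spirit, such a job reduces to three scripted steps. The sketch below strings them together with hypothetical placeholder functions (none of them HP's actual Business Copy API) just to show the shape of the workflow.

    # Illustrative workflow in the spirit of the Business Copy job described
    # above. Every function here is a hypothetical placeholder, not HP's API.
    def create_lun(array, name, size_gb):
        print(f"[{array}] create LUN '{name}' ({size_gb}GB)")

    def copy_volume(src_host, src_path, dst_host, dst_lun):
        print(f"copy {src_host}:{src_path} -> {dst_host}:{dst_lun}")

    def run_on_host(host, command):
        # Stands in for the host agent that launches an application remotely.
        print(f"[{host}] run: {command}")

    def demo_job():
        create_lun("EVA3000", "clip_lun", size_gb=10)
        copy_volume("host_a", "/media/clip.mpg", "host_b", "clip_lun")
        run_on_host("host_b", "player /mnt/clip_lun/clip.mpg")

    demo_job()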

At the end of my two-week experience with these midtier SANs, I have no doubt that each solution will respond to the many challenges of a business environment as well as it did to my tests. Nevertheless, there is growing demand for more capacity in less space, more flexible integration, and less labor-intensive management. Those are the requirements that future midtier SANs will have to address. I am confident that they will.
