As SANs expand, it's not uncommon for a significant portion of switch ports to be consumed by connections to other switches. This is due not only to the relatively low port counts of typical FC (Fibre Channel) switches, but also to the need for higher-speed interconnects, which are often achieved by bonding multiple ports together.
There is a new option to eliminate switch-port congestion and SAN fumbling, however. McData's just-released Intrepid i10K Director, an enterprise-class FC switch, has the capacity to consolidate multiple SANs. It also includes features taken for granted in the Ethernet switch world, such as the creation of VLAN equivalents by partitioning the SAN fabric into multiple fabrics, as well as features for remote sites that make data distribution and replication easier, faster, and more secure.
The i10K's uniqueness, though, lies in enabling gradual moves from SAN islands to a centralized SAN architecture. For instance, you could start with one connected SAN, create a second partition, add a second SAN, and then eventually merge the two partitions, without ever disrupting access to data.
The steep price is sure to give pause, but if you do the math, it works out to about the same per-port cost as an eight-port FC switch -- but you also get the i10K's enterprise features and expandability. Based on the scalability and solid performance I saw during testing, the i10K is worth it.
Expanding on the basics
The i10K is a 14U monster with all the features you'd expect in an enterprise-class network switch: redundant power supplies, fans, control and switching modules, hot-swap capability. It delivers as many as 256 nonblocking ports of 1/2/4Gbps FC -- more than double the next best switch's total of 112 nonblocking 1/2Gbps ports -- or as many as 32 ports of 10Gbps FC, and supports FICON (Fibre Connection) and iSCSI (Internet SCSI). Code upgrades to the switch take place without interrupting data flow and include backups of the old versions, with easy reversion if necessary.
There are two control processors on the front and eight slots for blades (Line Modules), plus four slots on the back for switching modules -- the switching fabric can be expanded along with the number of ports. Each line module supports eight paddles, which can be eight-port 1/2Gbps paddles, eight-port 1/2/4Gbps paddles (scheduled for release in Q1), or two-port 10Gbps paddles.
The 1/2/4Gbps and 10Gbps paddles can coexist on the same line card, so the switch expands in relatively small increments (eight ports at a time), all the way to 256 ports.
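The growth path can be sketched with a toy tally. The paddle names and port counts come from the article; treating the chassis as a flat pool of paddle slots (rather than modeling per-line-module limits) is a simplification for illustration:

```python
# Illustrative sketch of i10K port scaling; the chassis-as-flat-pool model
# is an assumption, not McData's published configuration rules.
PORTS_PER_PADDLE = {"1/2Gbps": 8, "1/2/4Gbps": 8, "10Gbps": 2}

def total_ports(paddles):
    """paddles: list of paddle-type strings installed across the chassis."""
    return sum(PORTS_PER_PADDLE[p] for p in paddles)

print(total_ports(["1/2Gbps"]))          # 8   -- minimal entry configuration
print(total_ports(["1/2/4Gbps"] * 32))   # 256 -- fully populated with FC paddles
print(total_ports(["10Gbps"] * 16))      # 32  -- the all-10Gbps maximum
```

The point the tally makes is that every step between the 8-port entry point and the 256-port maximum is a small one: an eight-port (or two-port 10Gbps) paddle at a time.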
The i10K also supports a very large number of BB (buffer-to-buffer) credits, the flow-control tokens that determine the maximum distance a link can span at full throughput. With 1,373 credits per processor (two processors to a blade), each blade can support two 190km connections at 10Gbps, two 1,100km connections at 2Gbps, or two 2,200km connections at 1Gbps, over a dark fiber connection.
This will be a big benefit for users who need to set up replication over long distances for disaster recovery or Sarbanes-Oxley compliance. In terms of long-distance connections, the i10K is better than most other switches by one or two orders of magnitude.
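Those distance figures can be sanity-checked with a back-of-the-envelope calculation. The formula, frame size, and fiber speed below are rule-of-thumb assumptions, not McData's published method: one credit keeps one full-size frame in flight, and a link is distance-limited once the credits no longer cover the round trip.

```python
# Rough BB-credit distance check; constants are textbook approximations.
FIBER_KM_PER_S = 200_000   # signal speed in glass, about 2/3 the speed of light
FRAME_BITS = 2_148 * 10    # max FC frame ~2,148 bytes; 8b/10b coding = 10 bits/byte

def max_distance_km(credits, line_rate_gbps):
    """Distance at which `credits` in-flight frames exactly fill the round trip."""
    frame_seconds = FRAME_BITS / (line_rate_gbps * 1e9)  # serialization time per frame
    return credits * frame_seconds * FIBER_KM_PER_S / 2  # halved for the round trip

for gbps in (1, 2, 10):
    print(f"{gbps:>2} Gbps: ~{max_distance_km(1373, gbps):,.0f} km")
```

The results land somewhat above the quoted 2,200km/1,100km/190km figures, which is what you'd expect: a shipping switch derates the theoretical maximum to allow for protocol overhead and smaller-than-maximum frames.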
Management made easy
Given the i10K's size and capacity, I was unable to test it in my lab, so I tested the switch at McData's Santa Clara, California, facility. I set it up, created partitions, tested fail-over, and observed data flow during various changes. I was, however, unable to test the nonblocking capacity of the switch. Nor did I connect 2,000km of fiber to the system for long-distance testing; the distance capacity is largely a function of available BB credits, and it has been verified by McData.
You can manage the i10K with the included EFCM (Enterprise Fabric Connectivity Manager) application or with McData's SANavigator. EFCM manages the rest of McData's product line, as well as other switches, including Brocade and Cisco FC switches. The management interface is clear, with all necessary commands available through both the GUI and the CLI, and it can send alert notifications by e-mail.
The i10K's VLAN-like partitioning really packs some punch. McData calls its VLAN equivalents FlexPars, which are physically separate segments within the switch. Currently, a FlexPar is one or more blades, but upcoming releases will allow FlexPars at the port level. Each FlexPar has a separate management IP address, so administrators can be given control of their SANs without any chance of disrupting someone else's network.
Each partition is managed through a separate instance of the management application. This arrangement also serves security: without the proper login, a user cannot access other partitions, separating the box administratively as well as logically.
The FlexPar segmenting also means you can restart the switching software by partition: If a device on a SAN is causing problems, it will only affect its own associated partition, and that partition can be cleared without affecting the rest of the switch.
During testing, I created a new partition and then moved a line card and the device attached to it from one partition to the other; data continued to move without a hiccup. I also set up a fail-over from the primary control processor to the secondary one. Data was accessed continuously with no detectable interruption, thanks to the partition divisions.
The same process is used to upgrade firmware -- the secondary controller is upgraded, fail-over is initiated, and the previously primary controller is then upgraded.
The i10K switch is not inexpensive, but the ability to start with as few as eight ports and expand to an unrivaled 256 nonblocking ports, the 10Gbps support, the partitioning, and the other enterprise features justify the i10K's price.
With the current drive to expand existing SANs and better manage them, connecting multiple SAN islands easily and partitioning consolidated SANs logically will be a boon for large organizations with fragmented storage. The US$500,000-plus price tag is not for everyone, but the i10K is a good investment for organizations hoping to create true enterprise-class storage architectures.