BOSTON (05/23/2000) - Compaq Computer Corp., IBM Corp. and other leading vendors next month plan to finalize details for an advanced common I/O technology that will replace today's shared bus devices.
Details about the technology, dubbed InfiniBand, have been sketchy. But last week, a member of the group briefed Network World on how InfiniBand will evolve. For example, in early 2001 users can expect to see new switches and routers that can help them migrate to the new high-speed technology. Also, InfiniBand technology will be used to connect servers to remote storage and network devices, as well as to other servers via high-speed pipes.
After that, servers may be introduced that incorporate switching features - eliminating the need for external switches, says Tom Bradicich, co-chair of the InfiniBand Trade Association and IBM's director of Netfinity architecture and technology.
With InfiniBand devices in place, users could also run storage traffic out from their servers directly to a disk array without using a local hard drive on the server, Bradicich says.
The InfiniBand Trade Association, founded last fall, is run by seven of the industry's biggest server players - IBM, Compaq, Hewlett-Packard Co., Dell Computer Corp., Sun Microsystems Inc., Microsoft Corp. and Intel Corp. Also connected to the association are approximately 115 other companies, including 3Com Corp., Cisco Systems Inc., Nortel Networks Corp. and EMC Corp., which plan to offer storage and communications gear based on the InfiniBand specification. Member companies are encouraged to contribute to the spec.
A draft specification was made available on the member Web site in January. The group's goal is to have a complete version ready for review this quarter with a 1.0 version release by mid-2000. Products will follow.
There are already start-ups, such as California silicon maker Mellanox Technologies, gearing up to make InfiniBand components.
The major driver for the new high-speed I/O is the speed limitation of the PCI bus in current Intel servers, observers say. The most advanced bus-based architecture, PCI-X, due out this fall, will run at a maximum of roughly 1G byte/sec - and that bandwidth is shared among every device on the bus.
By 2001, Intel servers will have a switched-fabric InfiniBand backplane capable of handling 500M bytes/sec to 6G bytes/sec per link, depending on link width, with each wire signaling at 2.5G bits/sec.
The InfiniBand I/O will also allow IS staff to assemble Intel-based networks containing up to 64,000 addressable networked storage devices and servers. This is something virtually impossible today because of bus bandwidth limitations, says Bradicich.
In the next two to three years, the maturing InfiniBand architecture could let vendors shrink external communications or storage devices into small components incorporated directly into the server's I/O system.
For instance, Bradicich says IBM is designing a Host Channel Adapter (HCA) I/O component for the Netfinity server's chipset. The HCA could wrap data into Ethernet frames inside the server, and not have to use an external switch or adapter to do the framing. The idea would be to speed traffic and eliminate potential points of failure.
InfiniBand will yield desirable performance, scalability and clustering benefits to users, says David Pendery, an analyst at Illuminata, a consultancy in Nashua, N.H. He says InfiniBand is an excellent way to have clustered systems share data and storage while functioning as a single system.
Pendery notes, however, that the technology is still a long way from reality, and that storage and I/O technologies such as PCI and SCSI will not be supplanted by InfiniBand for years. The major challenge, he says, will be optimizing server operating systems and applications to properly exploit InfiniBand.
InfiniBand sounds promising, but it must be reasonably priced and have the bugs worked out, says Ken Zorniak, vice president of Frantic Films, a Winnipeg, Manitoba, maker of visual effects for movies and television. The firm's network relies on an 8-way Netfinity server running Windows NT.
Zorniak believes having a high-speed network that consolidates storage with communications would be great for his business. He says moving huge video files from the storage device to the server to the end user and then back again is a time-consuming process, and InfiniBand sounds like a way to avoid that.