Vendors and industry observers Monday had a chance to debate the prospects for InfiniBand, the upcoming I/O architecture that promises to alleviate many of the bottlenecks now clogging servers, storage, and other network devices.
The consensus of a roundtable here sponsored by VIEO Inc., a company that makes InfiniBand software management technology, was that users won't begin to see InfiniBand products before year-end, but that the wait may well be worth it. InfiniBand's architecture will let users accommodate the many types of high-bandwidth traffic that will increasingly be moving through their data centers, including streaming video, voice, and audio files.
The panel included VIEO Chief Technology Officer (CTO) James Mott; Chris Gahagan, vice president of recovery and storage at BMC Software Inc.; and Michael Krause, senior interconnect architect with Hewlett-Packard Co. Other panelists included Duncan McCallum, a partner at venture capital firm OneLiberty Ventures; Jim Pappas, director of initiative marketing at Intel Corp.; and Mitch Schults, vice president of business development at ExaNet Inc., a storage company with InfiniBand-enabled products in the works. Vernon Turner, an analyst with International Data Corp. (IDC), served as the panel's moderator and also gave a presentation on the potential for the InfiniBand market.
IDC's Turner said the research firm predicts that by 2004, 80 percent of all server shipments will be InfiniBand-enabled. Other IDC research suggests that revenue from InfiniBand devices will reach about US$2 billion by 2004. That means that of the approximately 6 million servers shipped in 2004, nearly 5 million could be InfiniBand-enabled.
For customers, InfiniBand products should ease the bottlenecks that even the latest PCI-X-based servers will have. That's because InfiniBand uses a switched-fabric backplane that can move data at up to 2.5G bit/sec. Current architectures support 1G bit/sec.
One obvious place for early adoption of the technology would be data centers with multiple storage controllers that require lots of work to maintain, the panel said. InfiniBand could help reduce that complexity because it provides a single interconnect for storage devices on the network.
Pappas noted that while existing interfaces require separate interconnections for network, storage, and server devices, InfiniBand provides a single, uniform method of connecting them at very high speeds.
"PCI was fine when you had a few servers (doing a few jobs), but now you have companies with thousands of servers, and by bringing I/O out to a box you can have all devices hooked up at the edge - making it easier to add more servers," Pappas said.
McCallum said InfiniBand will do for servers what storage area networks did for storage: make them easier to manage and scale to higher performance levels. "You can add processing and I/O when you want to add it - so when your applications grow, you won't have to buy a new (high-end server)." Also, he added, it will be easier to add incrementally to different portions of the network.
Turner said that despite InfiniBand's obvious advantages, vendors face challenges before it is adopted in the end-user community. One of those challenges is making sure devices will work before throwing them over the fence to users.