IBM Corp. on Monday planned to release a distributed file-system technology that it said will provide storage management capabilities across storage-area networks (SAN) built from disk arrays and servers from multiple vendors.
The TotalStorage SAN File System offering is based on IBM's Storage Tank virtualization software, and it works by creating a file-sharing protocol that lets servers use a distributed storage network as if it were a local file system.
"In theory, all the big servers in a SAN will be able to concurrently access the same data," said Steve Duplessie, an analyst at The Enterprise Storage Group Inc. in Milford, Mass.
But for now, the SAN File System supports only IBM's own Enterprise Storage Server disk arrays plus servers running its AIX operating system and Windows 2000. IBM is trying to persuade other vendors to link their storage devices to the technology under a plan announced last spring.
IBM also plans to release versions of the SAN File System bundled with Linux-based versions of its xSeries servers "shortly," said Jai Menon, IBM's chief technologist for storage systems architecture and design. In the spring, IBM had said those bundles would be ready to ship in December.
Keith Stevens, a systems administrator at Johns Hopkins University's Center for Cardiovascular Bioinformatics and Modeling in Baltimore, said he's waiting for the Linux versions to become available.
Stevens currently uses the Network File System (NFS) protocol to share data among Windows, Linux and AIX servers that are used to crunch data for cardiac research. But he said NFS is too slow.
In addition to Storage Tank, the SAN File System includes metadata servers and software agents that are installed on each file server on a SAN. The metadata servers keep track of information such as the physical location of data, file sizes and end-user access permissions.
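The division of labor described above can be illustrated with a short sketch: agents ask a metadata server where a file lives and whether the user may open it, then read the blocks directly over the SAN. This is a hypothetical illustration, not IBM's code; all class and field names (`FileRecord`, `MetadataServer`, `locate`) are invented for the example.

```python
# Hypothetical sketch of the metadata-server role: it tracks WHERE
# files live and who may access them, but never handles file data.
from dataclasses import dataclass

@dataclass
class FileRecord:
    path: str          # file name as seen by the servers
    size: int          # file size in bytes
    device: str        # which disk array holds the data
    extent: tuple      # (start block, block count) on that array
    owners: set        # end users permitted to access the file

class MetadataServer:
    """Answers location/permission queries from per-host agents."""
    def __init__(self):
        self._records = {}

    def register(self, record: FileRecord):
        self._records[record.path] = record

    def locate(self, path: str, user: str) -> tuple:
        rec = self._records[path]
        if user not in rec.owners:
            raise PermissionError(f"{user} may not open {path}")
        # The agent then reads these blocks itself, straight off the SAN.
        return rec.device, rec.extent

mds = MetadataServer()
mds.register(FileRecord("/scan/ecg001.dat", 4096, "array-A", (100, 8), {"alice"}))
print(mds.locate("/scan/ecg001.dat", "alice"))  # → ('array-A', (100, 8))
```

The point of the split is that bulk data never flows through the metadata servers; they serve only small lookup answers, which is why many file servers can share the same disks concurrently.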
Francois Fluckiger, deputy head of the OpenLab project at CERN, a nuclear research laboratory in Geneva, is testing the SAN File System with some of his servers. CERN plans to give researchers online access to huge amounts of data from an atomic accelerator that smashes nuclear particles together. To do that, it needs a distributed file system, Fluckiger said.
"The storage issue is one of the most stringent requirements of all," he noted. "We're planning on storing 15 petabytes of data per year."