Hewlett-Packard this week announced a breakthrough file sharing product that uses new Linux clustering technology to deliver up to 100 times the bandwidth of typical clusters. The new product, HP StorageWorks Scalable File Share (HP SFS), is a self-contained file server that scales bandwidth by distributing files in parallel across clusters of industry-standard server and storage components.
The product is the second based on HP's "storage grid" architecture and the first commercial product to use a new Linux clustering technology, called Lustre, which was developed through collaboration among HP, the U.S. Department of Energy (DoE) and Cluster File Systems.
Targeted initially for high-performance computing (HPC), HP SFS allows applications to see a single file system image regardless of the number of servers or storage devices connected to it. Built using industry-standard HP ProLiant servers and HP StorageWorks disk arrays, HP SFS provides protection from hardware failures through resilient, redundant hardware and built-in fail-over and recovery.
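The single-file-system-image idea above can be illustrated with a short sketch. This is not Lustre's actual implementation (real stripe units are on the order of a megabyte, and Lustre separates metadata servers from object storage targets); it is a minimal, assumed round-robin striping model showing how one logical file is spread across several storage targets so reads and writes can proceed in parallel, while clients still see a single byte stream:

```python
# Illustrative sketch only: round-robin file striping across storage targets.
# Stripe size and target count are made-up toy values, not Lustre defaults.

STRIPE_SIZE = 4  # bytes per stripe unit; real systems use ~1 MB


def stripe(data: bytes, num_targets: int, stripe_size: int = STRIPE_SIZE):
    """Split a file's bytes round-robin across num_targets storage targets."""
    targets = [bytearray() for _ in range(num_targets)]
    for i in range(0, len(data), stripe_size):
        targets[(i // stripe_size) % num_targets] += data[i:i + stripe_size]
    return [bytes(t) for t in targets]


def reassemble(targets, total_len, stripe_size: int = STRIPE_SIZE) -> bytes:
    """Clients see one file image: interleave the stripes back in order."""
    out = bytearray()
    offsets = [0] * len(targets)
    i = 0
    while len(out) < total_len:
        t = i % len(targets)
        out += targets[t][offsets[t]:offsets[t] + stripe_size]
        offsets[t] += stripe_size
        i += 1
    return bytes(out)


data = b"abcdefghijklmnopqrstuvwx"
parts = stripe(data, 3)                    # three targets serve the file
assert reassemble(parts, len(data)) == data  # single logical file restored
```

Because each target holds only every third stripe, three targets can serve their portions concurrently, which is the mechanism behind the aggregate-bandwidth claims in the article.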
Tuned for ease of use and manageability, the system can span dozens to thousands of clustered Linux servers, making it easier to run distributed applications for challenging science and engineering needs.
The Lustre file system technology used in HP SFS is already running in some of the world's most demanding HPC environments, such as the one found at the DoE Pacific Northwest National Laboratory (PNNL). It helps to eliminate input/output (I/O) bandwidth bottlenecks and saves users hours of time copying files across hundreds or thousands of individual, distributed file systems.
The DoE selected HP to provide program management, development, test engineering, hardware and services to support the Lustre project. HP is the only major vendor to offer a supported and case-hardened Lustre-based file share product.
"HP's Lustre implementation on our supercomputer allows us to achieve faster, more accurate analysis," said Scott Studham, associate director for Advanced Computing, PNNL. "This translates into faster time-to-solution and better science for our researchers, who are addressing complex problems in energy, national security, the environment and life sciences."
Lustre technology has been in use at PNNL for more than a year on one of the 10 largest Linux clusters in the world. PNNL's HP Linux supercluster, with more than 1,800 Intel Itanium 2 processors, is rated at more than 11 teraflops (one teraflop equals one trillion floating point operations per second) and sustains more than 3.2 gigabytes per second of bandwidth running production loads on a single 53-terabyte Lustre-based file share. Individual Linux clients are able to write data to the parallel Lustre servers at more than 650 megabytes per second. The system is designed to make the enormous PNNL cluster centralized, easy to use and manage, and simple to expand.
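The scaling behind those figures can be checked with back-of-envelope arithmetic: in a parallel file share, aggregate bandwidth grows roughly with the number of storage servers. The server count, per-server rate, and efficiency factor below are assumed illustrative values, not figures from HP or PNNL:

```python
# Back-of-envelope sketch (assumed values, not HP/PNNL specifications):
# aggregate throughput of a parallel file share scales with server count,
# discounted by a parallel-overhead efficiency factor.

def aggregate_bandwidth_mb_s(servers: int, per_server_mb_s: float,
                             efficiency: float = 0.8) -> float:
    """Idealized aggregate throughput across parallel storage servers."""
    return servers * per_server_mb_s * efficiency


# e.g., 20 hypothetical servers at 200 MB/s each with 80% efficiency:
print(aggregate_bandwidth_mb_s(20, 200))  # 3200.0 MB/s, i.e. ~3.2 GB/s
```

The point of the model is the linear term: doubling the number of storage servers roughly doubles the sustainable aggregate rate, which is why a striped share can sustain gigabytes per second while any single client tops out in the hundreds of megabytes per second.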
Studham also noted that Lustre scales the high-bandwidth I/O needed to match the large data files produced and consumed by the laboratory's scalable simulations. HP has worked with PNNL to help ensure Lustre is reliable, stable and cost-effective. "We are confident in the Lustre file system's ability to prevent loss of data," said Studham.
The HP SFS servers are factory assembled, pre-configured, pre-cabled, pre-tested in clustered I/O racks, and ready to run the Lustre software with the HP SFS added-value installation, maintenance, monitoring and administration tools.