Big Linux systems plot climate change, simulate nuclear explosions, and secure bragging rights. But IT customers are starting to find that high-performance computing technologies make a difference in the real world, from clustered processing to data center greening.
High Performance Computing (HPC) has long been the watchword of supercomputing centers, where major government and university research institutions, such as the Lawrence Livermore and Sandia national laboratories, use Linux to process huge data sets. Now, HPC spinoff technology is arriving in the enterprise and SMB Linux markets.
Just as the US space program afforded such innovations as scratch-resistant lenses, all-weather radial tires for cars, and equipment for hospitals to monitor patients' vital signs, so has HPC been a source of innovation for smaller-scale architectures. The technology used to run HPC is spilling over into mainstream IT, reaching organizations operating at a fraction of the scale of Livermore or Sandia. The enterprise and SMB markets now have an abundance of resources to draw from traditional HPC computing.
"Today, many more organizations are able to take advantage of High Performance Computing, due to the ready availability of inexpensive compute clusters powered by Linux running on off-the-shelf x86 hardware, as opposed to the proprietary hardware and software of yesterday's supercomputers," says Sam Charrington, Vice President of Product Management and Marketing for Appistry. The accessibility and availability of HPC technology is a big driver for scaled down markets, and so is another consideration: physical environmental controls.
Bill Thirsk, CIO of Marist College in Poughkeepsie, New York, believes the HPC innovations now available to mainstream SMB and enterprise IT environments are many: physical environmental controls, more efficient processors, scalability with lower power consumption, and less need for network fabric.
An emerging concern for today's CIO is the trend toward a "green" data center, and innovations developed for HPC are aiding this direction.
"The 'greening' of the datacenter was once a good idea to save a little money," says Thirsk. "It is now an imperative. HPC cooling and environment control technologies are leading the way."
Thirsk points out that with a server farm or blade center, many independent power supplies are running equipment that is not running at capacity, and they are all generating heat. "The supercomputers and mainframes have evolved to include integrated cooling, smaller footprints, more efficient power allocation for CPUs, and consolidated network connections, making the need for multiple network wires running back to multiple switches and other network gear obsolete," he explains. "These technologies are finding their way to the smaller capacity servers, network gear, and most notably self-cooling network racks."
Justin King, systems administrator for the Human Neuroimaging Laboratory in the Department of Neuroscience at Houston, Texas-based Baylor College of Medicine, concurs with Thirsk that advances in power efficiency developed in large-scale HPC environments are available to mainstream IT computing environments. "Machines are insanely powerful now and customers are demanding improvements in energy consumption," says King. "We're currently deploying a cluster which has 16 cores and 32GB RAM in 1U. In 1.7 inches of rack height you have as much power now as you did in 5-8U just 4 years ago. With the density now, everyone is noticing lots of spare CPU cycles; hence the move towards virtualization. With that said, I think we are starting to see the move towards utility-on-demand and cloud computing, and I believe that trend will continue. Given an abstract problem and the ability to scale out as needed, I think power and cooling will present less of an issue, as adding compute power to an application will be as simple as adding a few more servers to the cloud."