Supercomputing goes global

Size matters in supercomputers because size translates into speed. And supercomputers are all about speed. The quest for the fastest computer to discover new drugs, crack ciphertext or model global weather and nuclear reactions has set a lot of records in a short time.

Supercomputers are defined loosely by IDC as systems that cost more than US$1 million and are used in very-large-scale numerical and data-intensive applications. Today, their power is measured in trillions of floating-point operations per second, or TFLOPS.

The current world record for computing speed is 70.72 TFLOPS, posted in November 2004 by IBM's BlueGene/L system, which is destined for the U.S. Department of Energy's Lawrence Livermore National Laboratory. But supercomputers run as much on the testosterone of competition as on DC power, so the latest performance benchmark isn't likely to last very long.

Claiming bragging rights as the world's fastest computer has been a 20-year game of technical leapfrog, involving almost as many companies as have been delisted by Nasdaq this year. The contest spans the globe. There's considerable national pride invested in the quest to build a faster machine to discover that next subatomic particle lurking just beyond the bandwidth of today's champ.

An architectural shift took place in supercomputing in the 1990s, and that shift was the background for a legendary wager. Gordon Bell, principal designer at the venerable and defunct Digital Equipment, bet Danny Hillis that the world's fastest machine at the end of 1995 would be a supercomputer with fewer than 100 processors. Bell was betting against the inexorable march of technology, saying that the bugs could not be worked out of massively parallel machines before the deadline. Hillis, a researcher in MIT's artificial intelligence lab and a founder of gone-but-not-forgotten Thinking Machines Corp., was an early proponent of massively parallel computing. Smart money backed Hillis.

Hillis lost the bet. He was slightly ahead of his time: massive parallelism turned out to be more of a software problem than a hardware one, and software development rarely keeps pace with hardware breakthroughs.

Back then, supercomputers were measured in millions of FLOPS. Since then, even supercomputers with performance in the billions of FLOPS have been relegated to the dustbin of computing history, alongside Digital and Thinking Machines. The new IBM BlueGene/L world champ has 16,384 dual-core processor chips grouped in 16 clusters and linked by five internal communications networks.
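
As a rough sanity check on those numbers, the sketch below multiplies chip count by clock rate and by floating-point operations per cycle; the 700 MHz clock and four operations per cycle per core are IBM's published BlueGene/L figures, used here only as illustrative assumptions, and the gap between the resulting theoretical peak and the 70.72 TFLOPS benchmark is ordinary real-world Linpack efficiency.

    # Rough peak-performance arithmetic for the BlueGene/L configuration
    # described above. Clock rate and flops-per-cycle are published
    # BlueGene/L figures, used here as illustrative assumptions.
    chips = 16_384            # dual-core PowerPC 440 chips
    cores_per_chip = 2
    clock_hz = 700e6          # 700 MHz
    flops_per_cycle = 4       # two FPUs per core, fused multiply-add

    peak_flops = chips * cores_per_chip * clock_hz * flops_per_cycle
    print(f"Theoretical peak: {peak_flops / 1e12:.2f} TFLOPS")   # about 91.75
    print(f"Measured Linpack: 70.72 TFLOPS ({70.72e12 / peak_flops:.0%} of peak)")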

The evolution of supercomputers is like that of factory power in the Industrial Age. The first large factories were served by big, expensive, centralized power plants driving overhead belts and pulleys that powered every device in the factory -- a lot of hardware. Early supercomputer architects likewise struggled to advance their machines by coaxing more speed out of a few big, expensive, specialized processors.

This centralized factory power architecture gradually gave way to more and more decentralized power located closer to the users; steam gave way to electricity. The supercomputers in use when Bell and Hillis were matching wits were a milestone on that evolutionary path to distributed, or massively parallel, hardware with a different type of fuel.

Ultimately, little electric motors were powering factory devices in the hands of each and every worker, with thousands of power tools distributed along the production path. The historic wager between Bell and Hillis could be made again today for the year 2010. The future of supercomputing lies in ever-greater processor counts. The world's FLOPS champion at the end of the decade could well be using more than 1 million processors.

The world's most powerful supercomputer likely will evolve into a grid architecture of loosely coupled systems harnessed logically to a single task across a global network. A grid holds the most promise for delivering the biggest and baddest theoretical supercomputing architecture imaginable, a virtual multiple-instruction/multiple-data, or MIMD, global supercomputer.
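
To make the idea concrete, here is a toy sketch of that kind of coordination: one logical task split into independent work units, handed out to whichever loosely coupled node asks next, and merged as the partial results come back. Everything in it (the work units, the stand-in "node" function) is invented for illustration; a real grid would do the hand-off over a wide-area network through far more elaborate middleware.

    # Toy grid coordinator: one logical task, many independent work units,
    # partial results merged as they arrive. Plain function calls stand in
    # for what a real grid would do over a wide-area network.
    from queue import Queue

    def make_work_units(data, chunk):
        """Split one big task into independent pieces."""
        return [data[i:i + chunk] for i in range(0, len(data), chunk)]

    def node_compute(unit):
        """What each remote node would run on its own piece (MIMD-style)."""
        return sum(x * x for x in unit)

    def run_grid(data, chunk=1000):
        pending = Queue()
        for unit in make_work_units(data, chunk):
            pending.put(unit)
        total = 0
        while not pending.empty():
            unit = pending.get()          # a node asks for the next unit
            total += node_compute(unit)   # ...and its partial result is merged
        return total

    print(run_grid(list(range(10_000))))  # same answer as the undivided task

The queue-and-merge logic is trivial on one machine; stretching it across thousands of unreliable, far-flung machines is where the hard software work lies.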

Grid architectures rely more on specialized software than on fast hardware, and they're attracting lots of research. Users in the interconnected world of 2010 will be able to make a Faustian bargain, joining a global supercomputing grid to sell their unused compute cycles to the highest bidder. Imagine American teens selling FLOPS from video game consoles or MP3 players to Asian weapons designers, or vice versa. It's like taking electric current from a windmill and making your power meter go backward. Anybody care to make a wager?
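
Anybody willing to take it might start with the seller's side of the bargain -- a sketch, assuming a purely hypothetical compute exchange that hands out work units over HTTP. The URL, the /work and /result endpoints, the payload fields and the crude idleness check are all invented for illustration.

    # Hypothetical cycle-selling client: when the machine looks idle, fetch
    # a work unit from a made-up compute exchange, crunch it locally and
    # send the result back. Endpoints and fields are illustrative only.
    import json
    import os
    import time
    import urllib.request

    EXCHANGE = "http://compute-exchange.example.org"   # hypothetical

    def machine_is_idle():
        return os.getloadavg()[0] < 0.5     # crude one-minute load check (Unix)

    def fetch_unit():
        with urllib.request.urlopen(f"{EXCHANGE}/work") as resp:
            return json.load(resp)          # e.g. {"id": 42, "numbers": [...]}

    def report(unit_id, result):
        body = json.dumps({"id": unit_id, "result": result}).encode()
        req = urllib.request.Request(f"{EXCHANGE}/result", data=body,
                                     headers={"Content-Type": "application/json"})
        urllib.request.urlopen(req)

    while True:
        if machine_is_idle():
            unit = fetch_unit()
            report(unit["id"], sum(x * x for x in unit["numbers"]))
        time.sleep(60)                      # check again in a minute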
