Supercomputing is all about pushing out the leading edge of computer speed and performance. The sports metaphors that arise as research sites compete to create the fastest supercomputer sometimes obscure the goal of crunching numbers that had previously been uncrunchable -- and thereby providing information that had previously been inaccessible.
Supercomputers have been used for weather forecasting, fluid dynamics (such as modeling air flow around airplanes or automobiles) and simulations of nuclear explosions -- applications with vast numbers of variables and equations that have to be solved or integrated numerically through an almost incomprehensible number of steps, or probabilistically by Monte Carlo sampling.
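To make the probabilistic approach concrete, here is a minimal sketch of Monte Carlo sampling: estimating a quantity (here, pi, via the area of a quarter circle) by drawing random points and counting hits. The function name and sample count are illustrative choices, not anything from a real supercomputer code, which would apply the same idea at vastly larger scale.

```python
import random

def estimate_pi(n_samples: int, seed: int = 42) -> float:
    """Estimate pi by sampling points in the unit square and
    counting how many fall inside the quarter circle of radius 1."""
    rng = random.Random(seed)  # fixed seed for reproducibility
    hits = 0
    for _ in range(n_samples):
        x, y = rng.random(), rng.random()
        if x * x + y * y <= 1.0:
            hits += 1
    # Quarter-circle area is pi/4, so scale the hit fraction by 4.
    return 4.0 * hits / n_samples

if __name__ == "__main__":
    print(estimate_pi(100_000))  # converges toward pi as n_samples grows
```

The estimate's error shrinks only as the square root of the sample count, which is precisely why such simulations consume supercomputer-scale resources.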
The first machine generally referred to as a supercomputer (though not officially designated as one), the IBM Naval Ordnance Research Calculator, was used at Columbia University from 1954 to 1963 to calculate missile trajectories. It predated microprocessors, had a clock cycle of 1 microsecond and could perform about 15,000 operations per second.
About half a century later, the latest entry to the world of supercomputers, IBM's Blue Gene/L at Lawrence Livermore National Laboratory, will have 131,072 microprocessors when fully assembled and was clocked at 135.3 trillion floating-point operations per second (TFLOPS) in March.
The computer at Livermore will be used for nuclear weapons simulations. Reflecting shifts in scientific focus, the Blue Gene family will also be used for biochemical applications, such as the intricate calculations needed to simulate the protein folding specified by genetic codes.
The early history of supercomputers is closely associated with Seymour Cray, who designed the first officially designated supercomputers for Control Data in the late 1960s. His first design, the CDC 6600, had a pipelined scalar architecture and used a streamlined instruction set his team developed, an approach now regarded as a forerunner of RISC. In this architecture, a single CPU overlaps the fetching, decoding and execution of instructions so that, ideally, one instruction completes each clock cycle.
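The throughput advantage of overlapping those stages can be sketched with a toy cycle count. This is an illustration of pipelining in general, not a model of the actual CDC 6600; the three stage names and the cycle formulas are assumptions for the sketch.

```python
# Toy model of a 3-stage instruction pipeline (fetch/decode/execute).

def pipelined_cycles(n_instructions: int, n_stages: int = 3) -> int:
    """Cycles to retire n instructions when stages overlap: the first
    instruction takes n_stages cycles to fill the pipeline, then one
    instruction completes every cycle thereafter."""
    if n_instructions == 0:
        return 0
    return n_stages + (n_instructions - 1)

def sequential_cycles(n_instructions: int, n_stages: int = 3) -> int:
    """Cycles with no overlap: every instruction pays the full latency."""
    return n_stages * n_instructions

if __name__ == "__main__":
    n = 1000
    print(pipelined_cycles(n))   # 1002 cycles with overlap
    print(sequential_cycles(n))  # 3000 cycles without
```

For long instruction streams the pipelined machine approaches one instruction per cycle regardless of how many stages each instruction passes through.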
Cray pushed the number-crunching speed available from the pipelined scalar architecture with the CDC 7600 before designing a four-processor machine, the CDC 8600. Multiple processors, however, raised operating system and software issues.
When Cray left CDC in 1972 to start his own company, Cray Research, he abandoned the multiprocessor architecture in favour of vector processing, a split that divides supercomputing camps to this day.
Cray Research pursued vector processing, in which the hardware is designed to unroll "for" or "do" loops, applying a single operation to whole arrays of operands. Using a CDC 6600, the European Centre for Medium-Range Weather Forecasts (ECMWF) needed 12 days to produce a 10-day forecast. But with one of Cray Research's first products, the Cray 1-A, the ECMWF could produce a 10-day forecast in five hours.
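The payoff of vector hardware can be sketched by counting instruction issues. In this toy model, a "vector instruction" applies one decoded operation to a whole block of operands, amortizing per-instruction overhead; the block length of 64 matches the Cray-1's vector registers, but the functions and the issue-counting are illustrative assumptions, not real hardware behavior.

```python
VECTOR_LENGTH = 64  # the Cray-1 held 64 elements per vector register

def scalar_add(a, b):
    """Scalar loop: one instruction issue per element pair."""
    issues = 0
    out = []
    for x, y in zip(a, b):
        out.append(x + y)
        issues += 1
    return out, issues

def vector_add(a, b):
    """Vector style: one instruction issue per 64-element block."""
    issues = 0
    out = []
    for i in range(0, len(a), VECTOR_LENGTH):
        out.extend(x + y for x, y in
                   zip(a[i:i + VECTOR_LENGTH], b[i:i + VECTOR_LENGTH]))
        issues += 1
    return out, issues

if __name__ == "__main__":
    a = list(range(4096))
    b = list(range(4096))
    _, scalar_issues = scalar_add(a, b)
    _, vector_issues = vector_add(a, b)
    print(scalar_issues, vector_issues)  # 4096 vs 64 issues
```

Both functions compute the same sums; the vector version simply issues 64 times fewer instructions, which is the overhead a real vector unit eliminates in hardware.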
Throughout their early history, supercomputers remained the province of large government agencies and government-funded institutions. The production runs of supercomputers were small, and their export was carefully controlled, since they were used in critical nuclear weapons research. They were also a source of national pride, symbolic of technical leadership.
So when the US National Science Foundation (NSF) decided in 1996 to buy a Japanese-made NEC supercomputer for its Colorado weather-research center, the decision was seen as another nail in the coffin of US technological greatness. Antidumping legislation was brought to bear against the importation of Japanese supercomputers, which were, and still are, based on refinements of vector processing.
But within two years of the NSF's decision, the supercomputing landscape changed. The antidumping decision was revoked. And the ban on exporting supercomputers to nuclear-capable nations was also partially rescinded. What had happened?
For one thing, microprocessor speeds found on desktops had overtaken the computing power of yesteryear's supercomputers. Video games were using the kind of processing power that had previously been available only in government laboratories. The first Bush administration had defined supercomputers as machines capable of more than 195 million theoretical operations per second (195 MTOPS). By 1997, ordinary microprocessors were capable of over 450 MTOPS.
Technologists began building distributed and massively parallel supercomputers and were able to tackle the operating system and software problems that had deterred Seymour Cray from multiprocessing 40 years before. Peripheral speeds had increased so that I/O was no longer a bottleneck. High-speed communications made distributed and parallel designs possible.
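The style of work these distributed and parallel machines do can be sketched with domain decomposition: split a big problem into independent chunks, compute each chunk, then combine the results. The sketch below runs the chunks serially; a real machine would hand each chunk to a different node (for example over MPI). The trapezoidal-rule integral, function names and chunk count are all illustrative assumptions.

```python
def integrate_chunk(f, lo, hi, steps=10_000):
    """Trapezoidal rule over [lo, hi] -- the per-node unit of work."""
    h = (hi - lo) / steps
    total = 0.5 * (f(lo) + f(hi))
    for i in range(1, steps):
        total += f(lo + i * h)
    return total * h

def parallel_integrate(f, lo, hi, n_chunks=8):
    """Decompose [lo, hi] into n_chunks independent subintervals.
    Each chunk is self-contained, so the loop below could be farmed
    out to separate processors; here it runs serially for clarity."""
    width = (hi - lo) / n_chunks
    bounds = [(lo + i * width, lo + (i + 1) * width)
              for i in range(n_chunks)]
    partials = [integrate_chunk(f, a, b) for a, b in bounds]
    return sum(partials)

if __name__ == "__main__":
    approx = parallel_integrate(lambda x: x * x, 0.0, 1.0)
    print(approx)  # close to 1/3
```

Because the chunks share no state until the final sum, the same code scales across thousands of processors; the hard part, as the history above shows, was building the operating systems and communication layers that make that farming-out reliable.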
As a result, vector processing technology may be in eclipse. NEC produced the Earth Simulator in 2002, which uses 5,104 processors and vector technology. According to the Top500 list of supercomputers (www.top500.org), the Simulator achieves 35.86 TFLOPS. IBM's Blue Gene/L, the current leader, is expected to achieve about 200 TFLOPS. It consumes about one-fifteenth the power per computation and occupies about one-fiftieth the space of previous supercomputers.
As detailed on the Top500 site, the trend in supercomputers is toward clusters of scalar processors running Linux and leveraging the power of off-the-shelf microprocessors, open-source operating systems and 50 years of experience with the middleware needed to pull these elements together.