U.S. efforts to develop a next-generation high-performance computing (HPC) platform are lagging for lack of dedicated government funding. In China, it's a much different story.
China has impressed analysts with its rocket-speed commitment to HPC. It had 72 systems in this month's Top 500 supercomputer list, making it the No. 2 HPC user in the world. Five years ago, it had just 10 systems in the Top 500 list.
Along the way to achieving its HPC goals, China built what was for a time the world's most powerful supercomputer, the Tianhe-1A.
The U.S. remains far and away the leader in the field for now, with 250 HPC systems on the Top 500 list. U.S.-based tech firms build most of the world's systems.
But U.S. dominance today is no guarantee of future success. Last week at SC12, the annual supercomputing conference, a panel of HPC researchers from China was asked about that nation's exascale plans.
Depei Qian, a professor at Beihang University and director of the Sino-German Joint Software Institute, said that China has historically been five years or more behind the U.S. Although China has tried to close the gap in recent years, Qian said: "Still, I guess three to five years will be the reality" of the gap between the two nations' efforts.
Earl Joseph, an HPC analyst at IDC, had a different take. "The Chinese are being very polite -- their goal is to build it (an exascale system) first," he said.
Part of China's effort includes building an indigenous tech industry. "What I think is interesting is the dedication (in China) to creating a home-grown economy for computing," said Pete Beckman, director of the Exascale Technology and Computing Institute at Argonne National Laboratory.
Beckman points to the way China is building large systems.
For its Tianhe-1A system, China turned to U.S. chips -- Intel's Xeon processors -- but used a China-developed interconnect. With its Sunway BlueLight supercomputer, China used its own chip, the ShenWei SW1600 microprocessor, but with InfiniBand interconnects.
"You can see what they're doing," said Beckman, explaining that China's developers reduce risk by mixing and matching standard technologies with homegrown approaches.
"Now, you can see what's going to happen," said Beckman. "You take your homegrown CPU, the homegrown network, and you put them together and you have a machine that from soup to nuts is a technical achievement for China and is really competitive."
The quest to build an exascale system that's 1,000 times more powerful than the petaflop systems being deployed today may be the biggest challenge yet in HPC. It requires new programming models and methods to manage data and memory, along with improved system resiliency.
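The scale gap described here reduces to simple arithmetic. As a rough sketch (the 20-petaflop Titan figure comes from this article; the unit definitions are the standard ones, and the comparison is illustrative only):

```python
# Standard definitions: 1 petaflop = 1e15 floating-point operations
# per second; 1 exaflop = 1e18, i.e., 1,000x a petaflop.
PETAFLOP = 1e15
EXAFLOP = 1e18

titan_flops = 20 * PETAFLOP  # Titan's roughly 20-petaflop peak, per this article

# An exascale machine is 1,000x a 1-petaflop system...
print(EXAFLOP / PETAFLOP)    # → 1000.0
# ...and the equivalent of 50 Titan-class systems in raw compute.
print(EXAFLOP / titan_flops) # → 50.0
```

Raw flops, of course, understate the challenge: as the article notes, reaching exascale also demands new programming models, data and memory management, and system resiliency, none of which follow from scaling alone.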
HPC researchers are cooperating internationally, to various degrees, on developing exascale system software.
William Harrod, research division director for Advanced Scientific Computing Research in the Department of Energy's Office of Science, told attendees at the SC12 conference that this international collaboration is needed. "I personally believe there is no way to achieve these goals [of building an exascale system] by any one government, one country -- it far exceeds what people are going to invest and also exceeds the technical talent, so collaboration -- that's easy," said Harrod.
"The competition is not on the computer systems," said Harrod. "The competition is on the science that you perform on the systems and what you do with that."
The U.S. has not yet funded its Exascale Computing Initiative nor has it put a price tag on it, although it's expected to cost billions of dollars. Congress is expected to get a budget request for exascale system development in the 2014 fiscal year budget, which begins next October.
Addison Snell, CEO of Intersect360 Research, said the development of petaflop systems points to what may happen with regard to exascale.
The first petaflop system, IBM's Roadrunner at Los Alamos National Lab, was a custom, hybrid design. It was soon surpassed by China's Tianhe-1A, which relied heavily on accelerators to achieve its high Linpack benchmark score, said Snell.
Although some could argue that Cray's Jaguar at Oak Ridge National Lab was more efficient and more productive at the time, "the public perception was that the U.S. had lost its lead in supercomputing. This was exacerbated when Japan's K Computer became the first to break the 10 petaflop barrier," he said.
The newest top-dog system, the 20-petaflop Titan at Oak Ridge, "brings the U.S. back to the top, as the first U.S. system in that top echelon that relies heavily on accelerators to hit its number," said Snell.
"I believe the Chinese can and may build an exaflop computer by the end of the decade," said Snell. "The U.S. may intend to wait for a more sophisticated design, but it will have to deal in the meantime with the public perception that China will have passed us by."
U.S. researchers and vendors are beginning to talk more about "extreme computing" versus exascale computing. Extreme computing is typically put at between 500 petaflops and one exaflop. The idea is that the outright power of the machine isn't as important as the science that can run on it -- and it's the latter capability that will be the true accomplishment.
Meanwhile, there was talk at the conference that China will announce a large system in June, just in time for the next update of the Top 500 list. That system, based on what Joseph has heard, could be something in the 30-petaflop range.
Patrick Thibodeau covers cloud computing and enterprise applications, outsourcing, government IT policies, data centers and IT workforce issues for Computerworld. Follow Patrick on Twitter at @DCgov or subscribe to Patrick's RSS feed.