The days of relying on shrinking transistors to achieve performance gains are over, and the chip industry needs to enter a new era of innovation in which system-level features are just as important as thinner transistor gates, Bernie Meyerson, vice-president and chief technologist at IBM's Systems and Technology Group, has told the Fall Processor Forum in a keynote address.
For years, chip designers had delivered huge increases in processor performance by sticking closely to a two-year cycle of process technology shrinks, he said. Smaller transistors allowed them to crank up clock speeds, add more cache memory and reduce the size of their processors without having to change many features from generation to generation.
The advent of the 90-nanometre process generation had changed that strategy for most chipmakers, Meyerson said.
Chips were now so small that atom-level defects on a silicon chip could cause power leakage up to 100 times the normal level, he said.
While current designs relied on innovations within the processor, future performance increases would be about chip and system-level innovation, Meyerson said.
These future innovations included dual processing cores, embedded memory and software, he said.
"This industry has begun to travel down a road not travelled," Meyerson said. "We need to identify the things that will win in the future."
The concerns presented by Meyerson weren't a surprise to the audience of chip designers and industry followers. Problems with power leakage at 90nm and industry leader Intel's shift away from skyrocketing clock speeds have been top concerns for just about every major microprocessor vendor and analyst firm over the last year.
In the 1980s, complementary metal oxide semiconductor (CMOS) transistors replaced bipolar transistors in order to keep the rate of processor innovation on track, Meyerson said.
The industry needed a similar type of "Plan B" right now, but there were many different opinions as to how that shift should be accomplished, he said.
Dual-core designs are one way that the industry hopes to keep performance on track.
IBM has had a dual-core processor since the introduction of the Power 4 in 2001, and much of the industry was planning to follow in those footsteps, Meyerson said.
Two processor cores, each running more slowly than a single-core processor, can together outperform that single-core chip without a huge increase in power consumption.
However, chipmakers must avoid the temptation to cram as many processor cores as possible onto a chip, Meyerson said. That would eventually lead the industry down the same path blazed by ever-faster single-core designs, where relying on a single method eventually runs into a wall, he said.
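The dual-core argument rests on how dynamic power scales in CMOS: power grows roughly with voltage squared times frequency, so two slower, lower-voltage cores can deliver more aggregate throughput at similar power. The following back-of-the-envelope sketch uses illustrative numbers of our own (70% clock, 0.85 relative voltage, ideal parallel scaling), not figures from IBM or the talk:

```python
# Dynamic CMOS power scales roughly as P ~ C * V^2 * f.
# All numbers below are assumed, for illustration only.

def dynamic_power(voltage, frequency, capacitance=1.0):
    """Relative dynamic power: P ~ C * V^2 * f (arbitrary units)."""
    return capacitance * voltage ** 2 * frequency

# Baseline: one core at full voltage and full clock speed.
single_core_power = dynamic_power(voltage=1.0, frequency=1.0)
single_core_perf = 1.0  # normalised throughput

# Two cores, each at 70% clock; the lower clock permits a lower
# supply voltage (0.85 relative) -- both values are assumptions.
dual_core_power = 2 * dynamic_power(voltage=0.85, frequency=0.7)
dual_core_perf = 2 * 0.7  # ideal scaling on fully parallel work

print(f"dual-core throughput: {dual_core_perf:.2f}x")  # 1.40x
print(f"dual-core power:      {dual_core_power:.2f}x")  # 1.01x
```

Under these assumptions the dual-core chip does 40% more work for roughly the same power, which is the essence of the tradeoff Meyerson described.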
As might be expected, Meyerson pointed to an IBM product as an example of how processors needed to evolve.
IBM's recent Blue Gene supercomputer was powered by thousands of chips that ran much slower than many processors, but relied on system-level engineering to deliver its performance, he said.
The Blue Gene supercomputer built for Lawrence Livermore National Laboratory recently posted a Linpack score of 36.01 teraflops, or 36 trillion floating point operations per second, enough to reclaim the title of the world's fastest supercomputer. But the most interesting story about Blue Gene was that it achieved that performance with a system a hundred times smaller than the Earth Simulator machine it edged out, Meyerson said.
It also consumed 28 times less power than the Earth Simulator, he said.
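The efficiency claim follows from simple arithmetic: if Blue Gene matches its rival's throughput while drawing 28 times less power, its performance per watt is about 28 times better. A quick check, taking the Earth Simulator's 35.86-teraflop Linpack score from the Top500 list (a figure not stated in the article):

```python
# Rough performance-per-watt comparison. The Earth Simulator figure
# is taken from the Top500 list, not from the article itself.
blue_gene_tflops = 36.01
earth_sim_tflops = 35.86   # assumed comparison figure
power_ratio = 28.0         # Earth Simulator draws ~28x more power

perf_per_watt_gain = (blue_gene_tflops / earth_sim_tflops) * power_ratio
print(f"Blue Gene is ~{perf_per_watt_gain:.0f}x better per watt")  # ~28x
```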
Transistors will continue to shrink, Meyerson said.
IBM, Intel and other chip companies remain determined to move to a new process generation every two to three years. But recent chipmaking innovations such as strained silicon and silicon-on-insulator technology will grow more important with each successive shrink, and technologies such as virtualisation will become widespread, he said.
"There are trajectories forward that are enormously promising," Meyerson said.