DEFINITION: Random-access memory (RAM) refers to the chips that are used inside a PC to store instructions and data for processes that are running. Typically, these chips are located close to a CPU, and some are even built into the CPU itself. Unlike data stored on a hard disk (also a form of memory), most RAM is volatile; it needs electrical power to retain data.
Memory is a lot like your health: you tend not to think about it unless there's a problem. In the case of computer memory, the problems are usually plodding system performance or applications that don't run properly.
Whether they're inside a desktop PC, a notebook computer or a high-end network server, RAM chips play the critical role of keeping the CPU efficiently fed with data or instructions from programs on the hard drive. How well the chips perform this role means the difference between a CPU that misses computing cycles and moves like a steam locomotive and a CPU that speeds along like a bullet train.
The RDRAM problem
The critical role that memory plays in overall system performance hit home earlier this year when Intel announced the recall of millions of PC motherboards equipped with its new 820 chip set. Originally designed to support the new direct Rambus dynamic RAM (RDRAM) memory, the 820 chips were shipped with a special converter that let the processor run synchronous dynamic RAM (SDRAM), a cheaper, more available but slower memory technology. Problems with the converter forced 820 users to endure unexpected reboots and other problems.
The glitches also knocked back Intel's release schedule for its next-generation Timna CPUs, which were also designed for pairing with RDRAM. Now the CPUs probably won't ship until next year.
Intel remains committed to RDRAM for two important reasons. First, the company is a codeveloper of the technology, along with Rambus. Second, on paper at least, RDRAM looks like the technological answer for fast memory chips that will be able to keep pace with next-generation microprocessors such as Intel's Willamette and Merced, which are expected later this year.
RDRAM's speed derives from its 400MHz memory bus. Because the architecture can transfer data twice during each memory clock cycle, on both the rising and falling edges of the clock, an effective data rate of 800MHz is possible. In addition, thanks to a 2-byte data channel in the RDRAM architecture, the technology can achieve peak data-transfer speeds of 1.6GB/sec.
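The arithmetic behind those figures can be sketched briefly. This is an illustrative calculation using the numbers quoted above (a 400MHz bus, two transfers per cycle, a 2-byte channel); the function name is ours, not an industry term.

```python
# Peak-bandwidth sketch for direct Rambus DRAM (RDRAM), using the
# figures from the article: a 400MHz memory bus, two transfers per
# clock cycle (double data rate), and a 2-byte-wide data channel.

def peak_bandwidth_bytes_per_sec(bus_hz, transfers_per_cycle, bytes_per_transfer):
    """Peak transfer rate = clock rate x transfers per cycle x channel width."""
    return bus_hz * transfers_per_cycle * bytes_per_transfer

rdram = peak_bandwidth_bytes_per_sec(400_000_000, 2, 2)
print(rdram / 1e9)  # 1.6 (GB/sec): the 800MHz effective rate times 2 bytes
```

The same formula applies to any synchronous memory design; only the clock rate, transfers per cycle and channel width change.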
RDRAM faced a handful of technical and business problems even before the 820 woes surfaced. RDRAM is expensive, costing as much as three times more than competing memory technologies. Also, RDRAM requires more motherboard real estate than other memory alternatives, making it harder to incorporate into system designs. RDRAM makers are beginning to produce the chips using 0.18-micron processes, which will help shrink die sizes.
Because of these problems, memory and PC manufacturers have been slow to adopt RDRAM and have remained dependent on other fast but less radical memory designs.
Today's workhorse memory standard is SDRAM, which is designed to handle data burst rates as high as 150MHz. 'Synchronous' means the chip can march in step with the CPU's system clock, which usually means zero wait states and more efficient data retrieval. However, unlike some newer chips, SDRAM can send data to the CPU only once per clock cycle.
A revved-up version of SDRAM called double data rate, or DDR-SDRAM (and sometimes SDRAM II), overcomes the once-per-cycle handicap. It can send data to the CPU twice per clock cycle for greater processing efficiency. Current samples of DDR-SDRAM chips are running at 266MHz. Although DDR-SDRAM is slower than RDRAM, DDR-SDRAM prices could match SDRAM's by year's end.
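The difference between the two designs comes down to transfers per clock cycle. A small sketch makes the comparison concrete; the 133MHz base clock for DDR is our assumption (266MHz effective implies two transfers on a 133MHz clock), and the helper name is illustrative.

```python
# Illustrative comparison of effective data rates: SDRAM moves data
# once per clock cycle, DDR-SDRAM twice (both clock edges).

def effective_rate_hz(clock_hz, transfers_per_cycle):
    """Effective transfer rate seen by the memory controller."""
    return clock_hz * transfers_per_cycle

sdram = effective_rate_hz(150_000_000, 1)  # 150MHz SDRAM, one transfer/cycle
ddr = effective_rate_hz(133_000_000, 2)    # assumed 133MHz clock, two transfers/cycle

print(sdram / 1e6)  # 150.0 MHz effective
print(ddr / 1e6)    # 266.0 MHz effective, matching the quoted DDR samples
```

Doubling transfers per cycle rather than the clock itself is what lets DDR-SDRAM raise throughput without the signaling difficulties of a much faster bus.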
Prevailing economic, technical and licensing issues are keeping RDRAM and DDR-SDRAM from becoming the clear-cut successors to conventional SDRAM for high-performance computing. Nevertheless, the battle over next-generation chip technologies means fewer people are taking memory for granted any more.