Non-volatile memory's future is in software
- 25 October, 2012 09:56
There will be a sea change in the non-volatile memory (NVM) market over the next five years, with denser and more reliable technologies challenging the NAND flash memory now dominant in solid-state drives (SSDs) and embedded in mobile products.
As a result, server, storage and application vendors are now working on new specifications to optimize the way their products interact with NVM, moves that could lead to the replacement of DRAM and hard drives alike for many applications, according to the Storage Networking Industry Association (SNIA) technical working group.
"This [SNIA] working group recognizes that media will change in [the] next three to five years. In that time frame, the way we handle storage and memory will have to change," said SNIA technical working group member Jim Pappas. "Industry efforts are under way to remove the bottleneck between the processor and the storage."
Pappas, who is also the director of technology initiatives in Intel's Data Center Group, noted there are more than a dozen non-volatile memory competitors coming down the pike to challenge NAND flash. Those technologies include Memristor, ReRAM, Racetrack Memory, Graphene Memory and Phase-Change Memory.
IBM's phase-change memory chip uses circuitry that is 90 nanometers wide and could someday challenge NAND flash memory's market dominance.
"What is happening across the industry with multiple competing technologies to NAND flash is the memory that goes into SSDs today will be replaced by something very close to the performance of system memory," Pappas said. "So now, it's the approximate speed as system memory, but yet it's also nonvolatile. So it's a big change in computing architecture."
For example, last year IBM announced a breakthrough in phase-change memory that could lead to the development of solid-state chips that can store as much data as NAND flash technology but with 100 times the performance, better data integrity and vastly longer lifespan.
SNIA's Non-Volatile Memory (NVM) Programming Technical Working Group, which includes a who's who of hardware and software vendors, is working on three specifications. First, the group wants to improve the OS speed by making it aware when a faster flash medium is available; secondly, it wants to give applications direct access to the flash through the OS; and lastly, it wants to enable new NVMs to be used as system memory.
"Most significantly, when you use non-volatile memory in the future, you can use it as part of your memory hierarchy and not just [mass] storage," Pappas said.
Among the companies backing the specifications effort are IBM, Dell, EMC, Hewlett-Packard, NetApp, Fujitsu, QLogic, Symantec, Oracle and VMware.
NAND flash accessed like hard drives today
Today, a processor accesses system memory (DRAM) directly in hardware through a memory controller. The memory controller is usually integrated into the microprocessor chip. There is no software necessary. It is all performed in hardware.
By contrast, a microprocessor talks to NAND flash the same way it accesses a hard drive: through operating system calls that drive the traditional storage software stack. The OS then transports the data to or from the flash memory (or hard drive) over storage interfaces such as SCSI, SAS or SATA.
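The distinction can be sketched in a few lines of Python, using an ordinary file as a stand-in for an NVM device. This is purely illustrative; real direct-access designs map the device itself rather than a file, and the names here are not from any SNIA specification:

```python
import mmap
import os
import tempfile

# A scratch file standing in for an NVM device (illustrative only).
fd, path = tempfile.mkstemp()
os.write(fd, b"\x00" * 4096)

# Storage-stack style: every access is an OS call (seek/read/write),
# the way a processor reaches NAND flash or a hard drive today.
os.lseek(fd, 0, os.SEEK_SET)
os.write(fd, b"hello")
os.lseek(fd, 0, os.SEEK_SET)
block = os.read(fd, 5)        # data crosses the kernel on every call

# Memory style: map the region once, then use plain loads and stores,
# the way a processor reaches DRAM through its memory controller.
m = mmap.mmap(fd, 4096)
m[0:5] = b"world"             # an ordinary store, no per-access syscall
byte_view = bytes(m[0:5])     # an ordinary load

m.close()
os.close(fd)
os.unlink(path)
```

The second pattern is what "something very close to the performance of system memory" enables: once the mapping exists, the storage stack is out of the data path entirely.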
Once next generation NVM arrives, the interface will change; that is a product implementation decision that is outside the scope of the SNIA NVM Programming Technical Working Group, Pappas said.
For example, one popular method that is already being used in multiple products today is connecting NVM directly to the PCI Express (PCIe) bus, which is usually directly connected to the processor.
Solid-state memory vendor Fusion-io is among more than a half dozen companies selling NAND flash PCIe cards for servers and storage arrays. The company has also been working on software development kits and hardware products that will eventually allow its NAND flash cards to be used as system memory and mass storage in the same way SNIA's specifications will for the industry at large.
Microsoft and Fusion-io have been working to develop APIs enabling SQL databases to use what Fusion-io calls its Virtual Storage Layer (VSL), which in turn allows developers to optimize applications for Fusion-io's ioMemory PCIe cards. Running atop a conventional OS, SQL Server still treats NAND flash like spinning media, using a buffer and writing data twice to ensure resiliency.
Fusion-io calls its interface effort the Atomic Multi-block Writes API. The API is an extension to the MySQL InnoDB storage engine that eliminates the need for a buffer or redundant writes, giving the application direct access to -- and control of -- the NAND flash media.
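A toy model makes the arithmetic behind eliminating redundant writes concrete. The function names and page size below are illustrative, not Fusion-io's actual API:

```python
# Toy model of a doublewrite-buffer scheme vs. an atomic multi-block write.
# Names and numbers are illustrative, not Fusion-io's actual API.

PAGE = 16 * 1024  # a 16KB database page (InnoDB's default size)

def double_write(pages):
    """Classic path: each page is written to a doublewrite buffer first,
    then to its final location, so a torn (partial) write is recoverable."""
    return pages * PAGE * 2   # every page hits the flash twice

def atomic_write(pages):
    """Atomic path: the device guarantees all-or-nothing multi-block
    commits, so the extra protective copy is unnecessary."""
    return pages * PAGE       # every page hits the flash once

written_classic = double_write(1000)
written_atomic = atomic_write(1000)
ratio = written_classic / written_atomic   # 2.0
```

Halving bytes written is exactly where the "half the number of writes, and twice the life for the NAND flash" figure cited below comes from, since flash cells wear out per write.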
"If we architect it to act like memory, and not like disk, we can do block I/O [reads and writes] and memory-based access," said Gary Orenstein, senior vice president of products at Fusion-io. "The APIs say to SQL, 'You have more capability than you think you have.'"
The result is a 30% to 40% improvement in SQL database performance, half the number of writes, and twice the life for the NAND flash because it is storing half of the data it typically would, Orenstein said.
"We're not saying flash will replace every instance of DRAM, but developers will have 10 times the capacity of DRAM at a little less performance and a fraction of the cost and power," Orenstein said.
Products using the Atomic Multi-block Writes API are expected within a year, Orenstein said.
Through new APIs, Fusion-io's 10TB ioDrive Octal PCIe module could someday play a dual role of system memory and mass storage.
How NVM has affected data centers
To understand the impact of NVM in a data center, it helps to look at what was there before it: hard drives and volatile system memory or DRAM. DRAM is extremely expensive and is volatile, meaning it loses all data when powered off unless it has a battery backup.
DRAM offers about six orders of magnitude better performance than hard drives, or about one million times, according to Pappas. In 1987, when NAND flash entered the picture, it offered a middle ground with about three orders of magnitude better performance than disk drives, or about 1,000 times faster, Pappas said. Until recently, however, flash was not cheap enough to use as a mass storage medium in servers and arrays. Now that it is, its popularity is soaring.
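Those ratios can be sanity-checked with a few lines of arithmetic. The absolute latencies below are illustrative ballpark figures chosen to match the article's ratios, not measurements:

```python
import math

# Illustrative ballpark access times, chosen to match the cited ratios.
hdd_latency_s  = 5e-3   # a few milliseconds per random disk access
nand_latency_s = 5e-6   # ~1,000x faster than disk
dram_latency_s = 5e-9   # ~1,000,000x faster than disk

# Orders of magnitude = log base 10 of the speedup ratio.
nand_vs_hdd = math.log10(hdd_latency_s / nand_latency_s)   # 3 orders
dram_vs_hdd = math.log10(hdd_latency_s / dram_latency_s)   # 6 orders
```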
Hardware manufacturers now use NAND flash as an additional tier of mass storage that provides faster performance for I/O-hungry applications such as online transaction processing and virtual desktop infrastructures. But NAND flash is typically not used as system memory, meaning a CPU does not access it as directly as it does DRAM memory.
Today, storage infrastructures are built based on the performance of hard disk drives. SNIA's efforts will promote an infrastructure that supports the type of performance that NVM can offer.
SNIA's NVM Programming Technical Working Group was formed in July and promotes the development of operating system enhancements to support NVM hardware. "We're focusing on that shared characteristic of this next-generation memory. So we don't need to care which particular technology wins, we just need to design an infrastructure that is capable of using what that replacement technology will be," Pappas said.
How new specifications address NVM performance
SNIA's working group will first focus on optimizing OSes, so that software platforms and the file stack recognize when faster media is available.
The idea behind the effort is to figure out how to speed up the performance of an OS so that any application would also benefit from the performance boost.
"Another aspect not available in storage systems today is intelligent interrogation of what the capabilities of the storage is," he said. "That's pretty rudimentary. How can an OS identify what features are available and be able to load modules specific to the characteristics of that device."
Secondly, the task force will work on new interfaces through the OS to applications, giving applications a "direct access mode" or "OS bypass mode" fast I/O lane to the NVM. A direct access mode would allow the OS to configure NVM so that it's exclusive to an application, cutting out a buffer and multiple instances of data, which add a great deal of latency.
For example, an OS would be able to offer a relational database application direct access to NVM. IBM, with DB2, and Oracle have already demonstrated how their applications would work with direct access to NVM, according to Tony Di Cenzo, director of standards at Oracle and a SNIA task force member.
By far, the most difficult job the task force faces is the development of a specification that allows NVM to be used as system memory and as mass storage at the same time.
"This is still a brand new effort," Pappas said. "Realistically, the [new NVM] media will take several years to materialize. So what we're doing here is having the industry come together, identifying future advancements ... and defining a software infrastructure in advance so we can get full benefit of it when it arrives."
NAND flash increasingly under pressure
Although new NVM technology will be available in the next few years, NAND flash is not expected to go anywhere anytime soon, since it could take years for new NVM media to reach NAND flash's price point. But NAND flash is still under pressure due to technology limitations.
Over time, manufacturers have been able to shrink the geometric size of the circuitry that makes up NAND flash technology from 90 nanometers a few years ago to 20nm today. The process of laying out the circuitry is known as lithography. Most manufacturers are using lithography processes in the 20nm-to-40nm range.
The smaller the lithography process, the more data can fit on a single NAND flash chip. At 25nm, the cells in silicon are 3,000 times thinner than a strand of human hair. But as geometry shrinks, so too does the thickness of the walls that make up the cells that store bits of data. As the walls become thinner, more electrical interference, or "noise," can pass between them, creating more data errors and requiring more sophisticated error correction code (ECC). The strength of the data signal a NAND flash controller can read, relative to that noise, is known as the signal-to-noise ratio.
The processing overhead for hardware-based signal decoding is relatively high, with some NAND flash vendors allocating up to 7.5% of the flash chip as spare area for ECC. Increasing the ECC hardware decoding capability not only boosts that overhead further, but its effectiveness also declines as NAND's signal-to-noise ratio worsens.
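To see in miniature what spending spare bits on error correction means, here is a classic Hamming(7,4) single-bit corrector. Real NAND controllers use far stronger BCH or LDPC codes over whole pages, so this is a teaching sketch only:

```python
# Hamming(7,4): 4 data bits protected by 3 parity bits (43% overhead),
# able to locate and fix any single flipped bit. NAND controllers apply
# the same principle with much stronger codes at far lower overhead.

def encode(d):                      # d: list of 4 data bits
    d1, d2, d3, d4 = d
    p1 = d1 ^ d2 ^ d4               # parity over codeword positions 3,5,7
    p2 = d1 ^ d3 ^ d4               # parity over codeword positions 3,6,7
    p3 = d2 ^ d3 ^ d4               # parity over codeword positions 5,6,7
    return [p1, p2, d1, p3, d2, d3, d4]   # positions 1..7

def correct(c):                     # c: received 7-bit codeword
    s1 = c[0] ^ c[2] ^ c[4] ^ c[6]  # recheck each parity group
    s2 = c[1] ^ c[2] ^ c[5] ^ c[6]
    s3 = c[3] ^ c[4] ^ c[5] ^ c[6]
    pos = s1 + 2 * s2 + 4 * s3      # syndrome = 1-based error position
    if pos:
        c[pos - 1] ^= 1             # flip the bad bit back
    return [c[2], c[4], c[5], c[6]] # recovered data bits

word = encode([1, 0, 1, 1])
word[4] ^= 1                        # one bit of "noise" flips in a cell
recovered = correct(word)           # the original data comes back
```

The shrinking-geometry problem the article describes is that thinner cell walls flip more bits per page, so the code must grow stronger (more spare area, more decode work) just to hold the error rate steady.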
Some experts predict that once NAND lithography drops below 10nm, there will be no more room for denser, higher-capacity products, which in turn will usher in newer NVM media with greater capabilities.
Lucas Mearian covers storage, disaster recovery and business continuity, financial services infrastructure and health care IT for Computerworld. Follow Lucas on Twitter at @lucasmearian or subscribe to Lucas's RSS feed. His e-mail address is email@example.com.