No longer seen as SAN's poor relative, today's network storage appliances are robust and capable.
Network-attached storage (NAS) is quietly transitioning from an ad hoc, departmental storage add-on to a serious top-down enterprise storage resource. This highly reliable file server approach has always been relatively inexpensive and easy to configure and manage. But now, a single NAS server accommodates tens of terabytes of data, and NAS systems sport more efficient backup technology and better application performance over today's faster Gigabit Ethernet networks. What's missing? Better tools to manage across distributed NAS resources.
Bigger and Bigger
NAS takes on new roles as its capacity and scalability increase.
While storage-area networks (SANs) have been getting all the attention, network-attached storage (NAS) has been quietly breaking all the rules. Essentially a plug-and-play disk storage subsystem with embedded file-serving software, NAS technology was originally seen as an easy way to add a few hundred gigabytes of storage to a LAN. Two years ago, such boxes might have scaled to 0.5TB. Today, a single NAS system may support as much as 30TB.
Data backup was supposed to be a big problem for large NAS devices due to network congestion. But thanks to adoption of the Network Data Management Protocol standard and technologies such as Sunnyvale, Calif.-based Network Appliance Inc.'s SnapShot feature (which creates a copy, or "image," of the file system and associated disk block mappings), that's changed. Today, you can run NAS backups over high-bandwidth Gigabit Ethernet networks quite efficiently.
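The snapshot idea described above can be sketched as a copy-on-write file system: the snapshot is just a frozen copy of the file-to-block mapping, so it is nearly free to take and consumes disk space only as the live data diverges from it. This is an illustrative toy model, not NetApp's actual implementation:

```python
# Illustrative sketch (not NetApp's code): a snapshot as a frozen copy
# of the file-to-block mapping. Data blocks are shared between the live
# file system and the snapshot until a write occurs.

class Filer:
    def __init__(self):
        self.blocks = {}      # block_id -> data
        self.active = {}      # filename -> block_id (live mapping)
        self.snapshots = []   # list of frozen mappings
        self._next = 0

    def write(self, name, data):
        # Copy-on-write: new data always goes to a fresh block, so a
        # block referenced by any snapshot is never overwritten.
        bid = self._next
        self._next += 1
        self.blocks[bid] = data
        self.active[name] = bid

    def snapshot(self):
        # Freezing the mapping is cheap; no data blocks are copied.
        self.snapshots.append(dict(self.active))
        return len(self.snapshots) - 1

    def read(self, name, snap=None):
        mapping = self.active if snap is None else self.snapshots[snap]
        return self.blocks[mapping[name]]

filer = Filer()
filer.write("survey.dat", b"v1")
s = filer.snapshot()
filer.write("survey.dat", b"v2")    # live file moves to a new block
print(filer.read("survey.dat"))     # b'v2'
print(filer.read("survey.dat", s))  # b'v1' (snapshot image unchanged)
```

Because the snapshot is a stable, read-only image, a backup job can stream it to tape at leisure while users keep writing to the live volume.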
NAS devices have traditionally served up files to end users, while SANs have allowed application servers to access disk storage directly, without a server intermediary, transferring data in efficient block format over Fibre Channel, a dedicated, high-speed serial interconnect. But NAS is now well established for adding capacity to e-mail servers, Web server farms and SQL and Oracle databases, where clusters of NAS devices can increase availability and throughput.
Data center applications such as online transaction processing still work better on SANs, which offer high scalability and faster performance, especially for large files or where storage traffic is heavy. But even "some block-level applications, like an Oracle or IBM DB2 database, are adopting a file system to replace the block storage technique," says Jon Toigo, an independent storage consultant in the Tampa Bay area. "There is a certain convergence going on, and NAS is well positioned to take advantage of it."
NAS servers are relatively easy and inexpensive to install, and users say they've been highly reliable. "NAS is great because you can distribute it all over the place, and you can manage it from one location," Toigo says.
NAS is not without its limitations, however. Most servers use a Web management interface, which works fine for a single device. But as the systems reach capacity, administrators must add new NAS systems. The problem: "You've got to surf the [individual] Web pages associated with each NAS box to maintain them. That's a hassle," says Toigo.
Then there's storage virtualization. A single NAS server allows multiple disks to appear as a single virtual storage volume to end users and application servers. And administrators can add hot-pluggable disk drives to the system and expand the virtual volumes on the fly, without disrupting applications using them. But these volumes typically can't span multiple NAS servers, let alone work with other vendors' NAS devices, and that limits scalability. A few start-ups have begun offering that capability, but the technology hasn't yet trickled up to the market leaders.
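The single-server virtualization just described amounts to concatenating disks into one linear block address space: adding a drive appends capacity without disturbing existing block addresses, which is why the volume can grow without disrupting applications. A minimal sketch of that idea (hypothetical classes, not any vendor's code):

```python
# Illustrative sketch: a virtual volume concatenating several disks
# into one block address space. Adding a disk grows the volume on the
# fly; existing block addresses (and the data behind them) are
# unaffected because new capacity is appended at the end.

class Disk:
    def __init__(self, num_blocks):
        self.data = [None] * num_blocks

class VirtualVolume:
    def __init__(self, disks):
        self.disks = list(disks)

    def add_disk(self, disk):
        # "Hot-plug" expansion: no remapping of existing blocks.
        self.disks.append(disk)

    @property
    def capacity(self):
        return sum(len(d.data) for d in self.disks)

    def _locate(self, block):
        # Map a volume-wide block number to (disk, local offset).
        for disk in self.disks:
            if block < len(disk.data):
                return disk, block
            block -= len(disk.data)
        raise IndexError("block beyond volume capacity")

    def write(self, block, value):
        disk, off = self._locate(block)
        disk.data[off] = value

    def read(self, block):
        disk, off = self._locate(block)
        return disk.data[off]

vol = VirtualVolume([Disk(100), Disk(100)])
vol.write(150, "x")         # lands on the second disk
vol.add_disk(Disk(100))     # expand the volume on the fly
print(vol.capacity)         # 300
print(vol.read(150))        # 'x' (old data untouched)
```

The limitation the article notes is that in most NAS servers this mapping lives inside one box: the `disks` list can only contain that server's own drives, so a volume cannot span a second NAS server, let alone another vendor's.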
"NAS is a great out-of-the-box solution, but you don't have a lot of options in configuring it," says Mike Carrato, a partner in Accenture Ltd.'s New York office. That's one reason organizations aren't loading anywhere near 30TB on a single NAS device today. But users are looking for new ways to use this relatively inexpensive, high-capacity storage, and vendors are looking for new ways to accommodate them. Ultimately, analysts say, IP SANs will offer a single, virtualized storage pool for both NAS and SAN functions. Until then, users will have to make do with SAN/NAS hybrids like IBM's NAS 300G, which places a NAS front end on an IBM Shark Fibre Channel SAN to convert client file requests to block data requests.
Getting Big Files Into a Small Box
Forest Oil is a US$2 billion independent oil and gas exploration and production company.
Goal: Allow geologists and geophysicists to share 1.5TB of seismic data files; consolidate backup of seismic data and an Oracle database; improve backup performance; accommodate the purchase of large quantities of additional seismic data files with minimal disruption.
Challenges: An existing 5-year-old NAS device maxed out at 1TB using older 9GB disk drives. Growth in data storage needs was stretching backup times. The amount of floor space available to accommodate more boxes was limited.
Strategy: Replace existing NAS unit with Auspex Systems Inc.'s NS 3000 NAS device with 2TB of capacity; migrate to Veritas Software Corp.'s NetBackup version supporting the Network Data Management Protocol (NDMP); migrate Oracle data to NAS; deploy everything on Gigabit Ethernet.
Issues: Performance was a key differentiator in choosing a NAS device, says Patrick Murphy, manager of IT. "Typically, seismic data comes in 40 or 50MB files," he says. By working with peer organizations and running comparative tests using his applications, Murphy found that performance varied by as much as 20 percent for his application mix.
Advice: Compatibility counts. "You have to be concerned that the backup software, filer head and backup hardware will work together," Murphy says.
Payoff: The new system provides more storage in a smaller footprint and allows volumes to scale to 10TB without interruption. Performance improved 10 percent to 20 percent, and backup times were cut in half. "The newer box supports NDMP," Murphy says. "It dropped our backup time by more than 50 percent. . . . That's mostly attributable to NDMP."
NAS by Class
Enterprise-class NAS: For enterprise-class network-attached storage, EMC Corp.'s Clariion and Network Appliance Inc.'s NetApp filer servers still rule. The devices can scale to nearly 30TB and offer the most advanced management tools, including the ability to work with enterprise network management tools such as Computer Associates International Inc.'s Unicenter TNG and Hewlett-Packard Co.'s OpenView. "EMC has the leading storage management software in Control Center. They're very well placed to serve whatever you need to do if you can afford them," says Mark Roberts, principal of Dataphile Consulting in Austin, Texas. These vendors also offer enterprise-class support programs and proven reliability.
Departmental filers: Filer servers from vendors such as Quantum Corp. and Maxtor Corp. offer an easy way to add inexpensive storage in remote offices, departments or small to midsize businesses. They're good for adding a quick 300GB of storage on a departmental LAN but become more unwieldy to manage as the number of NAS servers grows. If you're using NAS devices to consolidate Windows servers, look for systems that use Microsoft Corp.'s Common Internet File System (which supports Active Directory) rather than the Unix-based Network File System.
The innovators: Start-ups are the great innovators in NAS technology. For example, LeftHand Networks Inc. in Boulder, Colo., and Tricord Systems Inc. in Plymouth, Minn., can virtualize and centrally monitor storage across multiple NAS servers, creating highly scalable storage systems that can be distributed for greater fault tolerance. And Tek-Tools Inc. in Dallas is working on XML-based agent technology that will allow management of multiple vendors' NAS boxes from a single management screen.
Storage Appliances: Not All Plug and Play
NAS appliances seem easy, but keep your hands on the wheel when using them for mission-critical application servers.
Don't tell Dan Rosman that NAS devices just plug and play. His first clustered NetApp servers from Network Appliance Inc. wouldn't work with his warehouse application server until he discovered that the HP-UX-based system needed a patch to rectify compatibility problems with the Network File System (NFS) file-sharing software the NAS server uses.
After that, Rosman, IT director at Fairfield, Calif.-based Jelly Belly Candy Co., ran into trouble configuring storage virtualization. The NetApp server presented its storage as a network file share, but the Oracle database software and warehouse management application wanted a drive letter. Rosman mapped the drive letter to the new share name, but "every time a user would log off, it would break all those mappings and the system would stop working," he says. He ended up going through the registry settings for the Oracle software and the application and removing all references to the drive letter. (Network Appliance says it now lets the NAS appliance appear as a drive letter.)
Rosman then set up NetApp's SnapShot feature, which takes a snapshot image of the file system and data every four hours, allowing for rapid rollback in the event of data corruption. But the multiple images take up valuable disk space. Rosman would like to take snapshots of other applications in the same volume only once a day, but the technology won't support that.
Recently, Rosman had a snapshot problem that resulted from administrator error. "Instead of letting the system do it, we took some manual snapshots and left those in the production volume," he says. Those images continued to consume more disk space until "we filled up one of the filers and the database stopped." Rosman had decided to pass on Network Appliance's monitoring software in favor of a plug-in for his HP OpenView management software, but he hadn't yet implemented it.
Rosman says the NetApp file servers are "very easy to manage." He says he also found support and performance better than with server-attached storage, thanks in part to NetApp's 1GB cache. But appliance or not, the system requires careful management when used for mission-critical applications.