Nothing new under the SAN

I'm a relatively new columnist for Computerworld, but I have a 16-year history with its publishing parent, International Data Group (IDG). I've spent most of that time writing for sister IDG publication InfoWorld, but I also started the Web publication LinuxWorld and ran the short-lived IDG Web publication Network Computing World.

Before I explain why I've brought this up, I have to tell you first how pleased I am that all of my months of begging Editor in Chief Maryfran Johnson to bring me on board finally paid off. I'm thrilled to be part of Computerworld.

I mention this not only for the self-serving purpose of sucking up to management, but also for the self-serving purpose of saying "I told you so" with more authority than I've earned with the few columns I've written for Computerworld. But if that isn't authority enough, you can go as far back as the Bible's book of Ecclesiastes, which also told you so. It says that there's nothing new under the sun. (It says a lot more than that, but we can stop there for the purpose of discussing storage technology.)

More specifically, I've said for a long time that from the moment we hooked up one PC to another, we began to face problems that would eventually force us to reinvent the mainframe model of computing. Take hierarchical storage management (HSM). It hit the radar screens of PC-centric data centers a few years ago, but it's nothing new to mainframe mavens.

In case you were out sick that day, HSM magically migrates some of your aging data from expensive storage to cheaper storage without users having to know that anything moved. The only thing users notice is that the spreadsheet they haven't touched in a year takes longer to load because the HSM software snuck it onto a tape, leaving only the illusion of the file behind.
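To make that stub-and-recall trick concrete (at the risk of oversimplifying what real HSM products do inside the file system), here's a toy sketch. Everything in it is invented for illustration: two ordinary directories stand in for expensive disk and cheap tape, and the function names are mine, not any vendor's.

```python
import os
import shutil
import tempfile

STUB_PREFIX = "HSM-STUB:"

def migrate(path, archive_dir):
    """Move a cold file's data to cheap storage, leaving a tiny stub
    behind so the file still appears to exist where users expect it."""
    target = os.path.join(archive_dir, os.path.basename(path))
    shutil.move(path, target)           # real data goes to "tape"
    with open(path, "w") as stub:
        stub.write(STUB_PREFIX + target)  # the "illusion" of the file

def read_with_recall(path):
    """Open a file; if it's only a stub, transparently recall the real
    data first. This is why the year-old spreadsheet loads slowly."""
    with open(path) as f:
        content = f.read()
    if content.startswith(STUB_PREFIX):
        target = content[len(STUB_PREFIX):]
        os.remove(path)                 # drop the stub
        shutil.move(target, path)       # slow recall from "tape"
        with open(path) as f:
            content = f.read()
    return content

# Demo: migrate an untouched spreadsheet, then open it again.
fast_disk = tempfile.mkdtemp()
tape = tempfile.mkdtemp()
sheet = os.path.join(fast_disk, "budget.xls")
with open(sheet, "w") as f:
    f.write("Q1 numbers")
migrate(sheet, tape)
print(read_with_recall(sheet))  # recalled from "tape": Q1 numbers
```

The user asked for the same file both times; only the second open paid the recall penalty, which is the whole point of the illusion.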

Network-attached storage (NAS) and storage-area networks (SAN) pushed HSM off the front page, but they're nothing new either. NAS products are cool because most of the time, you just plug them in and start using them, with a minimum of effort. But under the hood, you'll find a normal operating system, usually a flavor of Unix, running a Swiss Army Knife of network file server protocols.

Multiple protocols and file formats are the reason why HSM was so easy to upstage. It's hard enough for a NAS appliance to provide all these services in one box. Imagine how difficult it is to build an HSM solution that has to deal with the details of every client and server in today's typical decentralized and heterogeneous environment of Unix boxes, AS/400s, Windows clients and so on.

This is the problem that spawned the Storage Networking Industry Association (SNIA) standards organization, as well as the Network Data Management Protocol (NDMP) standard, which many backup and data management software products use.

The point of NDMP is to make it possible for developers to focus on creating better data-management software instead of worrying about evolving file systems and network protocols. The burden rests upon the vendors of storage appliances to figure out how to support NDMP, whether they add it as a layer on top of NFS or Common Internet File System (CIFS) or build it right into the operating system.
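That division of labor is easier to see in code. What follows is not the NDMP wire protocol, just a sketch of the insulation principle it embodies: the data management software codes against one narrow interface, and each appliance vendor supplies its own adapter behind it. All class and method names here are invented for illustration.

```python
from abc import ABC, abstractmethod

class BackupTarget(ABC):
    """The one narrow interface the backup developer ever sees."""
    @abstractmethod
    def list_files(self): ...
    @abstractmethod
    def read_file(self, name): ...

class UnixApplianceTarget(BackupTarget):
    """A vendor adapter for an NFS-style, case-sensitive file store."""
    def __init__(self, files):
        self._files = dict(files)
    def list_files(self):
        return sorted(self._files)
    def read_file(self, name):
        return self._files[name]

class WindowsShareTarget(BackupTarget):
    """A vendor adapter for a CIFS-style, case-insensitive share."""
    def __init__(self, files):
        self._files = {k.lower(): v for k, v in files.items()}
    def list_files(self):
        return sorted(self._files)
    def read_file(self, name):
        return self._files[name.lower()]

def back_up(target):
    """The data management code: identical for every appliance,
    insulated from whatever file system quirks sit behind it."""
    return {name: target.read_file(name) for name in target.list_files()}
```

One `back_up` routine now serves the Unix box and the Windows share alike; the messy differences live in the adapters, which is the burden the column says rests on the appliance vendors.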

It's a long shot, but I'm counting on the success of SANs to pressure vendors either into creating a protocol like NDMP to replace protocols like NFS and CIFS entirely, or into adopting and advancing just one of them. There's a common thread running through all these storage issues: insulation. HSM archives files while insulating people from having to know where a file is stored. NDMP insulates data management developers from dissimilar file system features and protocols. Likewise, the ideal SAN should insulate administrators and users alike from the details of network file systems and protocols.

But even if the typical SAN ends up supporting multiple protocols, it's still a way to aggregate dissimilar islands of data into a whole while making many of the details of each island transparent. In short, a SAN provides some of the virtual centralization of data that's needed to make data more manageable. And centralization, whether real or virtual, takes us one step closer to re-creating the mainframe computing model.

And that's one step closer to my being able to say I told you so.

Nicholas Petreley is a computer consultant and author in Hayward, Calif. He can be reached at
