Have you read "The Expanding Digital Universe"? It's a study, commissioned by EMC and put together by IDC, IDG's market research arm, on the amount of digital data that we can expect to see in the next few years.
I don't know whether those global predictions will prove correct year after year, give or take an exabyte, and frankly it doesn't matter. What matters is how much data your company is going to create and how you are going to store and manage it.
We know from past experience that blindly purchasing more capacity just pushes the problem back without attempting to solve it. Sure, you can keep buying more storage arrays if the budget allows, but at some point you will meet an insurmountable wall, such as running out of floor space in your datacenter or hitting the limits of the electrical and cooling systems.
If -- or rather, when -- you reach one of those walls, you face spending millions of dollars to expand or move the computer room, before you can even begin to add more capacity.
What's the alternative? Unfortunately, technology is not keeping up with our capacity demands. For our long-term, nontransactional data, we desperately need a storage medium that can perform faster than tapes or optical disks and is less energy- and space-hungry than disk drives.
Could that be holographic storage? Perhaps. But until a technological revolution happens, our best response to the data deluge is to intelligently categorize our data and move the bulk onto less expensive, high-density arrays -- even tapes when appropriate -- while keeping only a minimum amount on the power- and space-hungry first-tier devices.
You will hear this topic mentioned quite often in the coming year, but what got me started this week was Quantum's announcement of StorNext 3.0, a new version of its powerful data mover solution. It puts a novel twist or two on managing data across different storage tiers.
For example, StorNext offers a powerful parallel file system with agents that allow client access from any major OS. Traditionally, StorNext was deployed when companies required top performance for accessing and sharing large files.
I am intrigued by the fact that the new version adds support for less demanding applications over plain Ethernet, which means that the same files can serve both high-performance computing and ordinary users' requests from the same environment. However, those new clients connect over different network links and install a different agent that, according to Quantum, performs faster than CIFS or NFS alternatives. StorNext also has a robust system of policies to automatically move data across different tiers and enforce criteria such as redundancy.
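To make the tiering idea concrete, here is a minimal sketch of how policy-driven placement works in general. The tier names, thresholds, and rule syntax below are purely illustrative assumptions, not StorNext's actual policy language: each file is matched against an ordered list of rules, and the first match decides where it lives.

```python
from dataclasses import dataclass

@dataclass
class FileInfo:
    path: str
    size_bytes: int
    days_since_access: int

# Hypothetical policy rules, evaluated in order; names and
# thresholds are made up for illustration only.
POLICIES = [
    ("tape",      lambda f: f.days_since_access > 365),
    ("sata_tier", lambda f: f.days_since_access > 30
                            or f.size_bytes > 10 * 2**30),
    ("fast_tier", lambda f: True),  # default: first-tier disk
]

def assign_tier(f: FileInfo) -> str:
    """Return the first tier whose rule matches the file."""
    for tier, rule in POLICIES:
        if rule(f):
            return tier
    return "fast_tier"

print(assign_tier(FileInfo("/proj/frame0001.exr", 2**20, 400)))       # tape
print(assign_tier(FileInfo("/proj/frame0002.exr", 20 * 2**30, 5)))    # sata_tier
print(assign_tier(FileInfo("/proj/scene.mb", 2**20, 1)))              # fast_tier
```

The point of such a system is that data migrates off the expensive first tier automatically, without administrators hand-picking files.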
Another unique aspect is that StorNext 3.0 takes advantage of deduplication in moving data across different tiers, which translates into less space used because redundant chunks of information are stored only once.
Quantum's deduplication (a technology brought in when Quantum acquired ADIC) and Data Domain's solution share a more flexible and more efficient approach with variable-length chunks, but the two companies share even more than that. "According to the terms of the agreement [that we signed], each company has a license to the other's patents (deduplication and other nontape, data storage technologies) on a nonexclusive, worldwide basis," says Sean Lamb, manager of public relations for Quantum.
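For readers unfamiliar with the mechanics, here is a toy sketch of variable-length-chunk deduplication under stated assumptions: boundaries are chosen by a simple rolling checksum of the content (real products use far stronger rolling hashes, such as Rabin fingerprints), and each unique chunk is stored once, keyed by its SHA-256 digest.

```python
import hashlib

def chunk_boundaries(data: bytes, window: int = 16, mask: int = 0x3F) -> list:
    """Split data into variable-length chunks: declare a boundary
    wherever the low bits of a sliding-window byte sum are zero.
    A toy content-defined chunker, for illustration only."""
    chunks, start, rolling = [], 0, 0
    for i, b in enumerate(data):
        rolling += b - (data[i - window] if i >= window else 0)
        if i - start >= window and (rolling & mask) == 0:
            chunks.append(data[start:i + 1])
            start = i + 1
    if start < len(data):
        chunks.append(data[start:])
    return chunks

def dedup_store(chunks):
    """Store each unique chunk once, keyed by its hash; the
    'recipe' of hashes is enough to rebuild the original data."""
    store, recipe = {}, []
    for c in chunks:
        h = hashlib.sha256(c).hexdigest()
        store.setdefault(h, c)
        recipe.append(h)
    return store, recipe
```

The advantage over fixed-size blocks is that an insertion near the start of a file shifts chunk boundaries only locally, so most downstream chunks keep their hashes and are still recognized as duplicates.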
I don't know all the details of that agreement, but this week Data Domain took the first steps toward an IPO and has agreed to "compensate" Quantum with 390,000 shares of its common stock, according to Lamb.
I'll leave it at that, at least until we know more, but the latest news from these two vendors proves that data deduplication is one of the weapons to keep in mind when fighting the data deluge that awaits us. Data Domain and Quantum did not overlook that technology, and neither should you.