In IT, we tend to like redundancy.
The most fundamental principle of data availability, protection and recovery is redundancy. Media can be lost or fail, data can be corrupted or accidentally deleted, and the only -- or at least the most straightforward -- way to recover from such a loss is from another copy of the data. We deploy redundancy in a multitude of forms and at every level of the computing hierarchy: Servers, networks and their components are duplicated; disks are mirrored, RAIDed and replicated; and backups are created and re-created ad infinitum.
We also like efficiency.
The most frequently cited initiatives in IT organizations today -- server virtualization, data center consolidation and so-called "green" IT -- are all about improving efficiency. Whether it's eliminating processing inefficiency, avoiding the purchase of excess bandwidth or reducing technology's environmental footprint, there is broad agreement that much can be gained by striving for efficiency.
As with so many aspects of life, the benefits and detriments of each are highly situational. We like redundancy, but only purposeful redundancy, and we like efficiency, but not at the expense of security and recoverability. So, in striving for efficiency, it is important to take steps to ensure that we do not inadvertently increase risk by eliminating redundancy.
The area of backup and recovery illustrates this point. For years, organizations have backed up and retained copy upon copy of data, taking some comfort in knowing that if a particular tape failed, the same piece of information could likely be found on another tape, along with mountains of outdated, potentially unusable data. Unfortunately, this was somewhat hit or miss: If the data had been newly created and backed up just the previous night, another copy might not exist. The redundancy in this case was more coincidental than purposeful.
Perhaps the biggest efficiency breakthrough for backup in recent years is data de-duplication technology, which provides an easy way to minimize the capacity requirements of redundant backup data. However, to some, disk-based de-duplication may be seen as removing the "security blanket" of multiple backup tapes. The truth of the matter is that purposeful redundancy must go hand in hand with efficiency. De-duplicated data stored on a virtual tape library, like all disk-based data, must be protected appropriately through VTL replication, virtual-to-physical tape duplication or some other means.
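The capacity savings described above come from storing each unique piece of data only once. A minimal sketch of the idea, assuming a simple fixed-size chunking scheme (real products use more sophisticated, often variable-size chunking, and the `ChunkStore` class and `CHUNK_SIZE` value here are hypothetical, not any vendor's API):

```python
import hashlib

CHUNK_SIZE = 4  # tiny for demonstration; real systems use chunks of several KB


class ChunkStore:
    """Toy block-level de-duplicating store: each unique chunk is kept once,
    keyed by its SHA-256 digest; a backup is just a list of digests."""

    def __init__(self):
        self.chunks = {}  # digest -> chunk bytes, stored only once

    def write(self, data: bytes) -> list:
        """Store data, returning the 'recipe' (list of digests) describing it."""
        recipe = []
        for i in range(0, len(data), CHUNK_SIZE):
            chunk = data[i:i + CHUNK_SIZE]
            digest = hashlib.sha256(chunk).hexdigest()
            self.chunks.setdefault(digest, chunk)  # duplicate chunks cost nothing
            recipe.append(digest)
        return recipe

    def read(self, recipe: list) -> bytes:
        """Reassemble the original data from its recipe."""
        return b"".join(self.chunks[d] for d in recipe)


store = ChunkStore()
# Two "backups" that share most of their content:
r1 = store.write(b"AAAABBBBCCCC")
r2 = store.write(b"AAAABBBBDDDD")
print(len(store.chunks))                   # 4 unique chunks stored, not 6
print(store.read(r1) == b"AAAABBBBCCCC")   # True: data is fully recoverable
```

Note what the sketch also makes plain: every backup recipe points at the same single copy of each chunk, which is exactly why that copy must itself be protected through replication or tape duplication.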
In fact, for environments that don't replicate and that for years were not adhering to the best practice of duplicating tapes for offsite storage, the introduction of this technology can finally make more purposeful redundancy possible, resulting in both efficiency and improved levels of protection.
Jim Damoulakis is chief technology officer of GlassHouse Technologies, a leading provider of independent storage services. He can be reached at email@example.com.