The modern enterprise is an amalgam of solutions forged by necessity, and the demands of meeting a company's day-to-day requirements often obscure priorities. When it comes to storage, users' only concerns are getting more capacity than they have and making it faster. It falls to IT managers to balance these immediate needs against larger goals that garner attention only in times of crisis: keeping costs down, increasing manageability, and improving recoverability. Obtaining funding and manpower to meet the last of these objectives was a tough sell until last September.
In general, if any real planning was done, IT leaders focused on disaster recovery. If your IS staff made backups every few days and encouraged users to copy their critical files to removable media, you had a plan. As long as tapes purported to contain the data that might be wiped out by a disk crash or an errant application, no additional planning or expenditures were needed.
But disaster recovery strategies generally anticipated only minor disruptions. The attacks on the World Trade Center last year changed our thinking in a way that countless earthquakes, tornados, fires, and floods have not; 72 percent of respondents to the InfoWorld Networked Storage Survey report that the events of Sept. 11 have affected their business continuity planning. Overnight, it became clear that there are some disasters from which recovery, in the accepted sense, is impossible.
As grim as the task is, companies can either plan for the worst or accept that a single catastrophe could take the business under. That is the crux of business continuity planning, and it's a vital part of mapping out a long-term storage strategy.
Respondents to InfoWorld's March 2002 storage survey expect to increase total storage capacity by 28 percent during the next 12 months. The usual reasons for needing more space -- more systems, more staff, more applications -- aren't as prevalent now. The rate of capacity growth is high despite the slow economy.
One factor driving this need is the shift away from offline storage. The Networked Storage Survey shows that CTOs are still investing in backup tapes and optical discs as means of protecting their company's data. But it also shows the rapid rise of secondary online storage: 58 percent of our respondents report that they use some form of online backup; 45 percent use storage replication or are planning to implement it; and 31 percent have invested in some type of off-site replication.
The planning paradigm shift from disaster recovery to business continuity is driving the push toward online backup. Also, IT leaders have a growing sense that the protection offered by offline solutions is inadequate.
Years ago, business data processing was largely an adaptation of the paper model, and paper records were still used as backup. Now, business operates in real time, and increasingly, bits on disk are the only record a company has of its transactions. The cost of lost data, even if it's just the few hours of transactions recorded since the last incremental backup, is too great to rely on offline backup solutions.
The tempo of business is so revved up that there is very little data that users can afford to be without, even temporarily. Ten years ago, a company could lose every computer in its building and still take calls, process orders, and post payments. Now, the instant that network storage goes down, business stops. Like kids who can't do long division without calculators, employees will no longer resort to pen and paper when their systems are down. Customers are waved off and every time-sensitive operation falls behind schedule. For some companies, the cost of downtime, even if it is legitimately invested in recovering data, is incalculable.
Another factor that complicates storage management is "criticality creep." Applications mine historical data looking for customer trends, red flags, anything that could give a company an edge. The further back that history goes, the more useful the analysis will be, so companies have no reason to apply an expiration date to their stored information. If data is gathered, it will probably live forever. Users no longer regard e-mail as an adjunct to the telephone and the intracompany mail service; it has become a business's lifeblood. No one voluntarily archives their messages to offline media any more. Searches through in-boxes containing tens of thousands of messages have become commonplace. Users who lose their in-boxes could lose travel plans, appointment calendars, contact lists, product information, and more -- in essence, all the information they need to do their jobs.
Online backup, implemented through real-time replication, is the only way to survive a catastrophic failure without losing any data. IT leaders have increased their use of storage replication software because it will automatically switch users and applications to secondary storage when the primary pool goes down.
If a business operates around the clock, automatic fail-over is essential; the time it takes to page an administrator for a manual recovery is too long. Tellingly, when asked which type of backup tape they prefer, 49 percent of survey respondents say they prefer disk, second only to DLT (digital linear tape) at 51 percent.
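The automatic fail-over described above reduces to a simple idea: continuously health-check the primary storage pool and redirect users and applications to the replica the instant a check fails. A minimal sketch in Python illustrates the logic; the class and function names are hypothetical and stand in for whatever a vendor's replication software actually provides:

```python
class StoragePool:
    """Toy stand-in for a storage endpoint; 'healthy' flips on failure."""
    def __init__(self, name):
        self.name = name
        self.healthy = True

    def ping(self):
        # Real software would issue an I/O probe or heartbeat here.
        return self.healthy

def select_active(primary, replica):
    """Serve I/O from the primary pool; fall back to the replica
    only when the primary stops answering health checks."""
    return primary if primary.ping() else replica

primary = StoragePool("primary-san")
replica = StoragePool("replica-site")

assert select_active(primary, replica) is primary
primary.healthy = False          # simulate the primary pool going down
assert select_active(primary, replica) is replica
```

The point of the sketch is that no human is in the loop: the redirect happens on the next health check, not after a page to an administrator.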
Old storage habits are hard to break, even if new technology and new approaches offer tangible benefits. The almost-universal use of tape backup is encouraging, but it's somewhat disheartening to learn that, according to the Networked Storage Survey, 47 percent of all tape drives are stand-alone models. That means there are still too many servers with stacks of tape cartridges balanced on top, a poor substitute for real data protection.
The survey also shows that although business continuity planning is on the rise -- 37 percent of survey respondents have plans in place and 52 percent consider it a priority -- the majority of companies still rely on disaster recovery strategies instead. And SANs (storage area networks), for all their resilience and scalability, remain a minority purchase: Only 20 percent of surveyed companies use SANs today, whereas a surprising 36 percent say SANs do not figure in their plans at all.
What's keeping SANs down? Survey respondents cite the technology's high cost as well as their own lack of desire to consolidate storage. That latter point is an important one and reflects a larger change in attitude. Not long ago, consolidation was seen as an advantage, but continuity planning favors distributed resources. Outside of a SAN, replication can be achieved by deploying multiple redundant servers that use DAS (direct attached storage) or NAS (network attached storage) and share data over high-speed links. That seems to be the trend: 59 percent of survey respondents say they have no plans to consolidate their servers.
Despite the fact that vendors are inventing ever more ingenious and expensive ways to manage storage, corporate IT prefers to apply the technology it knows -- SCSI, Ethernet, tape, and optical -- to meet growing and changing requirements. At least for now, achieving business continuity goals means buying more disks, servers, and NAS, as well as investing in off-site storage services.
Stretching the reach of storage networks

Traditionally, SANs have been limited to campus or metropolitan networks, due in part to the restrictions imposed by the underlying technology. After all, FC's (Fibre Channel's) native 6-mile distance limitation often doesn't even allow for a connection from a city core to a suburban datacenter. Adding DWDM (dense wavelength division multiplexing) to FC pushes it out to 60 miles, which may be long enough for business continuity and disaster-recovery purposes, but falls far short of interurban requirements, especially in North America.
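Distance matters for more than cabling: in synchronous replication, every write must round-trip to the remote site before it is acknowledged. A back-of-envelope calculation shows how latency grows with reach, using the rough rule that light in fiber covers about 200,000 km per second; this ignores switch, HBA, and protocol overhead, so real delays are higher:

```python
# Light in fiber travels at roughly 200,000 km/s (about 2/3 of c).
FIBER_KM_PER_SEC = 200_000

def round_trip_ms(distance_km):
    """Round-trip propagation delay in milliseconds, fiber only."""
    return 2 * distance_km / FIBER_KM_PER_SEC * 1000

native_fc = round_trip_ms(10)    # ~6 miles, FC's native reach
dwdm_fc = round_trip_ms(100)     # ~60 miles with DWDM
print(f"{native_fc:.2f} ms vs {dwdm_fc:.2f} ms")  # prints 0.10 ms vs 1.00 ms
```

At metropolitan distances the added millisecond per write is tolerable; stretch the same synchronous scheme across a continent and the delay grows by an order of magnitude, which is one reason interurban storage networking remains out of reach.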
Nevertheless, several factors are coming together to make continental and even global storage networks, if not yet a reality, a tantalizing promise of how the networking and storage industries might converge during the next decade.
First and foremost is the availability of increasingly faster data networks, represented best by 10-Gigabit Ethernet technology on the LAN side of operations and OC-192 and faster optical circuits for WAN connections. The sheer volume of traffic contained on future SANs -- respondents to the InfoWorld Networked Storage Survey indicate that a brave few are already handling several hundred terabytes -- dictates that data networks will have to be capable of speeds that engineers are just starting to explore.
Moving up the protocol stack, IP-based storage clearly represents the future, due in no small part to the already substantial number of network engineers familiar with its foibles. The question is which IP it will be. It's likely that IPv6 will dominate storage networking long before it can overtake Version 4 in general-purpose data networks.
From a hardware perspective, the trick for vendors will be lowering the cost of SAN equipment, which will probably take five years to 10 years as standards evolve and once-big-ticket specialized gear becomes a commodity. The chore of managing even commoditized gear is another reason why IP and those familiar with its care and feeding will be the future of storage networking. From a reverse angle, it shows why people who have concentrated on storage are well-advised to polish their network management skills.
Some vendors such as Cisco Systems Inc. and EMC Corp. are obvious leaders in future global SAN efforts, but it's unclear who else will survive the shakeouts expected in the networking industry in the coming decade. Given the limited appeal of desktop gigabit networking to most CTOs in a time of constrained IT budgets, storage is likely to be one of few growth areas for network vendors. Although cost is the primary concern of survey respondents implementing SANs, it hasn't scared them off.
Expanding the reach of storage networks from crosstown to worldwide won't be easy, but in one sense, the groundwork is already done -- a lot of dark fiber that was buried in the past few years is begging for customers. Pulling the other pieces together just takes time.
-- P.J. Connolly.
THE BOTTOM LINE
Business continuity planning
Executive Summary: As the value of data and the cost of downtime rise, CTOs are devising business continuity plans that include online backups and off-site storage. Tape and optical still play a vital role in data protection, but nothing covers all potential failures as well as real-time replication.
Test Center Perspective: Business continuity should replace disaster recovery as an IT objective for all but the most cash-strapped organizations. Distributed storage does not obviate the need for a consistent, centralized plan for preserving data and services in the event of a catastrophic loss of facilities.