Storage: More please, sir?

Among the inadvertent casualties of the information age are the managers who must deal with the deluge of information produced by modern information systems. For insurer Norwich Union Australia, a $16.8 billion subsidiary of $572 billion UK investment and insurance group Aviva, that challenge has become a significant strategic concern as the company expands its 200,000-strong customer and product base.

A recent upgrade to the company’s storage area network (SAN) has doubled its storage capacity, an expansion that “by our projections is going to cover our data growth for at least the next two years”, says Martin Meldrum, storage infrastructure technician with Norwich Union. “Eighteen months ago, it was perfect; we had heaps of SAN disk space and ports left. But our storage requirements have grown 200 per cent since then, and we needed to sit down and realise what we were going to do. This has been one of the bigger projects we’ve ever done.”

New SAN technologies have provided a much-needed reprieve for Norwich Union, which like most companies is facing constant pressure to expand both its storage base and the effectiveness of its storage management. Yet SANs, which have emerged in recent years as the solution to the growing corporate demand for data, are still expensive and complicated to set up. They incorporate several key elements, including fibre-optic Fibre Channel cabling; redundant server, disk and tape library connections; SAN switches that direct and apportion SAN resources; and other emerging components.

It’s a far cry from the early days of computing, when centralised mainframe and mid-range disk arrays were the only form of storage available to end users and applications. Recognising the inherent limitations on that disk capacity, mainframe administrators zealously monitored end-user consumption of the resource and deleted or archived old data with impunity.

The client/server revolution changed all that, with data distributed on hundreds or thousands of relatively inaccessible desktops across the organisation. Many storage managers may have tried to keep data centralised so it could be easily backed up and managed, but inevitable data leakage — the effect of users’ often less-than-stellar data management habits — made client/server models a disaster for data integrity.

Some applications provided a way of pulling data off remote desktops, servers and occasionally connected notebooks, but the global cultural shift towards mobile computing has exacerbated the problem of keeping data current, a problem that may never be completely eliminated.

Yet physically keeping a handle on all of a company’s data is only one challenge facing IT managers; even more pressing is the sheer volume of data under management. Phil Sargeant, Gartner’s research director for servers and storage, believes storage requirements will grow at around 75 per cent annually for the next three to four years, a trend that looks set to keep storage near the top of IT executives’ agendas.

“Simply the amount of data that we’re generating, combined with some of the newer data types like video, audio and the research data being generated in life sciences, are all big drivers for storage,” Sargeant said. “Growth is creating a lot of management headaches because [companies] can’t afford to add storage administration personnel at the same rate they’re growing their capacity.”
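
To put that projection in perspective, the following minimal sketch, written in Python purely for illustration, compounds Sargeant’s 75 per cent figure over a few years; the starting capacity and function name are assumptions, not figures published by Gartner or Norwich Union.

```python
# Illustrative sketch only: compounding Gartner's projected ~75 per cent annual
# growth in storage requirements. The starting capacity is a made-up figure.

def projected_capacity(current_tb: float, annual_growth: float, years: int) -> float:
    """Capacity needed after the given number of years of compound growth."""
    return current_tb * (1 + annual_growth) ** years

if __name__ == "__main__":
    start_tb = 10.0  # hypothetical current capacity, in terabytes
    for years in range(1, 5):
        needed = projected_capacity(start_tb, 0.75, years)
        print(f"Year {years}: ~{needed:.1f} TB required")
    # Roughly a fivefold increase by year three and better than ninefold by
    # year four, which is why capacity planning dominates storage budgets.
```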

The very structure of a SAN allows backups to be completed much faster, since backup traffic doesn’t have to compete with enterprise applications for limited network bandwidth. It’s the open-systems equivalent of the mainframe, which long ago had ESCON-attached storage all to itself, except that the switched SAN mediates the conversations between servers and storage elements. SANs haven’t been all wine and roses, however. With proprietary technology still hindering interoperability and management tools still adding features to match their mainframe counterparts, users often find that building and administering a SAN requires a whole new set of skills, much as conventional data networking did a decade ago.

Hierarchical storage management (HSM), a concept borrowed from the mainframe world that automatically migrates rarely accessed data to cheaper media, is now viable on open-systems platforms; a simplified sketch of the policy idea appears below. Storage appliances, meanwhile, allow administrators to add more storage with little more effort than plugging in the box.
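
As a rough illustration of the HSM idea rather than any vendor’s actual product, the sketch below migrates files that have not been accessed for a configurable number of days from a fast primary tier to a cheaper near-line tier; the directory paths and the 90-day threshold are hypothetical.

```python
# Minimal HSM-style migration sketch. The tier paths and the 90-day threshold
# are hypothetical; commercial HSM products apply such policies automatically
# and transparently recall migrated data when it is next requested.
import shutil
import time
from pathlib import Path

PRIMARY_TIER = Path("/data/primary")    # fast, expensive disk (assumed path)
NEARLINE_TIER = Path("/data/nearline")  # slow, cheap near-line disk (assumed path)
AGE_THRESHOLD_DAYS = 90                 # migrate files untouched for this long

def migrate_cold_files() -> None:
    """Move files not accessed within the threshold to the near-line tier."""
    cutoff = time.time() - AGE_THRESHOLD_DAYS * 86400
    for path in PRIMARY_TIER.rglob("*"):
        if path.is_file() and path.stat().st_atime < cutoff:
            target = NEARLINE_TIER / path.relative_to(PRIMARY_TIER)
            target.parent.mkdir(parents=True, exist_ok=True)
            shutil.move(str(path), str(target))

if __name__ == "__main__":
    migrate_cold_files()
```

A commercial HSM system would also leave a small stub behind so applications can still open a migrated file transparently, but the policy logic is essentially this simple.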

And an emerging genre of near-line storage devices, which pack huge numbers of slow, inexpensive IDE hard drives, has provided a faster backup target than tape alone, which is struggling to keep up with databases now commonly measured in terabytes.

The constant race between applications’ data demands and storage systems’ ability to meet them is by no means over. Increasingly sophisticated applications involve more components and more data than ever, and growing use of business applications such as customer relationship management, service utilisation reporting and supply chain management is demanding ever larger collections of historical transaction data.

Fortunately for users, the cost of storage is not so much an issue as the time spent managing it: although IDC reported the amount of storage space shipped worldwide grew by 49 per cent in the first quarter of this year (to 175.6 petabytes), vendors are actually watching revenues from those disks decrease as the cost per gigabyte continues to plummet.

Software, which is critical to storage management and can differentiate one vendor from another, will be the key to recovering that revenue, a shift that should see customers given access to ever more intuitive and self-managing storage in the future.
