Five steps to information lifecycle management

HP has been talking about information lifecycle management for a while, and a skeleton of its strategy has been in the public view for about a year. Last week, the company put a bit more flesh on the bones.

Now, I know some of you might be wondering why an ILM strategy takes so long to develop. We have, after all, been talking about ILM for two years now, and at first blush it looks much like hierarchical storage management (HSM), which has been with us for about 20 years.

As many of you understand, however, HSM is not at all the same thing as ILM; it represents just one subset of the technologies that make up ILM. So even though HSM has been kicking around since the late 1980s, courtesy of IBM's mainframe (now z/OS) group, the ability to demote mainframe data from disk to tape in a manageable way represents, at best, only a beginning.

To be sure, HSM was a valuable first step. However, it was only that, and it never addressed a number of major issues that have become increasingly critical as data has piled up in the data center. Particularly important are concerns such as how to get data back online rapidly, how to provide multiple tiers of both spinning media and the services that manage them, and how to extend the same level of data management to non-mainframe IT environments.

This last is a particularly significant point. Given the number of enterprise open systems and Windows sites these days, not to mention mixed environments, vendors able to offer increased economies of management to all levels of IT installations will have a huge market to serve. HP intends to address most of this market.

The company sees ILM as a five-step, evolutionary process on the road to what it likes to call the adaptive enterprise ("adaptive enterprise" is HP's term for what its competitors prefer to call "utility" or "on-demand" computing, though HP often uses those terms as well).

In step No. 1, data is discovered and classified. Data is inventoried and typed, and content is assigned metadata based on IT policies.
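To make the idea concrete, here is a minimal sketch in Python of what policy-driven discovery and classification might look like. The file-type policy table, retention periods, and metadata fields are purely hypothetical illustrations; HP has not published an API for this step.

```python
import os
from datetime import datetime, timezone

# Hypothetical IT policies keyed by file extension.
POLICIES = {
    ".db":  {"class": "structured",   "retention_days": 2555},  # ~7 years
    ".log": {"class": "operational",  "retention_days": 90},
    ".doc": {"class": "business-doc", "retention_days": 1825},  # ~5 years
}
DEFAULT_POLICY = {"class": "unclassified", "retention_days": 365}

def classify(root):
    """Walk a directory tree, typing each file and attaching metadata."""
    inventory = []
    for dirpath, _dirnames, filenames in os.walk(root):
        for name in filenames:
            path = os.path.join(dirpath, name)
            policy = POLICIES.get(os.path.splitext(name)[1].lower(), DEFAULT_POLICY)
            stat = os.stat(path)
            inventory.append({
                "path": path,
                "class": policy["class"],
                "retention_days": policy["retention_days"],
                "size_bytes": stat.st_size,
                "last_access": datetime.fromtimestamp(stat.st_atime, timezone.utc),
            })
    return inventory
```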

Step No. 2 involves assigning storage to tiers, which, broadly speaking, are defined as online, active archive, and offline tape archive.
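As a hypothetical illustration, tier assignment could be as simple as mapping a record's class and idle time onto those three broad tiers. The 30-day and one-year thresholds below are assumptions for the sketch, not HP-defined values.

```python
# Illustrative thresholds only; nothing here is HP-defined.
def assign_tier(record, days_idle):
    """Pick one of the three broad tiers from class and idle time."""
    if record["class"] == "structured" or days_idle < 30:
        return "online"          # hot data stays on high-end disk
    if days_idle < 365:
        return "active-archive"  # still on spinning media, but cheaper
    return "offline-tape"        # long-term retention only
```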

During step No. 3, policy-based data migration occurs. Data automatically moves off high-end storage when its lifecycle stage warrants such movement. Quality-of-service improvements follow as high-end storage becomes more readily available, and it is at this stage that cost reductions begin to appear.
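A migration pass over the inventory from the step No. 1 sketch might look like the following. Here migrate() is a hypothetical stand-in for whatever data mover a real ILM product supplies, and the demotion thresholds are again invented for illustration.

```python
from datetime import datetime, timezone

# Illustrative demotion policy: (idle threshold in days, target tier),
# checked from longest idle time down.
DEMOTION_RULES = [(365, "offline-tape"), (30, "active-archive")]

def migrate(record, target_tier):
    """Hypothetical stand-in for a real ILM product's data mover."""
    print(f"moving {record['path']} -> {target_tier}")

def run_migration(inventory, current_tiers):
    """Demote any record whose idle time has crossed a policy threshold."""
    now = datetime.now(timezone.utc)
    for record in inventory:
        days_idle = (now - record["last_access"]).days
        target = "online"  # default: stay on high-end disk
        for threshold, tier in DEMOTION_RULES:
            if days_idle >= threshold:
                target = tier
                break
        if target != current_tiers.get(record["path"], "online"):
            migrate(record, target)  # frees high-end storage, cutting cost
            current_tiers[record["path"]] = target
```

The point of the pass is simply that data moves because policy says so, not because an administrator noticed a full disk.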

In step No. 4, the information is made continuously available. Content is indexed and searchable, with backed-up content readily accessible from virtual tape.
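In the same hypothetical vein, continuous availability implies some kind of index over the archive. This toy version indexes only keywords from file paths; a real product would index full content.

```python
from collections import defaultdict

def build_index(inventory):
    """Map each path keyword to the records that mention it."""
    index = defaultdict(list)
    for record in inventory:
        tokens = record["path"].lower().replace("/", " ").replace(".", " ").split()
        for token in tokens:
            index[token].append(record)
    return index

def search(index, keyword):
    """Return matching records, regardless of which tier holds them."""
    return index.get(keyword.lower(), [])
```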

Step No. 5 sees storage management becoming completely application-aware, tuned to the particular needs of whatever applications are in use.
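Application awareness might be sketched as policies keyed off the owning application rather than the individual file. The profile names and values below are invented for illustration.

```python
# Hypothetical per-application storage profiles.
APP_PROFILES = {
    "oltp-database":  {"tier": "online",         "max_latency_ms": 5,    "replicas": 2},
    "email-archive":  {"tier": "active-archive", "max_latency_ms": 500,  "replicas": 1},
    "nightly-backup": {"tier": "offline-tape",   "max_latency_ms": None, "replicas": 1},
}

def provision(app_name):
    """Look storage requirements up from the owning application, not the file."""
    try:
        return APP_PROFILES[app_name]
    except KeyError:
        raise ValueError(f"no storage profile for application {app_name!r}") from None
```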

These are the fundamentals. Next time, we'll look at how HP products and services are likely to play in the ILM space.
