Autonomic computing, the built-in ability of a system to configure, heal, protect and optimize itself, is something we would all surely like to see in the near term. Unfortunately, vendors are unlikely to be beating down our doors anytime soon with products that can do all of what we want. On the positive side, some vendors really have been working on this technology for a while now, and we are sure to see early deliveries on some of its promise in the short term.
But what can we look forward to? Here's an overview of what is coming in the area of self-configuring and self-optimizing storage systems.
First, we need to understand that self-configuration and self-optimization are closely related issues: optimization relies on an understanding of configurations and the events that affect them, and the configuration function must constantly track every change that occurs within the system it safeguards.
Essentially, self-configuring and self-optimizing systems must do two things. First, these systems must autodiscover everything and then, in order to do their job, the systems must constantly twiddle the dials.
We use multiple vendors and many different kinds of elements in our storage systems these days. Because of that, any kind of management software is going to be pretty thin soup if it only works with devices and software from one or two vendors.
Obviously, no vendor can do this alone. So expect the tentative sharing of APIs among the major storage vendors to pick up the pace a bit. Sharing APIs is a good start; so is supporting the Storage Networking Industry Association's Bluefin standard. The goal must be to include not only everything in the data path, but everything that influences the data path as well.
Our environments change dramatically, sometimes in unexpected ways. It stands to reason, then, that our management systems should adapt to meet those changes. An autonomic storage management system should perpetually scan for events (hotspots, brownouts, failed backups, a sudden need by the Accounts Payable group to run the check-writing program, etc.), diagnose problems down to their root cause, and respond in the most efficient fashion. As the environment changes, the system should learn to anticipate the impact of those changes, and, as it gets smarter, it should intervene proactively to preempt negative events.
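The scan-diagnose-respond-learn cycle described above can be sketched in a few lines. This is a minimal illustration, not any vendor's implementation: the event names, the remedy table, and the recurrence threshold are all hypothetical stand-ins for what a real diagnostic engine would derive from a much richer model.

```python
from collections import Counter

# Hypothetical event-to-remedy table; a real product would diagnose
# root causes from topology and telemetry, not a static lookup.
REMEDIES = {
    "hotspot": "rebalance LUNs away from the busy array",
    "brownout": "throttle low-priority workloads",
    "failed_backup": "retry backup on an alternate path",
}

class AutonomicMonitor:
    """Sketch of a scan/diagnose/respond loop with naive 'learning'."""

    def __init__(self):
        self.history = Counter()  # how often each event type recurs

    def handle(self, event_type):
        # Diagnose: map the event to a root-cause remedy.
        remedy = REMEDIES.get(event_type, "escalate to an administrator")
        # Learn: track recurrence so chronic problems can be preempted
        # before they happen again (threshold of 3 is arbitrary).
        self.history[event_type] += 1
        preempt = self.history[event_type] >= 3
        return remedy, preempt
```

After the same event recurs a few times, the monitor flags it for proactive intervention rather than waiting to react again.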
These two simple rules, discover and twiddle, mean that a management tool will go out and discover the storage area network, the devices (and components on those devices) on the SAN, and become knowledgeable about all the data on the SAN.
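The "discover" half of the rule amounts to walking the SAN and building a flat inventory of every device and every component on those devices. A minimal sketch follows; the topology dictionary and the device names are invented for illustration, since real discovery would query switches and arrays over their management interfaces.

```python
# Hypothetical SAN topology: each device lists its components.
SAN = {
    "switch-01": {"type": "fc-switch", "components": ["port-0", "port-1"]},
    "array-01": {"type": "disk-array", "components": ["ctrl-a", "lun-0", "lun-1"]},
    "host-01": {"type": "server", "components": ["hba-0"]},
}

def discover(san):
    """Return a flat inventory of every device and component found."""
    inventory = []
    for name, info in san.items():
        inventory.append((name, info["type"]))
        for comp in info["components"]:
            # Record components under their parent device's name.
            inventory.append((f"{name}/{comp}", "component"))
    return inventory
```

Once everything is inventoried, the "twiddle" half can operate on a complete picture rather than a partial one.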
It will watch the flow of traffic, calculate the impending demand, and bring added resource online as needed.
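Watching traffic and projecting impending demand can be as simple as a moving average plus a headroom margin. The sketch below assumes a single throughput metric, a 20% headroom factor, and a fixed capacity figure, all of which are illustrative choices rather than anything a shipping product prescribes.

```python
def forecast_demand(samples, window=3):
    """Project near-term demand as the mean of the last `window` samples."""
    recent = samples[-window:]
    return sum(recent) / len(recent)

def needs_more_capacity(samples, current_capacity=100, headroom=1.2):
    """Bring resource online when forecast demand, plus a 20% safety
    margin, would exceed what is currently provisioned."""
    return forecast_demand(samples) * headroom > current_capacity
```

A rising sample series trips the provisioning decision before demand actually exceeds capacity, which is the whole point of forecasting rather than reacting.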
The smarter systems will understand that optimization includes optimizing in favor of critical business systems. And the very smartest systems will be sophisticated enough to realize that some systems need to be optimized before others; they will optimize all data storage and data movement, and balance all workloads, so that key business processes can take best advantage of the system.
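Optimizing in favor of critical business systems boils down to allocating a contested resource in business-priority order. Here is a minimal sketch of that idea: the workload names, priority scheme (1 = most critical), and bandwidth figures are hypothetical.

```python
def allocate(workloads, total_bandwidth):
    """Grant bandwidth to workloads in business-priority order,
    so key processes are satisfied before anything else."""
    remaining = total_bandwidth
    grants = {}
    for wl in sorted(workloads, key=lambda w: w["priority"]):
        grant = min(wl["demand"], remaining)  # never over-commit
        grants[wl["name"]] = grant
        remaining -= grant
    return grants
```

Under contention, the check-writing run gets its full share and the report job takes whatever is left, which is exactly the behavior the smartest systems should exhibit automatically.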
If this all starts to sound very Darwinian, well, perhaps it is. Management tools will identify the environment, cope with it, and constantly be on guard for a time when they will need to adapt to environmental change.
What we have listed above, as I am sure you suspect, is nothing more than the most basic set of rules for any software that seeks to manage storage configuration and optimization. If these things happen autonomically, however, without the need for intervention by any admin, many of you just might find yourselves freed up enough to get some serious planning done.