Let's assume you are in the midst of a budgeting exercise to determine next year's IT allocations. You have done your research, consulted with your operators, and come up with a rough estimate of what your storage requirements will likely be for next year. You have spoken with the various departments who will need your services, figured out what the service levels ought to be, and worked your spreadsheet as best you could.
As a last step in your preparation, you undertook the ultimate self-sacrifice: a long and perhaps excruciating evening of socializing with your boss.
You came away from that night understanding two things. First, you gained a basic insight into life: just because bosses make more money doesn't mean they will buy all the drinks. Second, you found out that, once again, the money tree will not be flowering at your company this year. As a result, next year you are going to have to cut some corners. Again.
What happens now?
We have an old expression in New England: "If you can't raise the bridge, lower the river." Or in this case, if you can't find the funds, find a way to lower (or defer) increased demand.
If raising the bridge represents raising new funds to support storage growth, then the trick of how to cope with the new expenses lies in trimming the present flow of IT dollars.
Enterprise Management Associates has looked at the IT expense structure for years. When it comes to storage, it is pretty clear that the breakdown of expense mirrors that of other areas of IT. Generally speaking, from largest to smallest, the buckets are:
* "Peopleware," the salaries and benefits of the admins, tape handlers, etc., which frequently represent the lion's share of costs.
* Software, both the purchase of new products and the maintenance paid on existing tools.
* Hardware, again covering purchase and maintenance.
Interestingly, it is often hardware - the smallest of the three cost buckets - that provides the most immediate opportunity for savings. Here's why.
Most pundits will tell you that, in terms of storage capacity, 40% utilization is the norm. At EMA, we find that a more useful rule of thumb has a bit more granularity: 40% usage in Windows NT environments, 60% in open systems, and often as high as 75% to 80% on storage connected to mainframes.
Alas, it's a rare IT manager who has a handle on this.
One obvious consequence of not knowing how much free disk you have is that disk and subsystem purchases are often built into each quarterly budget irrespective of actual need. Assuming demand for 20% more capacity each quarter, you may be incrementally spending your way into the poorhouse without knowing whether the need is real or imagined. Such spending is justified by the theory that it is better to have excess capacity (however much that may be) than to run out of disk space.
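It is worth pausing on what "20% a quarter" actually compounds to. A few lines of Python make the point; the 10 TB starting capacity here is purely an illustrative assumption:

```python
# Hypothetical illustration: quarterly "just in case" disk purchases,
# assuming demand is pegged at 20% growth per quarter and compounds.
base_tb = 10.0   # assumed starting capacity in TB (illustrative only)
growth = 0.20    # assumed 20% capacity growth per quarter

capacity = base_tb
for quarter in range(1, 5):
    capacity *= 1 + growth
    print(f"Q{quarter}: {capacity:.1f} TB provisioned")

# 1.2 ** 4 is roughly 2.07, so after four quarters the shop has bought
# its way to more than double its original capacity -- whether or not
# real demand ever materialized.
```

In other words, a shop that budgets on autopilot at 20% per quarter more than doubles its installed disk in a single year.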
It seems certain then that understanding how much storage headroom your shop actually has is a quick way to short-circuit those planned quarterly disk purchases. And if funds have been pre-allocated for disk expenditures, you may have the option of redirecting those funds so that they can cauterize another bleeding wound. But how to measure and fix your storage optimization level?
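Measuring is the easy half, and you can start with nothing fancier than a script. Here is a minimal sketch using Python's standard library; the mount points listed are assumptions, so substitute the volumes your shop actually cares about:

```python
import shutil

# Minimal sketch: report per-volume utilization on a Unix-like host.
# The mount-point list below is an assumption -- use your own volumes.
for mount in ("/", "/var", "/home"):
    try:
        usage = shutil.disk_usage(mount)
    except FileNotFoundError:
        continue  # skip mount points that don't exist on this host
    pct = 100 * usage.used / usage.total
    print(f"{mount}: {pct:.0f}% used of {usage.total / 1e9:.0f} GB")
```

Run across the server farm and rolled into a spreadsheet, even output this crude gives you the utilization numbers the pundits only estimate.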
The trick of course will be to identify the disks that are not being used optimally, and to bring their usage up to the level where it needs to be. This may mean any number of things, but typically the route to optimal usage will include at least several of the following:
* Centralizing data that is accessed by multiple users and may be duplicated 10 (or 10,000) times.
* Eliminating files that shouldn't have been on corporate systems in the first place (JPGs, personal stuff, and of course, the big "et cetera").
* Migrating older files to near-line, offline or other less costly environments.
* Defragmenting data and paging disks.
* And in the most sophisticated shops, reallocating data to various disks based on both the disks' performance characteristics and on the value of the data to the company.
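The first and third items on that list lend themselves to automation. The sketch below surveys a directory tree for byte-for-byte duplicates (candidates for centralizing) and for files untouched in a year (candidates for migration to near-line storage); the data root and the one-year cutoff are assumptions to tune for your shop:

```python
import hashlib
import os
import time
from collections import defaultdict

def survey(root, cutoff_days=365):
    """Return (duplicate_groups, stale_paths) for files under root.

    Duplicates are detected by SHA-256 of file contents; "stale" means
    not modified within cutoff_days. Both thresholds are illustrative
    assumptions -- tune them for your environment.
    """
    cutoff = time.time() - cutoff_days * 86400
    by_digest = defaultdict(list)
    stale = []
    for dirpath, _, filenames in os.walk(root):
        for name in filenames:
            path = os.path.join(dirpath, name)
            try:
                st = os.stat(path)
                with open(path, "rb") as f:
                    digest = hashlib.sha256(f.read()).hexdigest()
            except OSError:
                continue  # skip unreadable or vanished files
            if st.st_mtime < cutoff:
                stale.append(path)
            by_digest[digest].append(path)
    dupes = [paths for paths in by_digest.values() if len(paths) > 1]
    return dupes, stale

if __name__ == "__main__":
    dupes, stale = survey("/data")  # hypothetical data root
    for paths in dupes:
        print(f"duplicated {len(paths)}x: {paths[0]}")
    print(f"{len(stale)} files untouched for a year or more")
```

This is a first pass, not a product: it reads whole files to hash them and makes no judgment about which copy to keep. But even a crude inventory like this turns "we probably have duplicates" into a list you can act on.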
The trick is first to identify these space eaters, and then to efficiently manage the implementation of the fixes. But particularly if you are a shop storing lots of Windows data (and thus may have 60% of your capacity sitting unused), it's a trick worth learning.