Information life-cycle management is grabbing the attention of IT executives, because the discipline promises to reduce the cost of many IT operations through automation and deployment of policy-based IT systems and tools.
Stories by Mike Karp
Many things can spur a company to kick off an ILM project, but two reasons lead all the rest: a desire to implement storage tiers to reduce costs and the need to align corporate IT practices with regulatory compliance demands.
Last week, we introduced the players on either side of the Aperi battle. Today and next time we'll look at content - what there is of it, anyway.
There's trouble in Standardsville, pardner, and it looks like there's a battle a-brewin'. So strap on yer six-shooters, mosey on down to the corral and saddle up them broncs. We got some hard ridin' to do. The Aperi gang may be comin' to town and some of the townsfolk are up in arms.
Some readers may remember efforts during the 1990s by Compaq, HP and IBM to deliver a high-speed serial connection technology called Future I/O. Some may also recall a competing technology -- Next Generation I/O (NGIO) -- from a group consisting of Intel, Microsoft and Sun. Eventually the two camps merged their efforts to work on what both sides saw as the next generation of technology for connecting servers and storage.
I usually enjoy the responses I get to my columns, although sometimes I get taken to task rather abruptly. One example of this happened a few weeks ago when I wrote about the battle going on between iSCSI and Fibre Channel for the hearts, minds and pocketbooks of the storage-area network buying public. I thought I had covered all the angles, but then the e-mail arrived and I got smacked across the face with the cold codfish of reality.
Predicting the future is perilous business indeed. However, as an industry analyst one of the sad facts of life that I have had to face is the reality that it is awfully hard to find clients willing to pay me to predict the past. This is an unfortunate situation, but it is, alas, the way of things.
Sun's acquisition of StorageTek in June is still in its early days, and by no means has the dust begun to settle. And yet, a few things are becoming clear.
Hurricane Katrina's devastation has hit the states of Louisiana, Mississippi and Alabama hard, and disaster relief efforts are at last underway. We all wish the best for our colleagues and their families in the affected areas.
Last time we talked about new, very large data files that are the result of improvements in some important technologies, and I raised a concern that we will have to contend with some new issues when it comes time to manage such large objects in the data center. More on this today.
My friends in Seattle tell me there are two types of weather in that city, and that there is an easy way to tell which is which: if you can't see Mount Rainier, it is raining; if you can see Mount Rainier, it's about to rain.
Blade servers are handy things: medium and large frames into which vendors can stuff numerous blades. Within such a server, each blade either performs as an independent server or works in concert with the other blades on some particular task. Blades can be designed for a wide variety of disparate functions, including processing, storage and providing power.
"Greed is good," said Gordon Gekko in the movie "Wall Street." Unless, of course, you get caught.
What was your introduction to e-mail? Mine occurred in late 1981 when we installed an experimental system (known as "x.mail") at Prime Computer in Massachusetts. In those days e-mail was mostly a "gee whiz" technology, and was as proprietary as could be (remember, this was pre-Internet).
In San Jose last week, Bell Micro, Fujitsu, LSI Logic and Supermicro got together to hold a coming-out party for Serial Attached SCSI, or SAS, the newest incarnation of the SCSI device interface.