Mother Nature has wrought havoc in the Gulf and many of us were once again faced personally with worries over friends and family in harm's way and professionally with concerns about organizations facing uncertainty over their ability to continue or even recover their businesses. In a timely coincidence, I happened to be attending a disaster recovery (DR) conference on the west coast, and, appropriately, Hurricane Ike occupied center stage for much of the discussion. A number of would-be participants never made it to the conference as they were attending to more pressing matters back home.
Stories by Jim Damoulakis
Evidence is mounting of growing cynicism about green initiatives in the IT infrastructure space. We may be reaching the point where vendor hype has hit saturation and is beginning to meet customer resistance. While there is genuine concern about data center power consumption, particularly with regard to accommodating increasingly dense technology footprints, the larger concern for most, particularly in the current climate, is controlling costs.
Of the assortment of technologies swarming around the storage and data protection space these days, one that can be counted on to garner both lots of interest and lots of questions among users is deduplication. The interest is understandable since the potential value proposition, in terms of reduction of required storage capacity, is at least conceptually on a par with the ROI of server virtualization. The win-win proposition of providing better services (e.g. disk-based recovery) while reducing costs is undeniably attractive.
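The capacity savings behind that value proposition come from storing each unique chunk of data only once. A minimal sketch of content-addressed deduplication in Python (the `dedup_store` function and sample chunks are illustrative only, not any product's implementation):

```python
# Illustrative sketch: content-addressed deduplication.
# Identical chunks are stored once, keyed by their hash; repeated
# backups of unchanged data then cost almost no additional capacity.
import hashlib

def dedup_store(chunks):
    """Store chunks keyed by SHA-256 digest; return the store and the
    per-chunk reference list needed to reconstruct the original stream."""
    store = {}
    refs = []
    for chunk in chunks:
        digest = hashlib.sha256(chunk).hexdigest()
        store.setdefault(digest, chunk)  # keep only the first copy
        refs.append(digest)
    return store, refs

# Three logical chunks, two of them identical:
data = [b"backup-block-A", b"backup-block-B", b"backup-block-A"]
store, refs = dedup_store(data)
print(len(refs), len(store))  # 3 logical chunks, 2 unique chunks stored
```

Reconstruction is simply a lookup of each reference in order, which is why dedup ratios climb as backup generations of mostly unchanged data accumulate.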
I have a nightmare vision of storage administrators becoming clones of the mail carrier Newman from the TV sitcom Seinfeld, who once bemoaned the endless pressures of his job, crying, "The mail! It just keeps on coming and coming!"
We've all experienced it: that sense of frustration whenever the disk drive LED on your laptop turns solid green for a seemingly interminable period. While enduring one such interruption recently, my thoughts turned longingly to solid state drives and their emergence as a force to be reckoned with at both the low end and the high end of the market. Several recent news items underscore this.
In IT, we tend to like redundancy.
Among the multitude of data protection challenges facing IT organizations, arguably the least favorite for IT managers is dealing with laptop systems. Each week we read more horror stories about lost notebook computers and potentially compromised data as organizations attempt to grapple with what is literally a moving target.
Every Storage Networking World conference serves as a gauge of the market's mood and an indicator of trends and directions in the storage industry. Last week's event in Orlando, to me at least, reflected a storage market transitioning to a mature phase, where new developments are likely to be more incremental: evolutionary rather than revolutionary.
It's been said that a server virtualization project is actually an infrastructure redesign project, and certainly in areas related to storage and data protection, the impact can be dramatic both in terms of the volume of data and in the operations to support and protect it. Backup is a particular case in point.
With server virtualization being all the rage, it can be very tempting to jump in with a "build it and they will come" mentality. This could be risky, as recent surveys have indicated that a sizeable number of adopters aren't able to determine whether their projects were successful. We shouldn't forget that a virtualization project is no different from any other large-scale IT undertaking: it takes careful planning, clearly defined objectives, and reliable execution to realize the benefits. Here are a few items to help avoid some common pitfalls:
With increasing frequency these days, articles are being published about the coming economic downturn and its effect on corporate IT. In one sense, IT organizations have been preparing for a downturn for some time, given the considerable pressure over the past several years to better curb the rate of IT spending. Consolidation efforts have become commonplace: data center consolidation initiatives are occurring in most large organizations, and server consolidation through virtualization and blade technologies seems to top almost everyone's to-do list. Green initiatives within data centers represent another dimension of the ongoing effort to drive efficiency.
Last week's flurry of reports stemming from a Wall Street Journal article about Google's unannounced plans to offer an online storage service represented the latest in a long-running series of rumors on the subject.
A recent technology discussion among colleagues unexpectedly turned to the operational challenges introduced by new technologies like server virtualization, dynamic (a.k.a. thin) provisioning, data deduplication and grid-based clustered file servers. Although each of these technologies is at a different phase of market acceptance, each is influencing the planning and architecting of future IT infrastructure. And while there is often a groundswell push for adoption, many organizations haven't given appropriate consideration to the organizational changes that will invariably accompany their introduction.
Despite their increasing complexity in terms of both size and functionality, storage systems have achieved an impressive level of reliability. This is particularly noteworthy given that they are engineered around electro-mechanical devices (i.e., disks) that are among the components most prone to failure in the data center. The safeguards and redundancies designed into modern storage systems routinely handle most device failures as a matter of course, with little or no impact on overall operation.
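Those safeguards rest largely on parity schemes such as RAID. A minimal sketch of the XOR arithmetic underneath (the block values here are illustrative, not any vendor's code):

```python
# Illustrative sketch: RAID-style XOR parity. If one data block is
# lost, XOR-ing the surviving blocks with the parity block
# reconstructs the missing data.
def xor_blocks(blocks):
    """XOR a list of equal-length byte blocks together."""
    result = bytearray(len(blocks[0]))
    for block in blocks:
        for i, byte in enumerate(block):
            result[i] ^= byte
    return bytes(result)

data1 = b"\x01\x02\x03\x04"
data2 = b"\x0f\x0e\x0d\x0c"
parity = xor_blocks([data1, data2])  # written to the parity drive

# Simulate losing the drive holding data1 and rebuilding it
# from the survivors:
rebuilt = xor_blocks([data2, parity])
assert rebuilt == data1
```

The same identity generalizes to any number of members, which is why a RAID set can absorb a single drive failure transparently while the array rebuilds.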
Somewhere near the top of the list of activities storage people like doing least is data migration. Slow, tedious, and often scheduled at "convenient" times such as 3 a.m. on a Sunday, it ranks up there with tasks like disaster recovery testing and SAN reconfiguration. To make matters worse, the process seems a prime candidate for Murphy's Law, frequently exceeding scheduled windows or needing to be rolled back and rescheduled due to unforeseen problems.