Planning the new data center means more than plotting out server consolidation and virtualization. It also means knowing the effect these have on overall data center design. Bob Doherty, a 30-year IT veteran, studies such issues for the Data Center Institute, a think tank run by AFCOM, an association for data center professionals. In an interview with Beth Schultz, Doherty describes what he calls the new data center's "physiology" -- and explains why conditions might deteriorate quickly.
What do you mean by 'data center physiology'?
Because the data center is evolving so fast, with ever-increasing demands for uptime and the introduction of new technologies such as blade servers and grid computing, I liken the data center to a living, breathing and growing entity. So when I talk about physiology, I mean those ingredients -- the nutrients -- that give the data center life, the energy source that fosters healthy performance. Unfortunately, I see all too many data centers in decline -- they are not healthy.
What is causing this illness -- new server consolidation technologies and the like?
For the past 25 years, data center managers have been able to host most any type of technology within the walls of their data centers. We think and calculate in increments of 2-foot-square blocks, because that is the size of a raised-floor panel, and equipment usually conforms to this mental grid as we plan the addition of more equipment and technology. Server consolidation has allowed us to free up valuable floor space in the data center. At a conference Glen Goss of GJ Associates in Stow, Mass., hosted a few months back, I heard the cost of a Tier-1 data center put at a staggering US$1,060 per square foot. At that premium price, freeing up floor space looks like a real benefit. So, yes, miniaturization is a good thing -- at first glance.
Just at first glance? Why?
Can you remember, not so many years ago, running applications on an Intel 486? Conservatively, that hardware required 10U of space in a 19-inch computer rack, with power consumption at about 6 amps for each server. So in a 42U-high, 19-inch rack we were able to host four computers, and power consumption was around 2kW per rack. We were also using an area half the size of the space allotted to our computer equipment for the services and utilities that support the data center's infrastructure. Miniaturization looked like a dream come true: we started putting 10 servers in one rack, then 15. Things were looking good until we realized we could not dissipate the heat being generated in a rack with 15 or more servers.
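The 486-era arithmetic Doherty describes can be sketched in a few lines. The 120V line voltage here is an editorial assumption for illustration, not a figure from the interview:

```python
# Back-of-the-envelope rack math for the 486-era example.
RACK_UNITS = 42         # standard full-height rack
SERVER_UNITS = 10       # 10U per 486-class server, per the interview
AMPS_PER_SERVER = 6     # per the interview
LINE_VOLTAGE = 120      # assumed North American line voltage

servers_per_rack = RACK_UNITS // SERVER_UNITS
rack_amps = servers_per_rack * AMPS_PER_SERVER
rack_watts = rack_amps * LINE_VOLTAGE

print(servers_per_rack)       # 4 servers fit
print(rack_watts / 1000)      # 2.88 kW -- in the 2-3kW range Doherty cites
```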
And how about all the cables we have in one rack now? It's next to impossible to trace any of them, and we are hamstrung when trying to service equipment in these racks. But perseverance and a couple of years of tweaking lowered our temperatures. Variable-speed computer fans helped, and then came acceptance of the hot-aisle/cold-aisle concept. Cable management was not as successful -- but we did make some headway. Yet servers continue to shrink dramatically. My research suggests 48 blade servers will fit into one 19-inch computer rack.
In searching through HP and IBM hardware specifications, I have found 30kW requirements per rack. And I saw projections for 50 to 100kW per rack promised for this year. When I asked one OEM why vendors are doing this, he said, 'Because we can.'
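To put those blade figures next to the earlier 486-era numbers, a quick comparison (using the interview's own round figures) shows how sharply power density has climbed in the same rack footprint:

```python
# Comparing the cited blade-era rack with the 486-era rack.
BLADES_PER_RACK = 48      # per Doherty's research
RACK_KW_TODAY = 30        # HP/IBM spec figure cited in the interview
RACK_KW_486_ERA = 2       # rough earlier figure from the interview

watts_per_blade = RACK_KW_TODAY * 1000 / BLADES_PER_RACK
density_increase = RACK_KW_TODAY / RACK_KW_486_ERA

print(watts_per_blade)    # 625.0 W per blade
print(density_increase)   # 15.0x the heat in the same rack footprint
```

At the projected 100kW per rack, the same arithmetic yields a fiftyfold increase over the 486-era figure.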
So what we're gaining in space from smaller servers we're losing to environmentals?
Emphatically, yes. Are you building a Tier-1 data center today for US$1,060 per square foot? How are you going to cool 30kW to 100kW per rack? How are you going to provide power distribution? I'm not aware of solutions to these questions. What does your vendor project as the utility space needed to support the new data center? I'm sure the footprint projected for the actual computer room is much smaller than what we are familiar with, but I suggest the need for service and utility space to support that shrinking footprint may increase as much as fourfold.
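Combining Doherty's two ratios -- support space historically at half the computer-room area, and a possible fourfold increase in that support space -- gives a sense of how the balance could flip. The 1,000-square-foot computer room below is a hypothetical figure for illustration:

```python
# Illustrating the utility-space claim with the interview's two ratios.
computer_area = 1000                 # sq ft of computer room (hypothetical)
utility_old = 0.5 * computer_area    # "half the size" ratio cited earlier
utility_new = 4 * utility_old        # Doherty's suggested fourfold increase

print(utility_old)   # 500.0 sq ft of support space historically
print(utility_new)   # 2000.0 sq ft -- twice the computer room itself
```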
Our data center today cannot possibly accommodate 30kW per rack -- never mind any projection as crazy as 100kW per rack. The data center's distribution systems cannot satisfy that power or air-conditioning load. I suggest the computer equipment manufacturers need to partner with the appropriate engineering disciplines, because the level of hardware density being advertised cannot be supported within the data center infrastructure we know today.
So what's your advice?
I don't know what design changes are needed. But I want to heighten awareness of the data center's physiology by engaging in conversations with data center design engineers; power engineers and electrical contractors; cable installation contractors; environmental monitoring vendors; rack manufacturers; air-conditioning engineers; and computer equipment makers.