Ice balls help data center go green
- 14 October, 2010 01:10
Green isn't usually the first color that comes to mind when one visits the hot, dry desert climate of Phoenix, where temperatures recently topped 109 degrees. But that's exactly where I/O Data Center has opened a 180,000-square-foot commercial data center colocation facility that couples an energy-efficient design with innovative green technologies, ranging from an unusual setup for its air handlers to its server-rack design.
And then there are the ice balls -- water-filled, dimpled plastic spheres a little larger than a softball -- that I/O is using to feed the air conditioning system. The Ice Ball Thermal Storage system from San Diego-based Cryogel may be -- literally -- the coolest technology I/O is using.
At I/O's Phoenix facility, the balls float in four huge tanks filled with a glycol solution chilled to 22 degrees. At night, when electricity rates are lower, chillers pump the cold solution around hundreds of balls, freezing the water inside them. During the day, the system pumps the glycol solution surrounding the ice balls through a heat exchanger, which provides cool air to the data center. That reduces the need to run the chillers during the day, when rates are higher.
The system doesn't save any energy -- it's a zero-sum game at best -- but it does save money. By shifting electricity usage from daytime to nighttime, I/O probably saves about $1,250 per kilowatt (kW) shifted, says Victor Ott, Cryogel's president. (I/O didn't have an estimate of savings from the system.)
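As a rough sketch of the economics, the savings come entirely from the spread between peak and off-peak rates. The rates and hours below are assumptions for illustration; the article does not give I/O's actual tariff:

```python
# Hypothetical illustration of why shifting chiller load to night saves money.
# All figures below are assumed for illustration, not taken from the article.

DAY_RATE = 0.12     # $/kWh, assumed peak rate
NIGHT_RATE = 0.06   # $/kWh, assumed off-peak rate
KW_SHIFTED = 1_000  # cooling load moved from daytime to nighttime
PEAK_HOURS = 10     # assumed hours of peak-rate operation avoided per day

daily_saving = KW_SHIFTED * PEAK_HOURS * (DAY_RATE - NIGHT_RATE)
annual_saving = daily_saving * 365

print(daily_saving, round(annual_saving))  # $600/day, $219,000/year
```

The energy consumed is the same either way; only the price paid per kilowatt-hour changes, which is why Ott describes the benefit in dollars rather than in energy saved.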
Ott claims the technology is greener because, in order to keep up with demand, utilities tend to run older, dirtier generating plants more heavily during peak daytime hours, creating more greenhouse gases. "They burn less fossil fuel at night to generate electricity than they do during the day," he says, "and it's cooler at night so utility generators run more efficiently."
That last claim, however, might be stretching it a bit, says Peter Gross, vice president and general manager of Hewlett-Packard Co.'s Critical Facilities Services unit. (HP Critical Facilities Services designs data centers but wasn't involved in the design or construction of the I/O facility in Phoenix.)
Given the extreme cooling needs of I/O's Phoenix data center, which is designed to consume up to 40 megawatts of power when all phases of the project are fully built and occupied, why choose the desert? "A lot of people have been moving to Phoenix as a data center market," says KC Mares, president and chief energy officer of Megawatt Consulting, which designs large data centers. There are some good reasons to build there, he says. "But I question that from a total energy use standpoint."
"The external temperature is irrelevant to what's going on inside," claims I/O Vice President Jason Ferrara. One benefit of the dry desert climate, for example, is that it lends itself to energy-efficient evaporative cooling technology.
But there are limits to what you can do with cooling technologies such as economizers and evaporative coolers, which rely on outside air, in an environment where temperatures can hit 120 degrees or more in the summer. At the end of the day, despite using such energy-saving technologies, Mares says, "you pay a lot for cooling infrastructure."
However, green technology wasn't the main reason I/O customer BMC Software chose the Phoenix colocation facility. Mahendra Durai, vice president of IT infrastructure and global operations at Houston-based BMC, says he chose Phoenix for one very practical reason: In the wake of recent hurricanes, management wanted to move the company's primary business systems out of Houston.
"Phoenix is one of the few locations in the country that have minimal to no impact from natural disasters," Durai says. BMC's plan calls for certified Tier IV-class space (which I/O plans to offer in the facility) and plenty of room to expand beyond its current deployment. I/O Phoenix's green focus was desirable, he says, but only No. 4 on his list of priorities.
Thermal storage systems built using Cryogel's ice balls cost about $60 per ton-hour of cooling, with a return on investment within three to five years, Ott says. That may be too long for some businesses, but ROI is not the primary reason why most data centers decide against using the technology.
Instead, the big obstacle is the fact that the system requires giant containers -- either pressure vessels or atmospheric tanks -- to hold the ice balls in the correct type of solution. And that requires a lot of space, which is at a premium for most data centers.
At I/O, for example, the Cryogel Ice Ball Thermal Storage system requires four steel pressure vessels, each measuring 10 feet in diameter by 22 feet high. I/O had plenty of space to create a large room for the cooling equipment, but most data centers don't. The need for so much space "overrides questions of price, usually," Ott says.
If space is not a concern, Gross says, he's in favor of using thermal storage technology. "Ice storage is a terrific idea. It flattens the electricity demand curve and creates some savings," he says.
But there's another reason why the use of thermal storage systems in data centers is rare: concerns about reliability.
During the day, when the system circulates chilled water from the ice balls, the data center's chillers shut off. It takes anywhere from 30 minutes to an hour to restart chillers and bring them to full operation. If a thermal storage system failed, the data center would be without cooling during that interval. "You'd have to shut down the whole data center because it would overheat by the time the chillers came back online," Gross says.
However, Ott says that thermal storage systems have been in use in industrial buildings for more than 20 years and are proven technology.
Gross agrees. He says a properly designed ice-ball system is extremely reliable. "We did that for a data center in Los Angeles, and it was very successful," he says.
Recycling an entire building
I/O Phoenix's green cred extends beyond its cooling system to the 538,000-square-foot building itself. To house its new facility, the colocation service provider recycled a former water bottling plant. In Phase 1 of the renovation project, which was completed in June 2009, I/O developed 180,000 square feet of raised floor space fed by up to 26 megawatts of conditioned power. In Phase 2, the company is developing another 180,000 square feet that will offer 20MW of capacity; it expects to complete that project in December. I/O has also built 80,000 square feet of office space in the building.
"One element that was unique about the I/O Phoenix facility was their ability to take a building that was a water bottling plant with high ceilings, and leverage the existing investment to drive green benefits," says BMC's Durai, whose company leases 320kW of capacity at the facility. (I/O Phoenix's largest customer, which it declined to name, uses 5 megawatts -- about 25 per cent of the facility's total capacity -- to power its primary U.S. data center.)
When it comes to energy-efficient data centers and the adoption of green technologies, I/O's motto is the bigger the better. "Larger-scale data centers make economies of scale for green technologies possible," CEO George Slessman says.
One of the more innovative ways that I/O Phoenix saves energy is by installing more computer room air handlers (CRAHs) than the facility's size requires. The CRAHs use variable-speed fans, and by spreading the distribution of conditioned air across more air handlers, all of the fans can run more slowly.
The math may look squishy, but it works: because a fan's power consumption is proportional to the cube of its speed, cutting fan speed in half reduces power consumption by a factor of eight, Gross says. So the energy saved by slowing down all of the fans more than makes up for the energy consumed by running more of them.
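The cube law is easy to check with a back-of-the-envelope calculation. The rated power below is a made-up figure for illustration; only the cube relationship comes from the article:

```python
def fan_power(rated_kw: float, speed_fraction: float) -> float:
    """Fan affinity law: power draw scales with the cube of fan speed."""
    return rated_kw * speed_fraction ** 3

RATED_KW = 10.0  # hypothetical rated power of one CRAH fan, in kW

one_fan_full_speed = fan_power(RATED_KW, 1.0)       # 10.0 kW
two_fans_half_speed = 2 * fan_power(RATED_KW, 0.5)  # 2 x 1.25 = 2.5 kW

# Doubling the fan count while halving speed cuts total fan power to a quarter.
print(one_fan_full_speed, two_fans_half_speed)
```

The same airflow delivered by twice as many fans at half speed draws a quarter of the power, which is the economics behind over-provisioning the CRAHs.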
I/O also takes advantage of the dry desert air by using an evaporative cooling system. "That's a big trend in cooling," says Gross. "Evaporative cooling is very energy efficient."
I/O also offers energy-efficient technologies to clients. Those include high-efficiency, sealed racks that route cold air directly into the rack from the raised floor space and into an air return space on the top of each row, completely separating the equipment from room air. Ferrara says that the racks can cool loads of up to 32kW.
The facility also uses ultrasonic humidifiers, which I/O executives claim consume 93 per cent less energy and generate far less heat than traditional units that produce steam.
I/O plans to generate some of its own green power. The second phase of the building project calls for the installation of a combined heat and power system that will provide chilled water for data center cooling while generating electric power.
I/O also plans to install 4.5MW of photovoltaic panels on the roof. The energy generated will be used to reduce the consumption of power from the grid during the day, when peak rates are in effect.
A combined heat and power system "doesn't make a lot of sense in small installations, but in a data center with 40 or 50MW of [uninterruptible power supplies] installed it makes a lot of sense," I/O's Slessman says.
The photovoltaic panels, however, are more questionable. While I/O has the design ready to go, Ferrara says the company has been holding off on buying panels because prices have been dropping so fast. "We're waiting for prices to stabilize," he explains.
But today's photovoltaic panels generate just 16 watts of peak power per square foot of space they occupy. To even make a dent in the 20MW power demands of the data center as currently built out in Phase 1, I/O Phoenix would need acres upon acres of panels, says Gross.
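Gross's point checks out with simple arithmetic using the figures in the article (the square-feet-per-acre conversion is standard):

```python
# Area a solar array would need to cover the Phase 1 demand at peak output,
# using the 16 W/sq ft figure from the article.

PEAK_W_PER_SQFT = 16            # peak panel output cited in the article
PHASE1_DEMAND_W = 20_000_000    # 20MW Phase 1 power demand
SQFT_PER_ACRE = 43_560          # standard conversion

area_sqft = PHASE1_DEMAND_W / PEAK_W_PER_SQFT  # 1,250,000 sq ft
area_acres = area_sqft / SQFT_PER_ACRE         # about 28.7 acres

print(round(area_acres, 1))
```

Nearly 29 acres of panels, and that only at peak sun; the planned 4.5MW rooftop array covers well under a quarter of the Phase 1 demand even at noon.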
As a source of supplemental power that's piped back into the grid, photovoltaic panels work fine. But as a continuous source of power they aren't reliable, even in sunny Phoenix. "You're better off to use other means. Fuel cells are more effective," Gross says. Even if solar power is impractical as a primary source, it does have one advantage: the bragging rights that come with deploying a 4.5MW solar array. Solar is sexy, Gross admits. "It brings PR value to the data center," he says.
I/O's Phoenix data center has a wide range of other green initiatives in place. Slessman says I/O expects the facility to receive Leadership in Energy & Environmental Design (LEED) Silver certification by mid-2011, and the company is working on making the facility one of the few data centers to receive a Tier IV (99.995% availability) certification from The Uptime Institute.
I/O's ultimate goal for the Phoenix facility, says Slessman, is "to be the only commercially available colocation provider in North America that's Tier IV- and LEED-certified."
Robert L. Mitchell writes technology-focused features for Computerworld. You can follow Rob on Twitter at http://twitter.com/rmitch or subscribe to his RSS feed. His e-mail address is firstname.lastname@example.org.