How to build a data center in 6 months for US$800,000

Staffing and home healthcare firms' new data center saves on space, power, cooling and IT effort

For years, Robert Wakefield and Dameon Rustin lived with the problems of keeping Snelling Staffing Services' old, poorly designed data center up and running. Not only were the intricate cable runs and varied server makes and models difficult to keep straight, but the building itself tended to compound their management headaches.

"Our 15-ton air-conditioning unit was water-cooled, but the building [management] didn't clean the cooling tower very often," says Wakefield, vice president of IT at Snelling Staffing and Intrepid USA, a home healthcare firm also owned by Snelling's parent firm, Patriarch Partners. "Muck would get in, clog up our strainers and shut down the AC unit to our data center. That was a big problem."

In addition, the building owners would not give Snelling the OK to put in a diesel backup generator to power the data center. "Let's just say they weren't very helpful," says Wakefield, who spoke about his data center project at the recent Network World IT Roadmap Conference and Expo in Dallas.

Things began to change quickly once Patriarch acquired Intrepid in 2006. Wakefield and Rustin, Snelling's director of technology, were charged with building a brand-new data center that would not only solve Snelling's existing problems, but also absorb Intrepid's data center and be ready to support future growth.

"We had to build expandability into it because Patriarch is a private investment firm, and their goal is to buy more companies and roll them in," Wakefield says. "We were told to give ourselves about 100% growth room."

The downside? They needed to do all that with a budget of US$800,000 and a window of only six months. "It was a challenge," Wakefield says.

But it was a challenge they met head-on. Today, Snelling and Intrepid's new 1,100-square-foot data center in Dallas efficiently houses a variety of equipment, including:

  • A total of 137 servers (45 for Intrepid and 92 for Snelling), 37 of which are new Sun Fire X-series Unix servers with dual dual-core AMD Opteron processors.
  • Three EMC storage systems, including an EMC CX400, a CX3-20 iSCSI system and an old SC4500, as well as a Quantum tape library.
  • A variety of networking components, including shared virus scanners and Web surfing control appliances.
  • A Liebert 100kVA uninterruptible power supply (UPS).
  • Two Emerson 10-ton and one Emerson 15-ton glycol-based AC units.

And even with all of that, Wakefield says he still has room to add nine more server racks.
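
Those pieces also have to balance as a power and cooling budget. The sketch below is a back-of-the-envelope check, not the team's actual math: the per-server draw and UPS power factor are assumed values the article does not give.

    # Rough power and cooling sanity check for the build-out above.
    # WATTS_PER_SERVER and UPS_POWER_FACTOR are assumptions for
    # illustration, not figures from the article.

    SERVERS_NOW = 137          # 45 Intrepid + 92 Snelling
    GROWTH_FACTOR = 2.0        # "about 100% growth room"
    WATTS_PER_SERVER = 300     # assumed average draw per server (W)

    UPS_KVA = 100              # Liebert UPS rating
    UPS_POWER_FACTOR = 0.9     # assumed; depends on the actual unit

    COOLING_TONS = 2 * 10 + 15 # two 10-ton units plus one 15-ton unit
    KW_PER_TON = 3.517         # one ton of cooling = 12,000 BTU/hr

    it_load_kw = SERVERS_NOW * GROWTH_FACTOR * WATTS_PER_SERVER / 1000
    ups_capacity_kw = UPS_KVA * UPS_POWER_FACTOR
    cooling_capacity_kw = COOLING_TONS * KW_PER_TON

    print(f"Projected IT load at 100% growth: {it_load_kw:.1f} kW")
    print(f"UPS capacity:                     {ups_capacity_kw:.1f} kW")
    print(f"Cooling capacity:                 {cooling_capacity_kw:.1f} kW")

Under those assumptions, the full-growth load of about 82 kW sits inside the UPS's roughly 90 kW of real power, with 35 tons (about 123 kW) of cooling capacity to match.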

Getting there

Wakefield and Rustin first visited several data centers to get an idea of what could and could not be done. They also looked at a number of different locations before deciding in January on the Dallas building. Then, the real planning began.

"Once we had the dimensions, everything else came from that," Wakefield says. He and Ruston drew up 10 different floor plans and began calculating how many servers they'd need, and how much cabinet space. At that point, requirements began to fall into place. "High-density became a requirement; virtualization became a requirement," he says.

Although the new data center is only 150 square feet larger than the old one, it needed to support more than 40 additional servers, plus provide room for growth. Wakefield considered going the blade-server route to save space, but soon learned that blades were prohibitively expensive.

"Blades were pretty high cost-wise, and we had bought some of the Sun X-series boxes in the past," he says. "They are AMD-based, so they use less energy and put out less heat. And they're dual-core, dual-processor with about 8GB of RAM, so we could set up [virtual machines] on a good chunk of them, and that saved us a lot of space too."
