Transforming the data center from hell

Getting the ultimate performance out of deficient facilities

College faced dying servers from blackouts

As the College of Southern Nevada grew to its current size of about 40,000 students and 500 full-time faculty members, its IT operations expanded piecemeal in response to individual departmental requirements, until the college was running five separate server facilities at three campuses as much as 30 miles apart.

"They were really little more than large network closets with a UPS and some server racks," says Josh Feudi, interim CIO. "We were trying to add more services for our students, staff and administration and began running into an increasing number of issues. Here in Nevada we had cooling problems, humidity issues, and then we started experiencing local power outages." Units, including servers, "were dying on us," he adds.

On days with high winds, the area began experiencing rolling brownouts that could crash servers, Feudi says. As temperatures rose from spring through fall, the college's IT department was forced to shut down selected services, at times limiting the admissions office's access to student data. The college was cooling the five rooms primarily with the buildings' general central air conditioning.

"Temperatures were getting into the hundreds," Feudi says. "We knew these data rooms had outgrown their usefulness and determined that centralizing in one location would have less of an economic impact than trying to upgrade the individual rooms."

Working with Hewlett-Packard and American Power Conversion, the college created a single consolidated data center that takes advantage of blade servers, virtualization software and specialized heat-containment and cooling modules.

The college had more than 250 physical servers in its five data rooms. The roughly 150 servers still under maintenance contracts were moved to the new central data center, and the remaining systems, about 100, were consolidated with VMware virtualization software onto three HP server blades in a single cabinet. As the older servers still in operation reach end of life, Feudi plans to consolidate them onto server blades as well.
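For a rough sense of why three blades can absorb the workload of about 100 lightly loaded legacy machines, a back-of-the-envelope capacity check like the sketch below helps; every utilization and hardware figure in it is an illustrative assumption, not data from the college.

```python
# Back-of-the-envelope consolidation check: can ~100 lightly loaded
# physical servers fit on three virtualization hosts? All capacity and
# utilization figures below are assumed for illustration only.

physical_servers = 100        # systems left after moving the ~150 under contract
avg_cpu_ghz_used = 0.3        # assumed average CPU draw per legacy server (GHz)
avg_ram_gb_used = 1.0         # assumed average RAM in active use (GB)

blades = 3
blade_cpu_ghz = 2 * 4 * 2.5   # assumed: 2 sockets x 4 cores x 2.5 GHz per blade
blade_ram_gb = 64             # assumed RAM per blade
headroom = 0.7                # plan to use at most 70% of each blade

cpu_capacity = blades * blade_cpu_ghz * headroom
ram_capacity = blades * blade_ram_gb * headroom

print(f"CPU needed {physical_servers * avg_cpu_ghz_used:.0f} GHz "
      f"vs capacity {cpu_capacity:.0f} GHz")
print(f"RAM needed {physical_servers * avg_ram_gb_used:.0f} GB "
      f"vs capacity {ram_capacity:.0f} GB")
print(f"Consolidation ratio: {physical_servers / blades:.0f} VMs per blade")
```

Under these assumptions the old servers use a small fraction of the blades' aggregate CPU and memory, which is the usual case that makes consolidation ratios of 30-plus virtual machines per host plausible.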

To address heat and cooling issues, the college began using APC's InfraStruXure hot-aisle containment platforms. The platforms place two rows of servers back to back, forcing their hot exhaust into a middle aisle that is sealed with a ceiling and doors at both ends. In-row cooling units inside the platform let Feudi's team cool that contained hot air directly, rather than trying to remove heat dispersed throughout the room, as a conventional design would require.
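A quick way to see what the in-row units in a contained aisle must handle is to convert the racks' electrical load into heat. The sketch below uses the standard conversions (1 watt is about 3.412 BTU/hr, and one refrigeration ton is 12,000 BTU/hr); the rack count and per-rack power draw are assumed figures for illustration, not the college's actual load.

```python
# Rough sizing for in-row cooling in a contained hot aisle. The rack
# count and per-rack power draw are illustrative assumptions; the
# watt-to-BTU/hr factor (3.412) and the 12,000 BTU/hr refrigeration
# ton are standard constants.

racks = 10             # assumed racks across the two contained rows
kw_per_rack = 5.0      # assumed average IT load per rack

it_load_w = racks * kw_per_rack * 1000
btu_per_hr = it_load_w * 3.412
cooling_tons = btu_per_hr / 12_000

print(f"IT load: {it_load_w / 1000:.0f} kW")
print(f"Heat to remove: {btu_per_hr:,.0f} BTU/hr ({cooling_tons:.1f} tons)")
```

Because essentially every watt a server draws becomes heat, containment means the cooling units face this load concentrated in one aisle instead of diluted across the whole room, which is what makes targeted in-row cooling efficient.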

The project was completed with no addition to IT staff, has significantly reduced staff travel time to the formerly far-flung locations, and has increased service availability from a low point of 79 per cent to 99.99 per cent today, he says. The new data center has also allowed the college to expand services, in particular enabling students to enroll in online courses.
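Those availability figures are easier to appreciate as downtime. The short calculation below works them against an 8,760-hour year; the two percentages quoted above are the only inputs.

```python
# What the jump from 79% to 99.99% availability means in downtime,
# computed against a standard 8,760-hour (non-leap) year.

HOURS_PER_YEAR = 24 * 365

for availability in (0.79, 0.9999):
    downtime_h = HOURS_PER_YEAR * (1 - availability)
    print(f"{availability:.2%} available -> "
          f"{downtime_h:,.1f} hours of downtime per year")
```

At 79 per cent, services were effectively down the equivalent of about 77 days a year; at 99.99 per cent, under an hour.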
