In another article we saw that making changes to the air-conditioning, the electricity supply or the cooling arrangements was no trivial undertaking. Datacentre infrastructure changes are significant exercises. What about making changes to the IT kit inside the datacentre?
Let's categorise the kit into server boxes, storage boxes and network boxes. There are two approaches. First, we can identify under-utilised boxes and deal with the under-use. Second, we can make much better use of apparently well-utilised boxes and so eject some now-unneeded ones. That basically means consolidating workloads and/or virtualising them.
Server proliferation has been near-endemic in datacentres, as Windows' inability to multi-task well and the need for more storage have both encouraged a buy-another-server response to application slowdown or the introduction of a fresh application.
One hardware response has been to introduce blade servers: have many server blades mounted in one rack instead of in individual rack shelf units. This can increase the server density of a rack three or four times and thus free up space. But it introduces fresh problems of its own. The power needs of the racks increase and so do their cooling needs, which increases the power need yet more.
So blading servers can help solve a datacentre space problem, but it adds to datacentre power and cooling issues. It has taken another approach to servers, the software virtualisation route, to provide a relatively easy way to cut server proliferation down to size.
This is, of course, VMware, and its rise has shown just how bad Windows is at multi-tasking. A single VMware server can replace five, ten or even more individual Windows servers, yet it is basically the same hardware, now working much more efficiently. Virtualising servers is proving to be an efficient way to reduce server space take-up in datacentres without adding to power and cooling needs.
Unix and Linux have their virtualisation products too, such as XenSource's Xen hypervisor and Solaris Containers. With these you can also consolidate server hardware.
Buy VMware and you can apparently eject quite a high proportion of your Windows server boxes running just one application. Customers have reported 60-80 percent utilisation rates for x86 servers, up from today's 5-15 percent. That indicates that from every set of ten servers operating at 10 percent utilisation you could throw eight away and have two operating at 50 percent utilisation each - a happy thought in terms of reduced power and cooling cost.
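The arithmetic behind that claim is simple enough to sketch. The sketch below uses the article's illustrative figures (ten servers at 10 per cent utilisation, consolidated hosts run at 50 per cent); the function name and the assumption that load transfers one-for-one onto the virtualised hosts are ours, not VMware's.

```python
import math

def consolidate(num_servers, utilisation, target_utilisation):
    """Return (hosts_needed, hosts_freed) to carry the same aggregate load.

    Assumes workload moves one-for-one onto the consolidated hosts,
    with no virtualisation overhead - an idealised, illustrative model.
    """
    total_load = num_servers * utilisation  # aggregate work, in "whole server" units
    hosts_needed = math.ceil(total_load / target_utilisation)
    return hosts_needed, num_servers - hosts_needed

# Ten servers idling at 10 per cent, re-hosted at 50 per cent utilisation:
needed, freed = consolidate(10, 0.10, 0.50)
print(needed, freed)  # 2 hosts kept, 8 ejected
```

In practice the ratio is capped well below 100 per cent per host to leave headroom for load spikes and failover, which is why the article's example stops at 50 per cent rather than packing everything onto one box.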
One problem though: each of the eight discarded servers had its own direct-attached storage. What do you do about that?
If you consolidate the storage into a Fibre Channel storage area network (SAN), then you need to add host bus adapters (HBAs) for each physical server, preferably with virtualised features so they work with the VMware virtual machines, Fibre Channel cabling, SAN fabric switch boxes and a set of Fibre Channel-connected drive arrays plus SAN management software.
That all adds up: extra box and device power costs, plus incremental admin expense and skills.
If you use an iSCSI SAN or a network-attached storage (NAS) approach then you can use Ethernet as the storage link and avoid Fibre Channel-related expense and skills.
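The trade-off can be made concrete with a rough per-host parts list. The component names come from the discussion above; the counts, and the dual-path redundancy assumption, are illustrative guesses rather than vendor guidance.

```python
# Per-physical-server connectivity parts for each shared-storage approach.
# Counts assume dual paths for redundancy - an illustrative assumption.
fc_san = {
    "host bus adapters (HBAs)": 2,
    "Fibre Channel cables": 2,
    "FC fabric switch ports": 2,
}
iscsi_or_nas = {
    "Gigabit Ethernet NIC ports": 2,  # most servers ship with these already
    "Ethernet cables": 2,
    "Ethernet switch ports": 2,
}

def extra_parts(parts, num_hosts):
    """Total new components needed to attach num_hosts to shared storage."""
    return {name: count * num_hosts for name, count in parts.items()}

# Kitting out the two consolidated hosts from the earlier example:
print(extra_parts(fc_san, 2))
print(extra_parts(iscsi_or_nas, 2))
```

The Ethernet column looks similar on paper, but much of it is kit and skills the shop already has, which is where the iSCSI/NAS saving actually comes from.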
Consolidated storage is a natural extension of the virtual server approach.