Server Buying Guide: Server Strategy Checklist
- 25 October, 2011 09:45
When purchasing servers, understanding workloads is critical. But that is just one of many factors that must be taken into account to implement the right server strategy. There is clearly a trend toward purchasing computing power at ever-higher densities, underpinned by a shift from individual servers to the creation, management and use of virtual systems over a network. There is also the need for more efficient power management. Following is a checklist of server strategies to meet different IT needs.
Windows servers
Windows Server is the name of Microsoft's family of server operating systems and the hardware designed to run them. There are numerous versions of the operating system, each with features suited to different businesses. A 64-bit Windows server is the Windows operating system running in full 64-bit mode on Intel or AMD x86 processors. Another example of a Microsoft server product is SQL Server, a relational database server designed around Microsoft's Transact-SQL (T-SQL) dialect.
Blade servers
An important trend in computing, blade servers have a much more modular configuration that allows for ease of upgrades. Built from minimal components, these servers are capable of impressive benchmarks while fitting in a smaller space than traditional “full” servers. At its base, a blade server has a processor, memory, I/O jacks and a basic operating system. Rather than running a full application stack, such as Apache or IIS, these servers are designed as intermediate “computing” servers that deliver data rapidly and efficiently. This modular configuration allows IT departments to achieve substantial savings.
X86 servers with eight sockets and above
These servers are based on x86 technology (Intel Xeon or AMD Opteron) and offer eight or more processor sockets. Servers with higher socket counts can deliver greater vertical scaling for applications and virtualisation deployments that demand additional processor or memory performance. Most x86 servers ship in one- and two-socket configurations; even the four-socket market represents only a small percentage of sales. The market for eight-socket and larger servers therefore represents only a tiny fraction of shipments.
Gartner analyst, Carl Claunch, said there will always be classes of applications that require a larger physical server design. Most mature vertical applications are deployed on RISC, Itanium or mainframe servers. “However, some high-volume, high-update rate database servers are increasingly moving to Windows, Solaris x86 and Linux,” Claunch said. “The growing maturity of those x86 operating systems and the desire for standard approaches across the data centre are enough to justify a small, but tangible, market for larger-scale x86 servers.” Although x86 servers with more than four sockets appear to be excellent consolidation servers, Claunch said organisations should verify that the larger CPU count does not create software licensing penalties. Gartner advises using a smaller number of larger-capacity servers to reduce administration overhead and streamline the hardware footprint in the data centre. Mainstream x86 vendors, such as IBM and HP, will increasingly focus on introducing volume economics and on aligning with users deploying more small to midsize machines.
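The licensing caveat above comes down to simple arithmetic. The sketch below is purely illustrative: the prices and the premium tier for larger machines are invented figures, not any vendor's actual pricing, but they show how a per-socket licence that charges more on bigger boxes can offset consolidation savings.

```python
# Hypothetical example: comparing per-socket licence costs when
# consolidating. All prices are invented for illustration only.

def total_cost(servers: int, sockets_per_server: int, price_per_socket: int) -> int:
    """Total licence cost across a fleet, priced per socket."""
    return servers * sockets_per_server * price_per_socket

# Eight 2-socket servers vs two 8-socket servers: same 16 sockets,
# but assume the larger machines attract a higher per-socket tier.
scale_out = total_cost(servers=8, sockets_per_server=2, price_per_socket=3000)
scale_up = total_cost(servers=2, sockets_per_server=8, price_per_socket=4500)
print(scale_out, scale_up)  # 48000 72000
```

Under these assumed tiers, the consolidated configuration costs 50 per cent more to license despite the identical socket count, which is exactly the kind of penalty Claunch suggests verifying before purchase.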
Mission critical workloads on Linux
This technology includes all the operations and critical foundations that enable an organisation to function 24/7, including all necessary ecosystems. Linux is essential for complex, mission-critical workloads including effective support for databases, recovery, disaster tolerance, system management, service-level agreements and dynamic resource allocation. Gartner says enterprises should expand the use of Linux to take advantage of commodity hardware and the associated cost savings.
Linux on 4 to 16 socket servers
This technology involves the ability of the Linux OS to support symmetric multiprocessing (SMP) on servers with as many as 16 sockets. Gartner says Linux is no longer untested in production environments on servers using as many as 64 processors. Today many vendors, including HP, IBM, Oracle, Fujitsu, NEC and Unisys, are shipping Linux-based systems for business-critical applications.
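On a Linux host, the physical socket count that determines which of these categories a server falls into can be read from /proc/cpuinfo, where each logical CPU reports a "physical id". A minimal sketch, assuming the standard /proc/cpuinfo layout:

```python
# Count physical CPU sockets from /proc/cpuinfo text: each logical
# processor entry carries a "physical id" line; the number of distinct
# ids is the socket count.

def count_sockets(cpuinfo_text: str) -> int:
    socket_ids = set()
    for line in cpuinfo_text.splitlines():
        if line.startswith("physical id"):
            socket_ids.add(line.split(":", 1)[1].strip())
    return len(socket_ids)

# Demonstrate on a small sample: four logical CPUs across two sockets.
sample = (
    "processor\t: 0\nphysical id\t: 0\n"
    "processor\t: 1\nphysical id\t: 0\n"
    "processor\t: 2\nphysical id\t: 1\n"
    "processor\t: 3\nphysical id\t: 1\n"
)
print(count_sockets(sample))  # 2
```

On a live system you would pass `open("/proc/cpuinfo").read()` instead of the sample string; the `lscpu` utility reports the same figure under "Socket(s)".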
Linux on System Z
Linux on System z is the set of Linux distributions that have been ported to run on the IBM mainframe z/Architecture, and it has been supported for more than 10 years. IBM has had its greatest success with Linux on System z where the software pricing model is based on the number of cores, regardless of platform. Ratios of 20 or more servers consolidated per Integrated Facility for Linux (IFL) make a Linux on System z solution compelling. Gartner says mainframe users should examine the use of Linux on System z for most Linux applications. Linux on System z enables major consolidation of workloads that previously required separate servers, with associated hardware, software and environmental savings where large numbers of Linux systems can be combined.
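The 20:1 consolidation ratio cited above makes capacity sizing a matter of simple division. The sketch below uses that ratio purely as an input; real ratios are workload-dependent, and the server count is an invented example.

```python
import math

# Rough sizing sketch: estimate IFL engines needed to consolidate a
# fleet of distributed Linux servers, given a consolidation ratio.
# The 20:1 ratio is the figure cited above; actual ratios vary by workload.

def ifls_needed(server_count: int, ratio: int = 20) -> int:
    """Round up, since a partial IFL cannot be provisioned."""
    return math.ceil(server_count / ratio)

print(ifls_needed(250))  # 13 IFLs to consolidate 250 servers at 20:1
```

The rounding up matters at small scale: 21 servers need two IFLs at a 20:1 ratio, which is why such consolidation pays off most where large numbers of Linux systems can be combined.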
High performance computing clusters: Windows
HPC clusters harness multiple computers owned by one organisation to accomplish computationally intensive tasks, using Windows Server products, such as Windows HPC Server 2008 R2, as the operating system. Gartner says cluster computing is the most cost-effective, mainstream approach where software exists to scale in parallel across multiple machines, or where the code can be written in-house, based on a suitable algorithm. Gartner analyst, Carl Claunch, said clusters are more fluidly scalable than fixed supercomputers, improving an organisation’s ability to react to changing needs. He said those that need more extreme capacity might need to move to grid computing to harness additional computing power.
Linux on RISC
Linux on reduced instruction set computer (RISC) refers to the adoption and broad acceptance of Linux on such RISC architectures as IBM’s Power and Oracle’s SPARC. Gartner says this server strategy works best when performance gains are available on RISC, but are not available from x86.
This technology can be important for consolidation on large servers with large memory configurations, when a mixed operating system environment is required, or when applications are available on Linux but not on Unix. Gartner analyst, George Weiss, said Linux on RISC can provide additional benefits that x86 servers do not. “When supported by a software stack such as WebSphere, Linux on RISC can be considered a viable niche,” he said.
Data centre container solutions
A data centre container is a shipping container set up to accommodate IT equipment. The basic equipment that most of these containers are designed to support includes servers, storage and networking gear. Although scalability and speed of deployment (fewer than 12 weeks) are the main advantages, a container solution requires appropriate site selection to ensure adequate physical security.
Gartner analyst, David Cappuccio, said container solutions can provide an alternative to the capital needed for a bricks-and-mortar data centre. He said a typical container can cost approximately $2 million to $4.5 million, fully configured. Because they are designed to the user’s technical specifications, Cappuccio said data centre containers can provide significant levels of computing power that can be delivered in eight to 12 weeks, instead of the 18 to 24 months it would take to build a comparable data centre.