Servers: What’s very old is now new again

Back in the 1970s, it was easy. With dumb terminals the only way of accessing computer systems, the server was the computer.

As information systems became entrenched in corporate strategising, however, the server’s role grew progressively more complicated and more essential to the functioning of modern businesses.

Just ask Adrian Yarrow, manager of corporate systems administration at Central Queensland University (CQU), which is based at Rockhampton and has nine other campuses throughout Australia and internationally. In 10 years working at the 18,000-student university, he’s watched the server infrastructure grow dramatically, first to support growing automation of the business and later to provide the increasingly sophisticated IT services being rolled out to students and staff.

A decade ago, CQU was running a single Digital VAX/VMS server, shared amongst students and staff to provide computing power and access to the then-fledgling Internet. HP3000 servers were also introduced to handle the university’s academic, financial management and library catalogue systems.

Considered cutting-edge for their time, those servers were soon complemented and then replaced by Digital AlphaServers running OSF/1. By the mid-1990s, Windows NT began to join the mix, as did Sun Solaris, Digital (later Compaq) Tru64 Unix, and Linux.

“The last 10 years has seen an increasing reliance on administrative functions being computerised,” Yarrow said. “Conceptually, people didn’t think beyond those environments. It used to be that the server was your whole computing experience, but now servers are dedicated to a single task and end users don’t necessarily know or care that the servers exist. They’re more interested in the services that are provided; the servers are supposed to be hidden behind the curtains.”

They’d have to be big curtains. These days, CQU’s students are served by a core of around 110 servers, half running Windows 2000 and the rest Tru64, Solaris and Linux. The servers share a 6Tbyte storage array supporting a broad range of applications, including PeopleSoft, which initially ran on three HP9000 servers but was recently moved onto a cluster of around 20 Windows 2000 and Compaq Tru64 Unix servers.

While the increasing range of applications has undeniably provided both administrative and functional improvements, Yarrow said more is not necessarily better. “We have a lot of hardware that’s really underutilised, where it could be much better utilised if we could share physical resources between applications more easily,” he said. “Some 90 per cent of the infrastructure resources are not being tapped, and that’s a big waste.”

CQU’s experiences typify the rapid evolution of the server over the past quarter-century, particularly since standardisation on Web clients shifted the focus of client/server computing to the server side. Now the undeniable workhorse of the information economy, the once-humble server — which not too long ago would simply have been the fastest desktop available, stuck in a closet to provide file and print management — has been redesigned, upgraded, clustered, consolidated, modularised and commoditised.

Along the way, it’s picked up many features that made the original Big Iron servers such timeless classics. For example, the shift to e-commerce — which created a business imperative for 24x7 availability — pushed the industry to offer redundant-everything designs. Hot-swappable fans, power supplies, hard drives and even processors were introduced so that problems could be fixed on the fly.

Mainframes were the inspiration for strict data storage management, tight user access control, high-granularity internal security, and high-speed (for the time) interconnections. Advanced resource management allowed for virtualisation of computing resources so that applications could run — and crash — independently of each other.

Although proprietary Unix vendors had the early lead in building industrial-strength systems, the server market’s biggest shift has been the rapid technological progression of commoditised Intel processor-based systems, which liberated companies from dependence on any one vendor’s processor architecture and made the operating system the differentiator instead.

That freed hardware makers to build systems as they saw fit, without carrying the technical burden of developing both operating systems and the hardware optimised to run them. The three main enterprise server operating systems — Windows, Sun Solaris and Linux — continue to make up the lion’s share of new server shipments, with Linux leading the pack as its open design and flexibility win over an increasing number of once-sceptical companies.

The move away from proprietary servers will continue to shape servers’ evolution, with many companies shifting their investments away from massive, vertically scalable systems towards horizontal clusters of smaller machines — and then providing virtual hosted services from those servers on an as-needed basis. This approach lets server investments more accurately track the staggered growth of most businesses, reducing the capital outlay needed to expand IT systems.

Customers’ increasing demand for manageable server growth has recently been addressed with the introduction of modular, chassis-based blade servers that can be expanded one interface card at a time. Those cards, which typically incorporate one to four processors as well as RAM and a hard drive, rein in server sprawl by concentrating computing power while reducing the complexity of management.

We are, in a sense, returning to the days of Big Iron — but with the flexibility and benefits of open systems thrown into the mix. And this is just as it should be, according to Matthew Boon, Asia-Pacific vice president of hardware and systems with Gartner.

“Organisations have faced the challenge of figuring out how they can manage so many systems,” Boon said. “While they’ve afforded benefits from a pure application point of view, they’re now struggling to manage those systems. While it might appear that we’re regressing back to this unwieldy sort of environment, today’s larger systems can be partitioned into many virtual systems. Even within consolidation we’re seeing a move towards systems which are much more versatile, that let you add and remove components much more easily.

"Once they’re in server farms they become much more cohesive, large systems.”
