Socket and see

Utility computing is pitched as an enterprise computing model that cuts operating costs as well as software and licensing expenses. It is generally understood as using a shared application within a shared (public or private) infrastructure; some companies dipping their toes into a form of utility computing today are using pay-as-you-go models for scalability and ease of management.

To Paul Strassmann, the former top IT manager at the Pentagon, utility computing -- in which users buy IT resources such as storage, compute cycles or application services as though they were electricity -- is nothing less than "the future of computing".

"The idea that each organization must invest in its own infrastructure, custom applications, unique data definitions, and location-specific software and hardware is economically not sustainable. Only medieval guilds are comparable to the structure of existing IT organizations."

David Moschella, global research director at CSC Research and Advisory Services in the US, says the utility model is already here. "It's what most consumers and small businesses already do as they use and sometimes even pay for Web-based or mobile services," he says.

Large businesses will follow suit, he says. "Too many IT organizations are currently drowning in low-value IT work," Moschella says. "Only by using more-standardized, [utility-style] services can companies redeploy their resources to focus on things that can deliver unique business value."

And running computers isn't one of those things, Harvard Business School professor Warren McFarlan says. "Data centres a decade from now will be gone. You'll be using a combination of application service providers and third parties for security and backup and so forth," he says. And, he says, an entire industry will spring up that's focused on very high reliability, redundancy and security.

In a survey of utility computing adoption trends in Australia, IDC found that, in late 2004, only 9 percent of local organizations were seriously considering the concept. Analysts predict widespread adoption within a decade, although there are examples of companies using software delivered as a service today.

IDC utility computing and outsourcing analyst Aprajita Sharma stopped short of describing utility computing as a form of outsourcing, saying it is instead a foundation for making the business more agile.

"The driver away from asset-based information technology is not the one-off cost savings." While utility computing is one model that lets companies invest in products and services as opposed to back-office activities, it is not an extension of outsourcing but the foundation for making business strategy more agile, Sharma said.

"Using software delivered as a service is where we see the market heading in six to seven years because companies can use a provider's infrastructure and see the cost savings over a short period of time. IT is still seen as a cost centre in companies, and the non-IT inclined see utility computing as removing the cost centre. However, considering utility computing should not be just about cost savings.

"Eventually we will see a service provider, like IBM, HP or EDS setting up a data centre with the infrastructure to act as either a private or public utility - they may even have a tie-up with a network service provider and I see an ecosystem where applications such as SAP or Oracle do not interact with the end user, but rather just sell directly into a service provider," Sharma said.

The application service provider "alternative" has become more affordable due to the greater availability and falling cost of bandwidth. Marty Gauvin, CEO of application hosting provider Hostworks, said companies approach it to provide software that is important but of a secondary nature to the business, in order to reduce the complexity of their own IT shop.

"Companies are primarily using application service providers as a way of reducing complexity. For instance, if you are an organization deciding to go towards using open source on Linux and want to run an application on windows, rather than doubling up on in-house IT skills you [have a service provider deliver it]. You can gain capability from a package without moving away from the core IT strategy," Gauvin said.

"By adopting the "software as a service (SaaS)" path it also demonstrates, in an enterprise environment, what really matters in your IT environment." According to Gauvin, the mass adoption and use of broadband has made a tremendous difference to the application service provider market. Gauvin said in the past the hosting cost was more than offset by the increased telecommunications costs, and now it is possible for businesses to get the bandwidth they need from a service provider "as ADSL and SDL is cheap and the idea of having a gigabit WAN connection to the other side of the country is astounding."

One example of software as a service in the enterprise is its use at the Commonwealth Bank of Australia. The bank has a high level of contract labour and, for its sourcing processes, recently adopted software designed by Cyberlynx and hosted by Hostworks.

This new sourcing system has an estimated saving of $8 million over three years. The bank specifically wanted to get software out of a plug in the wall that suited its needs and enabled a quicker time to market and a better relationship with contractors, Gauvin said.

A dream deferred

Paradigm shifts were easier before the bubble burst. Serious change costs serious money, and few IT organizations have gobs of green stuff to throw around anymore. So it's no surprise that utility computing -- hailed as the biggest paradigm shift since the first disk drive spun up -- has stalled. It doesn't help that the marketing geniuses who came up with the concept still can't agree on what it means. There are three basic definitions.

Utility as an on-demand computing resource: Also called "adaptive computing", depending on which analyst or vendor you talk to, on-demand computing allows companies to outsource significant portions of their data centres, and even ratchet resource requirements up and down quickly and easily depending on need. For those of us with grey whiskers in our beards, it's easiest to think of it as very smart, flexible hosting.

Utility as the organic data centre: This is the pinnacle of utility computing and refers to a new architecture that employs a variety of technologies to enable data centres to respond immediately to business needs, market changes, or customer requirements.

Data centres not only respond immediately, but nearly effortlessly, requiring significantly fewer IT staff than traditional data centre designs.

Utility as grid computing, virtualization, or smart clusters: These are examples of specific technologies designed to enable the above definitions. Other technologies that will play here include utility storage, private high-speed WAN connections, local CPU interconnect technologies (such as InfiniBand), blade servers, and more.

These three descriptions are different enough to seem unrelated, but in fact they're dependent on each other for survival. If utility computing is ever to live up to its name -- a resource you plug in to, as you would the electric power grid -- then that resource must be distributed, self-managing, and virtualized. Whether that grand vision will ever be realized is an open question, but at least some of the enabling technologies are already here or on the horizon.

The on-demand adaptive buzzword enterprise

The on-demand version of utility computing is the one closest to fruition. Vendors such as Dell, EMC, Hewlett-Packard, IBM, and Sun have been selling it for some time. This year Sun Microsystems has been the noisiest of the bunch, recently announcing that it wants to be the electric company of off-site computing cycles.

"Sun has decided to take utility to a whole new level," says Aisling MacRunnels, Sun Microsystems' vice president of marketing for utility computing. "We're building the Sun Grid to be easy to use, scalable, and governed by metered pricing. We're also incorporating a multi-tenant model that allows us to provide a different scale of economy by pushing spare CPU cycles to other customers."

The Sun Grid comprises several regional computing centres (six throughout the US, so far), each running an increasing number of computing clusters based on Sun's N1 Grid technology.

Sun wants to cut through utility computing's confusion and attract customers. Yet Sun also has a few chasms to cross, which is why the Sun Grid still isn't commercially available. "The goal once we're out there is to be able to give additional CPU resources to our customers immediately," MacRunnels says. "That's a big challenge for us. Right now we know we're not yet commercially viable, which is why we're only chasing specific application markets. We need to walk before we run."

Charles King, president and principal analyst at market research firm Pund-It, has a rather cynical take on Sun's offering. "What Sun is selling isn't really new; it's been offered by IBM and HP for several years. Sun has simply gotten more specific and done what it does very well, which is simplify something highly complex with a great marketing slogan."

Most analysts agree that IBM leads the field in offering utility-based services to clients of its On Demand and Global Services departments. "Other companies are wrapped up in the whole notion of access to compute power," states Dave Turek, vice president of deep computing at IBM. "But computing power comes in many forms, including not just grids and virtualization, but also more standard forms of hosting. It depends entirely on customer needs, and these change quickly."

According to Turek, IBM's On Demand service is all about providing solutions tailored to individual requirements. "Utility should be a base kind of service just like water or electricity. But where those services are rigid, On Demand's intrinsic value needs to be wrapped up in customer need, and that means exceptional flexibility."

HP agrees. It has branded its service the Adaptive Enterprise, but it touts the same organic message: an IT infrastructure that responds to changing business requirements. "We've made an announcement on our grid strategy," says Russ Daniels, vice president and CTO of HP's software and adaptive enterprise unit, "but that's really a specialized application. We feel utility computing refers to technology applied to business process." Today, HP has customers accessing its resources for increased computing power similar to the Sun Grid, but like IBM, it also places consulting, traditional hosting, and even several on-site products under its utility umbrella.

The attractions of utility

Most customers understand the benefits of flexible hosting. But what of the organic, virtualized, self-managing data centre -- assuming it can be achieved? Forrester sees the grand concept of utility computing as a solution for three key problems: wasteful technology purchases, unnecessarily laborious IT processes, and rigid IT capabilities that by definition paralyze business processes. Nail those three, and a company can get a lot more out of its existing resources. The initial investment in provisioning and virtualization eventually justifies itself by reducing capital expenditures, slowing the growth of IT staff, and providing the business with new agility.

Ultimately, a company could run multiple workloads on fewer machines in fewer data centres, and accomplish this through the use of multisystem architectures such as blade-based systems, clusters, or grids. That's only one example, of course. Combining that hardware with a reduced number of platform architectures means faster processing, faster reaction time, and less staff training. Such consolidation isn't a plug-and-play decision, however, but a gradual process that involves evaluating every technology purchase.

"This is really customer-dependent," says Ken Knotts, senior technologist at ClearCube, a blade workstation and grid computing vendor. ClearCube is an excellent example of a utility-oriented product offering, because the company makes a blade-based workstation system. By pulling workstations back onto a central blade backplane, ClearCube's utility-style blade system is in a position to meet a variety of challenges that traditional workstations can't easily handle.

"Because we can reprovision a blade from scratch, drop a user's personal data and settings on it within 10 minutes or less, we're in a position to save customers loads of money on large IT support staffs," Knotts says. The company can also extend its functionality across the WAN. One customer uses the ClearCube system on a LAN during the day for US developers and then opens those workstations at night to developers in India.

Getting on the grid

Grids provide a perfect entry into the utility-computing space because they follow the golden rule of offering more for less: namely, the power of a supercomputer for the price of a few workstations. They offer unheard-of flexibility and they don't require you to rip out existing infrastructure. And these benefits extend to outsourcers as well as those running grids in-house.

Don Becker, CTO of Penguin Computing, a manufacturer of Linux-based grid solutions, offers a succinct definition of grid computing. "A grid cluster is a collection of independent machines connected together by a private network with a specific software layer on top," Becker says. "This software layer has to make the entire cluster look like a single computing resource."
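
The software layer Becker describes is, at its heart, a scheduler that fronts a pool of machines so the operator never deals with individual nodes. The following is a rough, hypothetical Python sketch only -- not Penguin's software, and with threads standing in for physical nodes -- of what that single-resource facade amounts to:

    import queue
    import threading

    class MiniGrid:
        """Toy facade over N worker 'nodes': callers submit work to one object."""
        def __init__(self, n_nodes: int):
            self.jobs: queue.Queue = queue.Queue()
            self.results: queue.Queue = queue.Queue()
            for _ in range(n_nodes):
                threading.Thread(target=self._node_loop, daemon=True).start()

        def _node_loop(self) -> None:
            while True:                       # each "node" pulls the next available job
                func, arg = self.jobs.get()
                self.results.put((arg, func(arg)))
                self.jobs.task_done()

        def submit(self, func, arg) -> None:
            self.jobs.put((func, arg))        # the operator never picks a node

        def collect(self) -> dict:
            self.jobs.join()                  # wait for every queued job to finish
            out = {}
            while not self.results.empty():
                arg, value = self.results.get()
                out[arg] = value
            return out

    grid = MiniGrid(n_nodes=4)
    for x in range(10):
        grid.submit(lambda n: n * n, x)
    print(grid.collect())                     # reads as if one machine did all the work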

A master node controls a varying number of such processing nodes with the ultimate goal being that, to the operator of the master node, the entire ensemble looks like a single processing unit. The most common example of a grid in action is that of the suddenly stressed Web server.

"E-tailers, for example," Pund-It's King says, "have 30 percent of their business happening between January and October and 70 percent occurring between October and December because of holiday sales." If the e-tailer is running a grid, the master node administrator can simply spawn off several more virtualizations of Apache in early October, and thus handle the additional traffic. Even better, he can do it all in a few minutes or even schedule it to happen automatically based on a performance policy.

Although the standards for hardware grid management are evolving rapidly, they're still missing a critical component. "One of the big challenges in running software in any grid environment amounts to reorganizing your software," says Brian Chee, a senior programmer on a 90-node utility cluster being built for the bioinformatics department at the University of Hawaii. "The problem needs to be divided up into chunks and assigned to each processing node, and the transfers of data and results need to be organized synchronously or asynchronously. When you're linking two grids, the problem gets divided into two, sent to each grid, and is there again subdivided onto those grids' nodes. Results are reassembled the same way."
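
The division and reassembly Chee describes is essentially a scatter/gather pattern. Here is a minimal sketch of that pattern, with a Python process pool on a single machine standing in for grid nodes; a real grid scheduler would perform the same split, distribute and reassemble steps over the network.

    from multiprocessing import Pool

    def process_chunk(chunk: list) -> int:
        """Stand-in for the real per-node computation (here: just sum the chunk)."""
        return sum(chunk)

    def split(data: list, n_chunks: int) -> list:
        """Divide the problem into roughly equal chunks, one per processing node."""
        size = max(1, len(data) // n_chunks)
        return [data[i:i + size] for i in range(0, len(data), size)]

    if __name__ == "__main__":
        data = list(range(1_000_000))
        chunks = split(data, n_chunks=8)                 # divide the problem up
        with Pool(processes=8) as pool:                  # the "nodes"
            partials = pool.map(process_chunk, chunks)   # scatter and compute
        print(sum(partials))                             # gather and reassemble the results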

So how does IT plan for a migration to the utility model? "Start by understanding your application diversity," Penguin's Becker advises. "What runs on what? This is important, as you'll need a management solution that works for each platform." He also advises moving to a standard hardware platform, the Intel/AMD model being his favourite, for obvious reasons. "Finally, look to move to a single operating platform," he says. "Presently, Unix is the system of choice for all things utility, as you simply have more options under Unix than you do Windows."
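
Becker's first step, cataloguing what runs on what, can begin as nothing more elaborate than an inventory summarized by platform; every distinct platform that falls out of it is either something the management layer must cover or a candidate for consolidation. A small illustrative sketch, with made-up application names:

    from collections import Counter

    # Hypothetical inventory: application name -> the platform it runs on.
    inventory = {
        "order-entry":  {"os": "Windows Server 2003", "arch": "x86"},
        "billing":      {"os": "Solaris 9",           "arch": "SPARC"},
        "web-frontend": {"os": "Linux",               "arch": "x86"},
        "reporting":    {"os": "Linux",               "arch": "x86"},
    }

    platforms = Counter((app["os"], app["arch"]) for app in inventory.values())
    for (os_name, arch), count in platforms.most_common():
        print(f"{count} application(s) on {os_name}/{arch}")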

Within this framework, begin evaluating all new technology purchases with utility goals in mind. "Don't just look at a single vendor's commitment to utility," King says. "Make sure that every vendor you work with from now on can support as much of your infrastructure as possible." Each technology player should be evaluated against a utility goal that reflects an organization's unique combination of business needs.

Although software products such as Oracle 10g are still evolving, the hardware platforms are maturing rapidly. But even without specific software support, products such as Knotts' ClearCube have plenty of benefit to offer all by themselves, enabling IT managers to begin evaluating a move to a utility-based data centre today.

"Sure, there are still important tools missing," Forrester's Gillett says. "But the cost benefits of this architecture are simply too compelling to ignore."
