The realities of utility

Attention data center staffers: Utility computing is coming, but don't start planning your retirement just yet. In the utility computing dream, compute resources flow like electricity on demand from virtual utilities around the globe -- dynamically provisioned and scaled, self-healing, secure, always on, efficiently metered and priced on a pay-as-you-go basis, highly available and easy to manage. Using the latest clusters, grids, blades, fabrics, and other cool-sounding technologies, enterprises will plug in, turn on, outsource, and save big bucks on IT equipment and staff. They won't care where their J2EE (Java 2 Enterprise Edition) or Microsoft Corp.'s .Net resources live anymore.

In an era of server proliferation and underutilization as well as rising complexity and management costs, it sounds perfect. Heavyweights such as IBM Corp., Hewlett-Packard Co., Sun Microsystems Inc., and Compaq Computer Corp. have all lined up behind it. C-level execs love the concept. There's just one problem -- the technology hasn't caught up to the vision. Here's why.

Reality #1: The technology's not there yet for external utilities

Just as ASPs (application service providers) and SSPs (storage service providers) got off to a slow start, so will the external computing utility. Recent announcements of so-called "utility" outsourcing deals (such as IBM's US$4 billion deal with American Express) are just old wine in a new wineskin: a new pricing model wrapped around the same old assets and on-site management services.

Instead, utility computing will start inside the firewall, enabling IT departments to offer utility-style services to business units, such as dynamic and scalable resource provisioning, allocation, monitoring, and per-unit billing. Intracompany utility services will start with individual server clusters and broaden to the datacenter and perhaps the campus as the point of control. Companies will use the dynamic provisioning software available today from vendors to speed deployment and cut management costs in the datacenter. It'll be like rolling out SANs (storage area networks) all over again, only this time providing a virtualization layer for servers.
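The per-unit billing piece of such intracompany services can be sketched very simply. The following Python fragment is an illustrative assumption, not any vendor's product: it aggregates hypothetical (business-unit, CPU-hours) usage samples and applies a flat, invented chargeback rate.

```python
# Minimal sketch of per-unit chargeback metering for an internal
# compute utility. The rate, the sample format, and the function
# names are assumptions made for illustration only.
from collections import defaultdict

RATE_PER_CPU_HOUR = 0.42  # assumed internal chargeback rate, in dollars


def bill(usage_samples):
    """Aggregate (business_unit, cpu_hours) samples into per-unit charges."""
    totals = defaultdict(float)
    for unit, cpu_hours in usage_samples:
        totals[unit] += cpu_hours
    return {unit: round(hours * RATE_PER_CPU_HOUR, 2)
            for unit, hours in totals.items()}


samples = [("finance", 10.0), ("marketing", 4.0), ("finance", 2.5)]
print(bill(samples))  # → {'finance': 5.25, 'marketing': 1.68}
```

In a real deployment the samples would come from the provisioning layer's monitoring feed, and the rate would likely vary by resource class -- but the aggregate-and-price shape stays the same.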

Why not external utilities? Forget the fact that IT departments want to control their own infrastructure, or that ISVs don't want to sell pay-as-you-go licenses to outsourcers. The real issue is that the provisioning, scheduling, security, and policy-based management protocols needed to farm out compute jobs are barely on the drawing board, in the form of Web services and open standards grid computing proposals such as Globus's OGSA (Open Grid Services Architecture). It will be years before the infrastructure's there to get enterprises comfortable buying "utility" cycles for mission-critical apps.

Reality #2: Utility is hard to do in heterogeneous datacenters

In the utility dream, one virtualization layer handles all your compute, storage, and network resources, regardless of vendor and OS. There's one management system, with one console and a GUI where you can provision, track, and manage a new topology with just a few clicks. There's a workflow engine to enforce dependencies, and to enable back-out of compute jobs or deallocation of resources.

This dream will unfold slowly because it threatens to commoditize the hardware vendors. Sun (N1), HP (Utility Data Center), IBM (eLiza, e-sourcing), and Compaq (Adaptive Infrastructure) have all introduced their own utility programs and provisioning management software. Although they pay lip service to heterogeneous environments, and in some cases support multiple flavors of Unix or Linux, these grid, clustering, pay-per-processor, and dynamic provisioning offerings are clearly optimized around their own platforms and tied closely to their existing management products. There's real value here if you're a big, single-vendor shop, but the rest of us will have to settle for half a loaf.

Enter startups such as Terraspring, Ejasent, and Jareva, and MSPs such as Loudcloud and Exodus, which are building virtualization software to support dynamically scalable, heterogeneous computing utilities. "It's a huge technology problem," says Ashar Aziz, Terraspring's CTO, who says that the company's 1 million lines of code include "the moral equivalent of a device driver" for every hardware platform, and SAN-based boot control leveraging SCSI infrastructure, to avoid having to get inside a kernel or any specific operating environment. Aside from standards such as SNMP for basic management and XML for storing topologies, these vendors don't have much foundation to build on. Whether such offerings can thrive will depend on how much benefit enterprises can derive from the partial solutions of the larger vendors.

Reality #3: Utility works better for some applications than others

In theory, utility computing should support the scaling of almost any kind of application, using dynamically allocated resources. But in practice, there are some thorny problems involved in divvying up computing tasks among dynamic resources -- problems that application vendors will need to step in to help solve.

Today's business applications often run on dedicated (and therefore underutilized) infrastructure, one app per machine. The utility computing model requires technology for "problem division," or figuring out how to efficiently run apps using cycles from multiple CPUs and hardware platforms. In the case of vertical scaling (on a single box), this technology has been around a long time in the form of logical partitions such as IBM's LPAR or Sun's Dynamic System Domains.

In the case of horizontal scaling, however, not all apps are created equal. Technical computing applications such as DNA sequencing can be easily divvied up and farmed out to multiple machines. Likewise, Web and app serving in an n-tier architecture, or other apps where all the instances are alike, such as e-mail, are conducive to problem division as long as demand is load-balanced among instances to ensure consistent performance. But core back-office and custom applications are harder to scale horizontally, and therefore won't fit as well into a utility model.
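The DNA-sequencing case above is the easy end of the spectrum, and a small sketch shows why: the work splits into independent chunks whose results simply add up. In this hedged Python illustration, `multiprocessing` stands in for a utility's scheduler farming chunks out to machines; the function names and the motif-counting task are invented for the example.

```python
# Illustrative "problem division" for an embarrassingly parallel job:
# a DNA sequence is split into chunks, independent workers count a
# motif in each chunk, and the partial counts are summed. A local
# process pool stands in for remote machines in a compute utility.
from multiprocessing import Pool


def count_motif(args):
    chunk, motif = args
    # Count occurrences of the motif within one chunk.
    return sum(chunk[i:i + len(motif)] == motif
               for i in range(len(chunk) - len(motif) + 1))


def parallel_count(sequence, motif, n_workers=4):
    # Overlap adjacent chunks by len(motif) - 1 characters so matches
    # spanning a chunk boundary are not lost; the chunk layout ensures
    # each match start position is counted exactly once.
    size = max(1, len(sequence) // n_workers)
    chunks = [sequence[i:i + size + len(motif) - 1]
              for i in range(0, len(sequence), size)]
    with Pool(n_workers) as pool:
        return sum(pool.map(count_motif, [(c, motif) for c in chunks]))


if __name__ == "__main__":
    print(parallel_count("GATTACAGATTACA" * 100, "GATTACA"))  # → 200
```

An Oracle instance or a custom back-office app has no such clean decomposition -- state is shared, transactions span the whole dataset -- which is exactly the contrast the vendors quoted below are drawing.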

"You can't problem-divide Oracle," says Rajeev Bharadhwaj, Ejasent's CTO, who adds that apps with huge amounts of application logic, multiple data stores, and complex algorithms are also problematic. Most enterprise applications, unlike the scientific apps that have been run on grids in the academic community, simply aren't designed to be run in resource-sharing environments. "I can't go and slice up a [BEA] WebLogic server among five customers because I wouldn't know how to meter and bill it," says In Sik Rhee, Loudcloud's co-founder. "The standards just aren't there."

Reality #4: Utility security is a work in progress

Even inside the firewall, enterprises won't adopt utility models until they're confident the shared-resource model is secure. "As you virtualize this environment for me, how do I know that nobody else is going to be able to see my data?" asks Vijay Rathnam, Exodus' director of products and technology.

Traditional security is based on creating DMZs between boxes, and controlling root access to those boxes. But in the utility world the DMZs must be created between application instances. HP, for example, is using port-based VLANs (virtual LANs), enforcing security at the packet and switch level. "Only the traffic from a particular service is viewable by that service," says Nigel Cook, chief architect at HP's Utility Data Center. Terraspring and Ejasent are both working on creating fabric-based partitions and barriers that even someone with root access can't breach -- barriers that work, for example, by hiding process IDs and other identifiers. But it is unclear whether these methods will satisfy security-focused IT professionals.

It's still the early days of utility computing. The promise is great, but there are few true utility deployments and thorny problems lie ahead. Standards must be developed for the entire virtualization layer, including provisioning, management, security, performance, measurement, and billing. Furthermore, application vendors must get into the act to enable enterprise apps to run on compute resources shared by multiple customers, and hardware vendors must provide more support for one another's platforms. Expect incremental advances rather than breakthroughs. Computing is not as easy as electricity.


Utility computing

Executive Summary: Utility computing promises to transform enterprise infrastructures by providing on-demand, pay-as-you-go, dynamically provisioned and scalable, easy-to-manage commodity compute resources. IT decision-makers should be aware, however, that key pieces of this technology puzzle are still missing or underdeveloped, and must design their utility computing efforts accordingly.

Test Center Perspective: Despite real progress in technologies for dynamic provisioning, server clustering, and grid computing, utility technology today is best suited for deployment inside the firewall, in single-vendor environments, and for applications that can scale horizontally and don't require tight security. Additional standards and protocols must be developed, with involvement from ISVs, to enable utility computing to reach its broader potential.
