The idea behind the technology known as autonomic computing is that corporate resources such as PCs, servers and software will take care of themselves - handle configuration, identify and fix ailments, allocate and optimize resources, and protect themselves from harm. The theory is that the more components can manage themselves, the lighter the burden on IT staff.
Last week IBM re-emphasized how much it believes in the technology by forming an autonomic computing division dedicated to expediting the addition of self-managing and self-correcting features throughout its products and services. It's a step in the right direction, analysts say. But some users remain skeptical about the prospect of self-managing IT gear.
Autonomic computing is not new to IBM. The company announced its eLiza computing initiative in April 2001, and already some self-managing features are built into IBM products, such as its Tivoli management line and forthcoming DB2 Version 8 database software.
IBM has not disclosed its investment in autonomic computing, but analysts estimate the company is spending more than US$500 million per year.
Nor is Big Blue alone in pursuing self-healing computing efforts. Sun last month shed some light on its touted N1 initiative to ease network management. Its first N1 deliverables will include software that helps group servers and storage hardware for centralized management, followed next year by tools for provisioning application resources, Sun says. For its part, Hewlett-Packard has its Utility Data Center architecture.
But the creation of a dedicated autonomic computing unit suggests IBM is stepping up its efforts.
The fact that IBM has established a division devoted to autonomic computing and made someone responsible for strategy is telling, says analyst Jasmine Noel, principal of JNoel Associates. IBM can now ensure that different groups are working toward the same goals - for example, by aligning the Tivoli staff and the hardware teams that are both developing management software.
Alan Ganek, former vice president of strategy for IBM Research, will lead the new autonomic computing unit. It will coordinate research and development efforts among IBM's hardware, software and services teams working to devise smarter computing systems. The effort will include design centers where customers can develop and test autonomic technologies, IBM says.
Drake Emko, a computer programmer at the University of Florida's Northeast Regional Data Center in Gainesville, says some aspects of autonomic computing seem practical, but not all.
"I think autonomy is a good idea for certain things, such as rerouting network traffic to increase availability," Emko says. Self-configuring and self-optimizing systems "are achievable, at least to a certain extent, and could save administrators countless hours of grunt work," he says.
But he's more skeptical of the self-healing and self-protecting goals of autonomic computing. "I'm not confident that autonomy in fundamentally unpredictable fields such as security and bug fixing are feasible goals," he says. It's hard to imagine an autonomic solution that can foresee all the problems that might occur in a system and protect against all types of attacks, Emko says.
Ruslan Zenin, senior system architect at UBS Bank in Ontario, echoes those sentiments.
"It looks perfect in theory," Zenin says. "However, when we jump back to reality we have to deal with many implementation-specific 'small problems' [that] might grow into monsters that could turn into showstoppers."
If the vision of autonomic computing were realized, Emko worries about the false sense of security it would give administrators and managers. "If a system can configure, run and maintain itself, administrators will have less incentive to learn the system in depth," Emko says.