An industry blueprint showing how software, hardware and standards work together in autonomic computing systems is expected to be discussed during IBM Corp.'s developerWorks Live conference this week in New Orleans and will also be posted on the company's Web site.
The company on Friday detailed four technologies it has developed for autonomic computing, its term for computing systems that adjust to workloads, identify and predict problems, make changes to address them, and require less human intervention than today's prevailing systems.
The blueprint is meant to show customers and developers how to assemble products from a range of vendors to create a "self-managing" autonomic system, IBM executives said.
"The industry really needs an overall approach to the broad-based goals we've described in autonomic computing," said Alan Ganek, IBM vice president for autonomic computing. "You have to have the blueprint for how all of this comes together. No one vendor can provide all of the pieces, so our customers need to see how all of the pieces fit together."
The blueprint represents a "major advance" for developers, he said. It describes what IBM calls "the control loop," which monitors, analyzes and makes changes in autonomic systems, connecting the possible components within such a system: software applications, servers, storage, databases and middleware.
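The control loop idea can be illustrated with a short sketch. This is a hypothetical monitor-analyze-act cycle in Python, not IBM's actual blueprint interfaces; all function and parameter names here are invented for illustration.

```python
# Illustrative sketch of a generic autonomic "control loop" step:
# monitor a metric, analyze it against a policy, and act if needed.
# Names are hypothetical, not drawn from any IBM API.

def control_loop_step(read_metric, policy_threshold, corrective_action):
    """One pass of a monitor-analyze-act cycle.

    Returns True if corrective action was taken, False otherwise."""
    value = read_metric()                  # monitor: sample the system
    if value > policy_threshold:           # analyze: compare to policy
        corrective_action(value)           # act: make a change
        return True
    return False

# Usage: pretend the metric is response time in milliseconds.
actions = []
triggered = control_loop_step(
    read_metric=lambda: 850,               # simulated slow response
    policy_threshold=500,                  # administrator's objective
    corrective_action=lambda v: actions.append(f"add capacity (rt={v}ms)"),
)
```

In a real autonomic system the loop would run continuously, and the "act" step would feed back into what the next "monitor" step observes.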
The four specific technologies IBM also announced are:
-- Log and Trace tool for problem determination, which helps autonomic systems move from identifying problems to debugging applications and middleware;
-- ABLE (Agent Building and Learning Environment) Rules Engine for Complex Analysis, which uses a set of algorithms that allow intelligent agents to capture data and predict future steps based on system experience;
-- Monitoring Engine, available now in IBM Tivoli Monitoring, which enables root-cause analysis of IT failures, server-level correlation across multiple IT systems and automated corrective measures;
-- Business Workload Management for Heterogeneous Environments, due in IBM Tivoli Monitoring for Transaction Performance version 5.2 in the second half of this year, which uses the ARM (Application Response Measurement) standard to determine why bottlenecks happen, drawing on response-time measurement, reporting on transaction-processing segments and learning how transactions flow through servers and middleware. The software then adjusts resources to avoid bottlenecks.
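The ARM approach amounts to instrumenting each unit of work with start and stop calls so a monitor can attribute response time to individual transaction segments. The sketch below captures that pattern in plain Python; the class and method names are invented for illustration and are not the real ARM API.

```python
# Hypothetical sketch of ARM-style transaction timing: wrap a unit of
# work so a monitor can record response time per named transaction.
import time

class TransactionTimer:
    """Records elapsed wall-clock time for named units of work."""

    def __init__(self):
        self.measurements = {}  # transaction name -> list of durations (s)

    def measure(self, name, work):
        """Time a callable and file the duration under its transaction name."""
        start = time.perf_counter()
        result = work()
        elapsed = time.perf_counter() - start
        self.measurements.setdefault(name, []).append(elapsed)
        return result

# Usage: time a simulated "checkout" transaction.
timer = TransactionTimer()
total = timer.measure("checkout", lambda: sum(range(1000)))
```

With per-segment timings like these, a monitor can compare observed response times against objectives and pinpoint which segment of a transaction's path through servers and middleware is the bottleneck.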
Figuring out how and why problems occur is a major headache for IT staff, so IBM is focusing this round of autonomic announcements on what it calls "problem determination." With existing computer systems, IT staff are left to identify problems and find a fix for them after the fact, say, following a sudden and unexpected surge in server workload.
"You're behind the game and you're recovering from a bad situation" under that system, Ganek said.
Using autonomic computing, IT staff would establish performance objectives for the systems they oversee. In the case of an e-business infrastructure behind a Web site, "one of the key things you want to monitor is what is the response time and what is the Web traffic coming in," said Ric Telford, IBM director of architecture for autonomic computing, to give one example. "If the response time slips below a performance level based on a large volume of transactions coming in, that's something you need to take action on."
In the autonomic world, the computer system will do that without human intervention, before the spike occurs and response time slows. It will automatically add more servers to handle the workload.
Besides the ARM standard, the blueprint includes the emerging OGSA (Open Grid Services Architecture) standard. IBM is one of the vendors working on that standard.
More information on the blueprint will be available at http://www.ibm.com/autonomic. DeveloperWorks Live runs from April 9-12 in New Orleans.