System complexity issues raised at IBM conference

Conference panelists paint a dire picture of IT systems taking on more and more complexity

Technology experts at an IBM event Wednesday acknowledged the problem of growing IT complexity, noting causes and consequences but also suggesting approaches to lighten the burden.

Hailing from different segments of IT, panelists at IBM's "Navigating Complexity: Doing more with less" conference in San Jose, California, painted a dire picture of IT systems taking on more and more complexity. Panelist Harrick Vin, a vice president at Tata Consultancy Services, noted that IT shops must deal with many problems, such as security compliance, root cause analysis, and overlapping functions.

"Unfortunately, dealing with these classes of problems is becoming harder and harder over time," said Vin, who also is a computer sciences professor at the University of Texas at Austin. He cited as an example of complexity a top-tier bank with more than 30,000 servers and 200,000 desktops.

Compounding the situation is the fact that different people deal with different parts of the overall problem in isolation, he said. "Essentially, what happens is we only have a silo-based understanding of what is going on," Vin said.

Complexity has arisen from constant evolution, he said. Operating systems, applications, and workload types and volumes keep changing. "The requirements that users impose on these systems also continue to change," said Vin.

Systems must constantly adapt to changes, he said. "The state of the art really is reactive firefighting," Vin said.

IT has lost control over systems, and there is a lack of agility, he said.

Panelist Peter Neumann of SRI International's Computer Science Lab said old mistakes keep being repeated even though problems such as the buffer overflow were solved decades ago.

"The problem is that we keep going through the same problems over and over and over again," Neumann said.

The Multics platform fixed the buffer overflow problem in 1965, but that lesson is widely ignored, he said. Meanwhile, developer tools that could help are going largely unused.
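For readers unfamiliar with the flaw Neumann refers to, here is a minimal C sketch (illustrative only, not from the panel) of the kind of unchecked copy that causes a buffer overflow, alongside a bounded alternative:

```c
#include <stdio.h>
#include <string.h>

/* Classic stack buffer overflow: strcpy() keeps writing past the end
 * of 'name' whenever the input exceeds 15 characters plus the
 * terminating NUL, corrupting adjacent stack memory. */
void vulnerable(const char *input) {
    char name[16];
    strcpy(name, input);              /* no bounds check: can overflow */
    printf("Hello, %s\n", name);
}

/* Bounded alternative: snprintf() truncates to the buffer size
 * instead of overflowing it. */
void safer(const char *input) {
    char name[16];
    snprintf(name, sizeof name, "%s", input);
    printf("Hello, %s\n", name);
}
```

The pattern on the left has been the root cause of exploits for decades, which is precisely Neumann's point: the defect class was understood, and designed out of Multics, long before it became endemic.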

Single points of failure have presented serious problems, such as with the collapse of ARPAnet in 1980, said Neumann.

"What we really need is some sort of approach to complexity that talks about sound requirements," and features predictability and good software practices, he said.

Complexity can be managed, and principled, composable architectures are needed, Neumann said. He cited examples of what he called principled systems: Multics, with its ring structure; PSOS (Provably Secure Operating System), featuring security; and the SeaView multilevel secure database management system.

"Foresight has enormous payoffs, and it's inherently missing in a lot of the work that's being done," said Neumann.

Panelist Alfred Spector, a technology consultant, stressed the lack of understanding about complexity as a key issue and asked what can be done to change computer science to address it.

Spector cited RosettaNet as an example of a successful technology that has nonetheless become too complex. "Is it sensible that their purchase order has 551 XML fields?" he asked.

IBM, for its part, has used its autonomic computing initiative to place one complicated system in charge of watching another, an approach Spector said might be correct.
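As a rough illustration of the "one system watching another" idea (a sketch under simple assumptions, not IBM's autonomic stack), a supervisor process can monitor a managed service and restart it when it dies; the binary name "./managed-service" below is hypothetical:

```c
#include <stdio.h>
#include <stdlib.h>
#include <sys/wait.h>
#include <unistd.h>

/* Minimal supervisor: fork the managed service, block until it
 * exits, then restart it after a short back-off. */
int main(void) {
    for (;;) {
        pid_t pid = fork();
        if (pid < 0) { perror("fork"); return 1; }
        if (pid == 0) {
            /* Child: replace this process image with the service. */
            execl("./managed-service", "managed-service", (char *)NULL);
            perror("execl");          /* reached only if exec fails */
            _exit(127);
        }
        int status;
        waitpid(pid, &status, 0);     /* wait for the child to exit */
        fprintf(stderr, "service exited (status %d); restarting\n",
                status);
        sleep(1);                     /* brief back-off before restart */
    }
}
```

Real autonomic systems go far beyond restart-on-failure, adding self-configuration, self-optimization, and self-healing policies, but the watcher/watched split is the same basic shape.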

He suggested establishing objectives for system design and pursuing interdisciplinary research.
