For some time, we have been losing the battle against those who would damage our computer systems. That's because computers are increasingly interconnected and the software they run is more complex. Both factors increase vulnerability to infection and intrusion.
Security measures haven't kept up because they have tended to focus on prevention -- antivirus software and firewalls are all geared toward blocking damage, not repairing it. And they are not all that good at detection because they are generally programmed to recognize known threats, not new ones.
"We've been riding the coattails of 1970s ideas, and the weaknesses are obvious to everybody," says David Patterson, president of the Association for Computing Machinery. "Security problems are glaring."
But experimental prototypes and a few commercial products are beginning to overcome the limitations of these 1970s ideas. Some of them can detect malware and intrusions without relying on hard-coded definitions or known behavior patterns. Others assume that bad things will happen regardless and instead attempt to limit damage and keep systems running.
Detection and prevention
Sana Security sells intrusion-prevention software patterned after biological immune systems. Its Primary Response product uses software agents to build a profile of an application's normal behavior based on the code paths of a running program. It then watches the program's execution for deviations from that norm. It requires no predetermined signatures or policy rules.
The software stops anomalous behavior by blocking the execution of system calls. Because the software continually learns, Sana says, it can recognize and allow legitimate code changes. That minimizes false positives, which can be a major drawback of these kinds of security tools.
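In outline, that kind of profiling resembles the system-call sequence analysis pioneered in immune-system-inspired research: learn which short sequences of calls a program normally emits, then flag sequences never seen in training. The sketch below is purely illustrative -- the class, the n-gram length and the call names are invented, not Sana's actual implementation.

```python
from collections import deque

class SyscallProfile:
    """Toy anomaly detector: learn the set of short system-call
    sequences (n-grams) an application normally produces, then
    flag any sequence that was never observed during training."""

    def __init__(self, n=3):
        self.n = n
        self.normal = set()            # n-grams seen during training
        self.window = deque(maxlen=n)  # most recent calls at runtime

    def train(self, syscalls):
        window = deque(maxlen=self.n)
        for call in syscalls:
            window.append(call)
            if len(window) == self.n:
                self.normal.add(tuple(window))

    def observe(self, call):
        """Return True if the latest n-gram deviates from the profile."""
        self.window.append(call)
        if len(self.window) < self.n:
            return False
        return tuple(self.window) not in self.normal

profile = SyscallProfile(n=3)
profile.train(["open", "read", "write", "close",
               "open", "read", "write", "close"])

# Normal traffic passes; an unseen sequence ending in "exec" is flagged.
alerts = [profile.observe(c) for c in ["open", "read", "exec"]]
```

A real product would also have to retrain as legitimate code changes, which is the learning behavior Sana claims reduces false positives.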
Sana's technology has its roots at the University of New Mexico, where researchers have developed something of a specialty in "resilient and adaptive computing". For example, they are working on Randomized Instruction Set Emulation, or RISE, which is based on the notion that diversity in code is a good thing. The same is true in biology: Resistance to disease is greater in wild plants, where there is much genetic diversity, than in cultivated ones, where there is much more homogeneity.
RISE makes each system unique by randomly varying some code so that for an attack to spread, it would have to be modified for each computer. Some machine code is "randomized" at the time a process is initiated and then "de-randomized" when it is fetched for execution. In the meantime, malicious code would find the target code unrecognizable.
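The core trick can be illustrated with a simple reversible transform. The sketch below uses XOR with a per-process key as a stand-in for RISE's randomization; the real system operates on machine instructions inside an emulator, and every name here is illustrative.

```python
import os

def randomize(code: bytes, key: bytes) -> bytes:
    """Scramble code bytes with a per-process key so that injected
    code, which never went through this step, decodes to garbage."""
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(code))

def derandomize(scrambled: bytes, key: bytes) -> bytes:
    # XOR is its own inverse, so the same transform restores the code.
    return randomize(scrambled, key)

key = os.urandom(16)              # unique to this process
legit = b"\x55\x48\x89\xe5\xc3"   # a legitimate code fragment
stored = randomize(legit, key)    # the form that sits in memory

restored = derandomize(stored, key)   # applied at fetch time
# restored == legit, so normal execution proceeds. Attacker-injected
# bytes were never randomized, so de-randomizing them at fetch time
# yields meaningless instructions rather than the intended payload.
```

Because the key differs on every machine, an attack that works on one computer would have to be re-crafted for each target -- the diversity argument in miniature.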
But IT managers don't have to wait for RISE to be commercialized to get some benefits of diversity, says Patterson, who is also a computer science professor at the University of California, Berkeley. "More than one computer company makes computers, and more than one company makes operating systems," he says. "Cost of ownership is less when everything is identical, but your vulnerability to attack is greater."
Computer security experts have come to recognize that no affordable combination of protections can keep a system completely safe all the time. So they are focusing on how to make attacks less damaging while keeping systems running, albeit sometimes at reduced levels of performance.
Patterson and others at Berkeley are working on recovery-oriented computing (ROC), in which systems do fast, almost invisible "microreboots" of the code experiencing some difficulty -- a buffer overflow, for example -- while an application is running. The key to ROC is logic that watches running processes, senses when something is wrong and then triggers the microreboot before the whole system crashes.
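The control loop behind that idea is simple to sketch: a supervisor watches components, and when one reports trouble, only that component is reinitialized while the rest of the application keeps running. The classes and names below are hypothetical, not the Berkeley team's code.

```python
class Component:
    def __init__(self, name):
        self.name = name
        self.healthy = True
        self.restarts = 0

    def microreboot(self):
        # Reinitialize just this component's state -- far cheaper
        # than restarting the whole application.
        self.healthy = True
        self.restarts += 1

class Supervisor:
    def __init__(self, components):
        self.components = components

    def monitor(self):
        # In ROC, detection logic flags a process in trouble
        # (a buffer overflow, say) and triggers the microreboot
        # before the whole system crashes.
        for c in self.components:
            if not c.healthy:
                c.microreboot()

parser, cache = Component("parser"), Component("cache")
app = Supervisor([parser, cache])

parser.healthy = False   # a fault is detected in one component
app.monitor()
# Only the faulty component was rebooted; the cache was untouched.
```

The cheapness of each microreboot is what makes ROC tolerant of imperfect detection, as Patterson notes below.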
Patterson says there is a natural fit between tools for better detection and prevention, such as Sana's Primary Response, and tools for surviving an attack, such as ROC. "ROC is trying to make recovery fast and inexpensive," he says. "If recovery is expensive and complicated, then your detection mechanism needs to be close to perfect."
Patterson says his research team had an "Aha!" moment while developing ROC. "It was that lowering the cost of recovery makes it tolerable to have a higher false-positive rate."
Another way to keep business flowing is to simply slow an attack so that fewer machines are infected before countermeasures can be employed. As part of its work in resilient infrastructures, Hewlett-Packard Co. has developed virus-throttling software that permits connections from one machine to another at the slow rate typical of human users -- say, one connection per second or fewer -- but delays or blocks connections when requests arrive at hundreds per second, as they do from modern worms.
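The mechanism can be sketched in a few lines: new outbound connections up to roughly one per second pass immediately, faster requests are queued (delayed), and when the queue overflows -- the signature of worm-speed behavior -- further requests are blocked. The rate, queue size and class name below are illustrative choices, not HP's published parameters.

```python
class Throttle:
    """Sketch of virus throttling: pass connections at a human pace,
    delay anything faster, and block once the delay queue overflows."""

    def __init__(self, rate=1.0, max_queue=5):
        self.interval = 1.0 / rate   # seconds between allowed connections
        self.max_queue = max_queue
        self.queue = []
        self.last_sent = float("-inf")

    def request(self, dest, now):
        # User-paced traffic: nothing waiting, and enough time elapsed.
        if now - self.last_sent >= self.interval and not self.queue:
            self.last_sent = now
            return "allowed"
        # Faster than the allowed rate: hold the connection briefly.
        if len(self.queue) < self.max_queue:
            self.queue.append(dest)
            return "delayed"
        # Queue overflow: hundreds of requests per second, worm-style.
        return "blocked"

throttle = Throttle()
first = throttle.request("mail-server", now=0.0)                       # user pace
burst = [throttle.request(f"host-{i}", now=0.01) for i in range(10)]   # worm pace
```

A legitimate user barely notices the occasional delay, while a worm's propagation rate collapses -- buying time for other countermeasures.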
The Responsive Input/Output Throttling project at the University of New Mexico combines several defenses in a way that mimics biological immune systems. It uses throttling to limit the rate of connections to other computers, but makes throttling far more flexible by coupling it with agents that learn the normal behavior of specific combinations of users, machines and applications. "You turn it on and it learns what the rates are for your network behavior," says Matthew Williamson, senior researcher at Sana and previously a developer of throttling technology at HP Labs.
"Throttling opened the door to thinking about rates of things instead of, 'Is it allowed or not?' " Williamson says. "People in security tend to think in a binary way." But security, and its cost, are not either/or issues, he says.
"Costs can be significantly reduced by having systems that are resilient, and they don't have to work perfectly," he says. "You get quite a lot of value out of 80 percent security."