Bailey says Cybertrust frequently encounters "multimillion-dollar" security projects that do little to reduce risk because they fail to protect critical assets effectively. "I recently saw a US$30 million project to put encryption on the network to protect data, but the core data they were trying to protect could be accessed with just a four-digit PIN code. If you're an attacker, it's a lot easier to crack the four-digit PIN than to crack the network encryption," he says.
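Bailey's point is easy to quantify. As a back-of-the-envelope illustration (not from the article), a four-digit PIN has only 10,000 possible values, so an attacker can simply try them all; the `crack_pin` function and the `check` callback below are hypothetical names used for the sketch:

```python
from itertools import product

def crack_pin(check):
    """Exhaustively try all 10,000 four-digit PINs against a check function."""
    for digits in product("0123456789", repeat=4):
        pin = "".join(digits)
        if check(pin):
            return pin
    return None

# Toy oracle standing in for the real system's PIN check.
secret = "4831"
found = crack_pin(lambda p: p == secret)
```

At worst 10,000 guesses, this is trivially faster than attacking well-implemented network encryption, which is the asymmetry Bailey describes.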
"You've got to start to understand where the data is and how to apply it back to your business objectives," Sytex's Cole says. "Ask yourself, 'Do we understand what our critical data is and what our critical exposure points are? What are the five worst possible things that could happen that would put us out of business?' " Identifying critical assets and applying the concept of "user least privilege" -- that is, assigning key personnel the least amount of access to that data needed to perform a job -- is a good start, he explains.
It's the software, stupid
Ultimately, experts agree that ridding software of vulnerabilities at the code level is the best defense. The underlying insecurity of software has become particularly nettlesome in the past two years as hackers have developed more sophisticated methods for teasing out remotely exploitable holes.
One example is the increased reliance on "input fuzzing," a testing technique that throws a steady stream of malformed input at an application in the hope of triggering an exception and uncovering an underlying vulnerability, Booz Allen Hamilton's Ritchey says. With serious organized-crime money behind such activities, the number of application holes has increased, even for well-vetted applications such as Microsoft's Internet Explorer and Office.
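In miniature, a fuzzer is just a loop that generates random inputs and records the ones that crash the target. The sketch below is a deliberately simplified illustration of the idea (real fuzzers are far more sophisticated); `toy_parse` is a made-up parser seeded with a bug:

```python
import random

def fuzz(parse, trials=1000, seed=0):
    """Throw random byte strings at a parser; collect inputs that raise."""
    rng = random.Random(seed)
    crashes = []
    for _ in range(trials):
        data = bytes(rng.randrange(256) for _ in range(rng.randrange(1, 64)))
        try:
            parse(data)
        except Exception:
            crashes.append(data)  # an unhandled exception hints at a flaw
    return crashes

def toy_parse(data: bytes):
    """Toy parser with a planted bug: chokes when the first byte % 16 == 0."""
    if data[0] % 16 == 0:
        raise ValueError("unhandled header byte")

crashes = fuzz(toy_parse)
```

Each recorded crash is a candidate vulnerability: in real software, an unexpected exception or memory fault found this way is the starting point for building an exploit.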
With the shift in focus to application holes, sophisticated application attacks have become part of the regular repertoire for hackers. "People literally have hundreds of exploits available, and it's changed the calculus. They've gone from the musket to the machine gun," Ritchey says. According to Cybertrust, new hacker tools for application penetration have outnumbered those for network penetration two to one over the past year.
Making applications more resistant to attack would solve many enterprise IT headaches at once. "Look, you have a firewall to begin with because your applications were written lousy. It's a recognition, after the fact: 'Oh my god, our stuff sucks; we need to protect it.' If the applications were good, you wouldn't need a firewall because there wouldn't be any packets that you need to block," says Bruce Schneier, CTO of Counterpane Internet Security.
Unfortunately, the underlying problem of software quality is a vexing one. "We've been working on it for a long time, and we haven't solved it," Ritchey notes. "Once you get past a couple hundred thousand lines of code, the complexity reaches a point where understandability goes out the window."
Schneier says that real, provable software assurance won't be available for 20 or 30 years. "We have no idea how to do that. Proving security? Forget it. We don't have a clue. Security assurance as a craft? Sure. But as a science? Not for a long time."
Software assurance methodologies are a wise investment for organizations that put a high value on secure software -- such as the NSA. For companies that don't place as high a value on security and data integrity, the benefits of instituting such methodologies might not outweigh the costs, Schneier says.
And in the final analysis, that's what matters for companies more than eradicating threats: spending the appropriate amount to get the security they need. For the foreseeable future, that will almost certainly mean relying on a diverse mix of "layered" defenses for external and internal threats.
"If you look at it, physical security is a patchwork of stuff, too. How could it ever change?" Schneier asks.