Evan Schuman: With Heartbleed, IT leaders are missing the point

The IT response to Heartbleed is almost as scary as the hole itself. Patching it, installing new certificates and then changing all passwords is fine as far as it goes, but a critical follow-up step is missing. We have to fundamentally rethink how the security of mission-critical software is handled.

Viewed properly, Heartbleed is a gift to IT: an urgent wake-up call to fundamental problems with how Internet security is addressed. If the call is heeded, we could see major improvements. If the flaw is just patched and then ignored, we're doomed. (I think we've all been doomed for years, but now I have more proof.)

Let's start with how Heartbleed happened. It was apparently created accidentally two years ago by German software developer Robin Seggelmann. In an interview with the Sydney Morning Herald, Seggelmann said, "I was working on improving OpenSSL and submitted numerous bug fixes and added new features. In one of the new features, unfortunately, I missed validating a variable containing a length."

After Seggelmann submitted the code, a reviewer "apparently also didn't notice the missing validation, so the error made its way from the development branch into the released version." Seggelmann said the error was "quite trivial," even though its effect wasn't. "It was a simple programming error in a new feature, which unfortunately occurred in a security-relevant area."
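To make the class of error concrete, here is a minimal sketch of the pattern Seggelmann describes, with illustrative names rather than the actual OpenSSL source: the request's own length field is trusted and used to drive a memory copy, so a peer can claim a length far larger than the payload it actually sent.

```c
/* A minimal sketch of the Heartbleed bug class -- illustrative names,
 * not the real OpenSSL code. */
#include <string.h>

struct heartbeat_msg {
    unsigned char *payload;      /* attacker-supplied bytes       */
    unsigned int   claimed_len;  /* length field from the request */
    unsigned int   actual_len;   /* bytes actually received       */
};

/* Vulnerable pattern: echoes back claimed_len bytes, reading past the
 * real payload into whatever sits in adjacent process memory --
 * private keys, session cookies, passwords. */
void heartbeat_reply_vulnerable(const struct heartbeat_msg *m,
                                unsigned char *out)
{
    memcpy(out, m->payload, m->claimed_len);  /* no validation */
}

/* The missing check is one line: refuse any length the record
 * cannot actually back up. */
int heartbeat_reply_fixed(const struct heartbeat_msg *m,
                          unsigned char *out)
{
    if (m->claimed_len > m->actual_len)
        return -1;                            /* drop the request */
    memcpy(out, m->payload, m->claimed_len);
    return 0;
}
```

The fix that shipped in OpenSSL 1.0.1g amounts to essentially the same idea: silently discard any heartbeat request whose stated payload length exceeds what the record actually contains.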

What Seggelmann did was fully understandable and forgivable. The massive, planet-destroying problem is that our safety mechanisms for catching simple programming errors are all but nonexistent. If our checks and balances are so fragile that a single missed validation can obliterate all meaningful security, we have some fundamental things to fix. Let's not forget that when Robert Tappan Morris unleashed the Internet Worm back in 1988 -- the first major instance of the Internet crashing due to a worm -- that, too, was the result of a simple programming error. He never intended to cause servers to crash, but crash they did.

David Schoenberger, CIO of security vendor Transcertain, argues that the real fundamental security flaw at play here is, bizarrely enough, an overabundance of trust exhibited by IT security folk. Personally, when I think of the best IT security specialists I've worked with over the years, having too much trust is not the first thought that comes to mind. But Schoenberger makes a good point.

"This is going to make people rethink what we're doing. There are so many things overlooked, taken for granted. In the IT world, we've relied on the trust factor for so long," he said. "Just look at these billion-dollar companies who are relying on peer-reviewed open source. We're not taking the time to prove it [is secure] ourselves. Because something mostly works and, as far as perception goes, it works well, it passes all our tests. It sucks the way testing is occurring right now with open source. But I won't even limit it to open source, as this could have happened to a commercial provider. Could have happened to anyone."

Fair and legitimate point, but is there a practical, better way? It's not akin to a company testing its own applications (although if we take mobile apps as a hint, we're not exactly getting an A+ there, either).

Microsoft has been legendary for its crowdsourcing strategy: an initial software cut is released to millions of users, and they find the holes. This gave rise to my favorite Microsoft quip, many years old and unattributable at this point, unfortunately: "Here at Microsoft, quality is Job 1.1." The crazy thing is that it generally worked. So how did Heartbleed spend two years in full circulation before any security researcher noticed the error?

Some are convinced that the hole must have been noticed by someone. The National Security Agency has been accused of knowing about this hole and exploiting it. The accusation led to what may be the least credible government denial in quite some time: "Reports that NSA or any other part of the government were aware of the so-called Heartbleed vulnerability before April 2014 are wrong," said the statement from the U.S. Office of the Director of National Intelligence. "The Federal government was not aware of the recently identified vulnerability in OpenSSL until it was made public in a private sector cybersecurity report."

There are two parts of the full statement (read it here) where credibility leaches out. First, among the CIA, the FBI, the NSA, the military and, let's say, 200 other government operations, it's ludicrous to declare that nobody knew about something. How do you know that one Army security specialist didn't know? Not every geeky hole that is discovered is necessarily included in a memo to senior management. Had they said "to the best of our knowledge" or "we can't confirm that anyone here knew about it," that would at least be plausible. It's like my teenager telling me that nobody in her high school uses drugs or drinks. It doesn't pass the laugh test, because there is no way she could know such information definitively.

The second concern with the NSA statement is the very last line: "Unless there is a clear national security or law enforcement need, this process is biased toward responsibly disclosing such vulnerabilities." Nothing in the statement says that no such need was found in this case. This is akin to my daughter following up her no-drugs testimony by saying, "I will always tell you the absolute truth about such things, unless I conclude that it would cause problems for my friends, in which case I would lie."

I'm generally no fan of adding bureaucracy, but it might be time to create formal review procedures -- ideally, with multiple layers -- with people actively and openly looking for holes. Peer review is great, but for anything as mission-critical as Internet security, it is long past time to proactively seek out such holes rather than hoping we stumble upon them.
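What might "actively looking for holes" mean in practice? One concrete, low-bureaucracy form is fuzzing: feeding a parser millions of mutated inputs with memory-error detection switched on. Here is a sketch of a libFuzzer harness aimed at the vulnerable pattern shown earlier; the two-byte length prefix is an illustrative assumption standing in for the real heartbeat wire format.

```c
/* Sketch of a libFuzzer harness for the vulnerable pattern above.
 * Build with: clang -g -fsanitize=fuzzer,address harness.c */
#include <stddef.h>
#include <stdint.h>
#include <string.h>

int LLVMFuzzerTestOneInput(const uint8_t *data, size_t size)
{
    if (size < 2)
        return 0;

    /* The first two bytes play the untrusted length field; the rest
     * is the payload the peer actually sent. */
    unsigned int claimed = ((unsigned int)data[0] << 8) | data[1];
    const uint8_t *payload = data + 2;

    unsigned char out[65536];
    /* The vulnerable copy trusts 'claimed' instead of size - 2.
     * AddressSanitizer flags the out-of-bounds read the moment the
     * fuzzer mutates 'claimed' past the real payload length. */
    memcpy(out, payload, claimed);
    return 0;
}
```

Researchers have since shown that general-purpose fuzzers pointed at the TLS heartbeat code find exactly this bug within hours, which makes the two years Heartbleed spent unnoticed all the harder to excuse.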

Evan Schuman has covered IT issues for a lot longer than he'll ever admit. The founding editor of retail technology site StorefrontBacktalk, he's been a columnist for CBSNews.com, RetailWeek and eWeek. Evan can be reached at eschuman@thecontentfirm.com and he can be followed at twitter.com/eschuman. Look for his column every Tuesday.
