There's a longstanding discussion about what to do with information about security holes. "Full disclosure" supporters cheerfully post the news, including full details of vulnerabilities and exploits, without bothering to notify organizations first to give them a chance to fix the problems. Others, like GreyMagic Software security engineers, follow the principle of not publishing details until the bugs have been fixed, unless they receive no response to their alerts. Their advisories are generally respected and acted upon quickly by the product teams they inform.
For example, in May 2001, GreyMagic engineers discovered serious problems in my favorite browser, Opera. (As an aside, I like Opera because of its excellent security, highly customizable user interface, small size, rapid loading, and responsiveness to its user base.) One day after being notified of the problems, Opera issued a revision with fixes.
Individuals also contribute to better security when they inform organizations of security problems. In May, the Associated Press reported that Anonymizer.com rewarded someone who found a security problem with its service for protecting people's anonymity when using the Internet. Anonymizer.com gave the discoverer three years' worth of the service for free and offered the same reward to anyone else who finds bugs in the service. The story appeared in the San Jose Mercury News (found via NewsScan): http://www.siliconvalley.com/mld/siliconvalley/3306644.htm

In contrast, there was another incident in May in which GreyMagic informed a company (whose name I'm suppressing because the specific company is not the issue here) of an equally serious problem. That company also offers a prize for bug reports. When GreyMagic staff reported the problem, there was no response at all for a week. At that point, the security group decided to post its advisory even though it figured it probably would not receive thanks, let alone the reward, for its work.
Consider the difference between the open and closed responses described above: a positive, professional response by a software firm not only supports its customers but also earns positive publicity. I'm sure most of us feel more confident about the integrity of a company that immediately admits a mistake and fixes it than about one that ignores the problem or, worse, denies it. The stonewallers, by contrast, fail to resolve the problem and then make matters worse by angering the people involved.
I think the lesson here is that everyone producing software ought to have an effective incident-response plan for dealing quickly with security flaws. Some simple suggestions:
* Technical support procedures need an escalation path for handling high-priority problems.
* Everyone in a software company should know exactly what to do when they receive any message that claims there's a security flaw in one of their products.
* It should not matter if the message is received by a secretary in the accounting department or by a programmer working on a different product: each employee should know to whom to send the warning and should do so at once.
* In addition, no one should assume that a message has been received by the right person until there's a confirmation from that person or group.
* Ideally, whoever receives the bug notification should take the responsibility for passing the message on in person to the right resources and should verify that the person who sent in the bug gets a prompt, courteous response.
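The routing-and-confirmation discipline in the list above can be sketched in a few lines. The following is a minimal, hypothetical model (all names, the two-day deadline, and the workflow details are my own illustration, not a description of any real company's process): any employee can forward a report, the handoff is recorded, and the report is not considered handled until the responsible person explicitly confirms receipt; anything unconfirmed past the deadline must be escalated.

```python
# Hypothetical sketch of an incident-report hand-off with confirmation tracking.
from dataclasses import dataclass, field
from datetime import datetime, timedelta

# Assumed policy for illustration only: escalate if unconfirmed after two days.
ESCALATION_DEADLINE = timedelta(days=2)

@dataclass
class SecurityReport:
    reporter: str
    product: str
    summary: str
    received_at: datetime
    acknowledged: bool = False
    history: list = field(default_factory=list)

    def forward(self, from_person: str, to_person: str) -> None:
        # Whoever receives the report passes it on and records the handoff,
        # so there is an audit trail of who was told and when.
        self.history.append((from_person, to_person))

    def acknowledge(self) -> None:
        # Only an explicit confirmation from the right person or group
        # counts as the report having been received.
        self.acknowledged = True

    def needs_escalation(self, now: datetime) -> bool:
        # No one may assume the message reached the right person:
        # unconfirmed reports past the deadline are escalated.
        return not self.acknowledged and now - self.received_at > ESCALATION_DEADLINE

# Example: a report arrives at the wrong desk and is forwarded on.
report = SecurityReport("outside.researcher@example.com", "ExampleProduct",
                        "Possible cross-domain scripting flaw",
                        received_at=datetime(2002, 5, 1))
report.forward("accounting secretary", "security team lead")
print(report.needs_escalation(datetime(2002, 5, 4)))  # True: still unconfirmed
report.acknowledge()
print(report.needs_escalation(datetime(2002, 5, 4)))  # False: loop is closed
```

The key design choice mirrors the last two bullets: forwarding a report never clears it; only an affirmative acknowledgment does, and silence automatically triggers escalation rather than being mistaken for delivery.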