Recently, a friend of mine was on the phone with a potential customer. The customer was asking questions about some very hypothetical attacks to see whether the security product my friend's company had built would respond to them. Eventually, my friend answered no, the product didn't have a built-in response to these attacks. The customer asked why it didn't...and my friend replied, "Because we write good code in the first place."
It seems pretty elementary. The best time to address security issues is before they become a problem...we all understand this. The logical place to start preventing security problems is by creating secure programs to begin with.
So why do so few people do it? Why do we see program after program plagued with security holes -- not just one or two, but problems that surface one after the other for years? It seems that as companies strive to add more features to their products and get the products out into the market as quickly as possible, little time, effort, and resources are dedicated to writing good, secure code. Often, even the beta testing process for code has little to do with security; the code is tested to see if it works, but it isn't adequately audited for security holes. Companies seem to believe that security isn't a value-added asset; it isn't a bell or whistle that encourages a user to select that product. It's considered easier and more cost-effective to simply release the software with bugs and issue patches as needed.
It's a dangerous mindset. The enormous damage done by viruses, worms, and simple exploits is testimony to the dangers of releasing poorly written code and then expecting end users to apply patches and fixes later. There are resources for writing good code (please see the list below); programmers should put some effort into learning the fundamentals of secure coding. It's one thing to write code that works...it's a far better thing to write code that both works and is secure.
The other day, in a development meeting for a client, some of the programmers were discussing ways for the software they're writing to respond to an attack aimed at the management GUI itself. I suggested that instead of worrying about ways to respond to a theoretical attack, they should write good code to make sure the attack couldn't happen in the first place. If more coders did that, there would be far fewer security holes in the software we all use.