"I couldn't disagree with you more!" I glanced to my left as two members of the panel I was chairing at InfoWorld's CTO Forum in Boston loudly disagreed with each other. It wasn't the first time this had happened, and if the session went on much longer, it wouldn't be the last. The panelists calmed down and explained their reasoning, showing that they weren't as far apart as it had originally seemed.
What had brought two security experts nearly to blows? It was a minor disagreement on how to approach some issues involving DRM (digital rights management). But before that, it had been disagreements on patch management, firewalls, and perimeters. Why the disagreements? Because there is more than one way to approach enterprise security.
Take Peter Tippett's views on Windows updates, for example. Tippett, who is chief technologist of TruSecure Corp., believes you should update your Windows machines no more than twice a year. To many -- including the people who run the SANS Institute's FBI Top 20 list -- this is heresy. Every company, they advise, should keep its patch levels up to date at all times.
Tippett disagrees. As he explained it, your Windows servers shouldn't be so exposed that they're vulnerable to attacks in the first place. Tippett contends that if you manage your network properly and perform the other functions you should (screening e-mail and potentially virus-laden binary files), then constant updates aren't that big a deal. Yes, you should perform updates when they promise more stable operation, but the nearly constant flood of Windows updates can be safely ignored for a while.
Tippett and others make some good points about certain assumptions regarding enterprise network security -- sometimes we can spend far too much time and money on things that aren't worth protecting, and not enough time on the things that are valuable.
Other panelists had other suggestions: Maybe the time is coming when the long-revered network perimeter has outlived its usefulness. Perhaps it would be better to connect most of an enterprise's network assets to the public Internet and simply protect the individual elements, reserving the trouble and expense of heavy monitoring for the systems that are really important.
For example, protecting your CRM servers is probably worthwhile. Protecting your core databases is probably worthwhile. But do your Web clients and servers need to be behind a corporate firewall? Maybe not, if you protect them with an application-specific firewall and run them on an operating environment that's not easily compromised (secure OS candidates such as BSD, VMS, or NetWare, or in some cases, a pure hardware solution).
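As a rough illustration of this "protect the element, not the perimeter" idea, a host-level packet filter on a Web server sitting directly on the public Internet can default to denying everything and admit only the traffic the application actually needs. The rules below are a minimal sketch using Linux iptables; the port choices and the admin subnet are assumptions for illustration, not a recommendation:

```shell
# Default-deny: drop anything not explicitly allowed below.
iptables -P INPUT DROP
iptables -P FORWARD DROP          # this host doesn't route for anyone else

# Allow loopback and replies to connections this host initiated.
iptables -A INPUT -i lo -j ACCEPT
iptables -A INPUT -m state --state ESTABLISHED,RELATED -j ACCEPT

# The only services this machine offers: HTTP and HTTPS.
iptables -A INPUT -p tcp --dport 80 -j ACCEPT
iptables -A INPUT -p tcp --dport 443 -j ACCEPT

# Remote administration (SSH) only from an assumed admin subnet.
iptables -A INPUT -p tcp -s 203.0.113.0/24 --dport 22 -j ACCEPT
```

An application-specific firewall that inspects the HTTP traffic itself would sit a layer above rules like these; the packet filter just ensures nothing else on the box is reachable at all.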
So how much validity do these ideas have? It's not easy to say. But it is true that creating a perimeter that protects everything is really hard -- perhaps impossible -- to accomplish. And it's not difficult to protect individual workstations from a variety of depredations, especially if you still screen their e-mail.
The lesson: You have to be flexible when you're designing your security solution. Just because something seems like a good idea on the surface doesn't mean it is. And just because something seems foolhardy doesn't mean it's a bad idea. It all depends on what you're doing, and what your exposure is.
Of course, finding the answers to those questions can be pretty tough, but it's important that you don't find yourself following the same old path simply because it's the one you're used to.