Securing your network by pillorying problem users

Maybe we've been going about IT security the wrong way. Security guru Bruce Schneier thinks so. Last week at the Hack in the Box conference in Kuala Lumpur, Malaysia, Schneier told the crowd that technical security measures have proved insufficient -- it's time to apply economic pressure. For example, banks will get serious about identity theft only if they're legally liable for unauthorized withdrawals, and software vendors will take security seriously only when they can be sued for losses caused by buggy software.

"Look for the economic levers," Schneier said. "If you get the economic levers right, the technology will work. If you get the economics wrong, the technology will never work.

"The organization that has the capability to mitigate the risk needs to be responsible for the risk."

Should software vendors pick up the tab when they botch their products? Probably -- but don't hold your breath waiting for that to happen.

On the other hand, maybe there's another way for corporate IT shops to use the same principle. Consider the security troubles we have because of users who engage in risky behavior like opening unknown e-mail attachments or visiting dangerous Web neighborhoods. We tell them not to do it, but they "forget." Their managers won't crack the whip because that's more trouble than it's worth -- at least for them.

So there are no consequences for the risky actions of those problem users. And we keep beating our brains out to save them from themselves, clean up after them and keep all our systems working as smoothly and invisibly as possible.

But suppose we tried something different. Say that instead of handling security problems invisibly, we made them highly visible to users.

Suppose when one of those problem users opened a virus-laden attachment or triggered a firewall reaction or plugged a thumb drive into a USB port, that didn't just create an entry in a security log.

Suppose it instantly shut down network access for the user's entire workgroup.
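That trigger-and-quarantine idea can be sketched in a few lines of Python. Everything here is invented for illustration -- the workgroup table, the event names and the `quarantine` function are hypothetical stand-ins; a real deployment would disable switch ports, a VLAN or firewall rules rather than just report who got cut off:

```python
# Hypothetical sketch: a risky event by one user cuts off network
# access for that user's entire workgroup.

# Invented example data -- in practice this would come from a directory.
WORKGROUPS = {
    "accounting": ["charlie", "dana", "lee"],
    "shipping": ["pat", "sam"],
}

# Event types treated as risky enough to trigger a quarantine.
RISKY_EVENTS = {"virus_attachment", "firewall_block", "usb_insert"}

def quarantine(event_type, user):
    """Return the list of users cut off by this event (empty if benign)."""
    if event_type not in RISKY_EVENTS:
        return []
    for group, members in WORKGROUPS.items():
        if user in members:
            # A real system would disconnect the group here; this sketch
            # only announces who was cut off and why.
            print("Cut off workgroup '%s' after %s triggered %s"
                  % (group, user, event_type))
            return members
    return []
```

So `quarantine("virus_attachment", "charlie")` cuts off all of accounting, while a benign event returns an empty list and touches no one. The point of the design is exactly the column's point: the blast radius is the workgroup, not just the offender.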

Oh, there would be screams. We'd hear them at the help desk almost immediately. And for once, those battered souls would know exactly, word for word, what to say: "It looks like Charlie downloaded a virus, and your group was cut off to protect the rest of the network. We're working to clear the problem now."

Not "We cut off your network because" -- that makes it sound like it's IT's decision. And not "Because there was some problem with someone" -- we want Charlie's fellow users to know exactly who has cut them off.

Is that sneaky? Sure. Draconian? It has to be. It will work only if the consequences are immediate and -- at least to all appearances -- automatic. (Just how automatic it actually is will be up to IT.)

And effective? Just ask yourself this: How long will it be before Charlie's co-workers start screaming at him every time there's a network problem? They'll be far more effective at changing Charlie's risky behavior than anyone in IT will ever be.

Look, we've been piling security technology onto our systems and networks for years. But Schneier is right. It's not enough. It'll never be enough. We can barely hold our own against hardware problems and external attackers. And as long as we keep struggling to hide the consequences of what some problem users do, they'll keep doing it -- and keep putting everyone at risk.

By turning that situation inside out and making those consequences very visible, we may be able to get the rest of our users to accomplish what we can't. They may not care one way or another about security problems, but they know what they don't like. And we can make sure they know who to blame.

We've got the technology. And we know who has the leverage to get those Charlies under control.

Maybe it's time to start pulling those levers.
