FRAMINGHAM (03/13/2000) - In the ready-fire-aim culture of the Web, stopping to assess a problem is seen as a death sentence. Site goes down? Throw servers at it! Throw bandwidth at it! Sure, it costs a lot, but the site's back up, right?
Good plan. It might keep your systems up - for now.
But often, it's the way the site went up, not the reason it came down, that's the problem.
In the rush to build Web sites that connect to internal transaction processing and other networks, nascent e-commerce operations are just reinventing a problem that network operations people have been complaining about for years - messy, unplanned systems.
When eBay Inc. went down repeatedly last year, the problem wasn't traffic; it was that eBay hadn't followed proper procedures when adding storage systems on the back end.
When a cracker calling himself Curador was able to steal thousands of credit-card numbers from wireless phone seller Promobility.net and online credit-card processor SalesGate.com two weeks ago, it was the known holes in their Microsoft Corp. software, not his evil brilliance, that got him in.
Most online operations, especially those connected to existing businesses, take a mishmash of networks and applications that kind of work together, bolt it to the Internet and hope the whole mess doesn't come down.
What they need to do is raise their standards - a lot. Instead of a site that manages to just stay afloat from day to day, they need an infrastructure that can stand up to a full-scale cyberstorm. And that will take some changes.
"With e-commerce, your internal operations are published, the application is like glass and IT is not used to the level of security that's needed," says Gary Moore, CEO of network infrastructure consultancy Enterprise Networking Systems.
His business is telling people how to design networks, so you'd expect him to be critical. But he's right.
Every security story we write at Computerworld ends with the same advice: Get the patches and do an audit to make sure you aren't accidentally giving away sys/admin access or something. But too many sites just don't do it.
So do it. And think about some of these other techniques, too:
-- Benchmark your network and your applications. When you run into trouble, check whether the application itself is the cause. Software that was built and tested on a LAN may have latency problems on a WAN or the Web.
-- Separate the traffic on your Web site from the rest of the traffic on the network. Add more security to the Web subnet, and monitor it more heavily than the rest of the network.
-- Use policy-based network management software to limit the kinds of traffic you have on different parts of the network. If the guy in sales who's downloading a 5MB MP3 file from Napster is using the same pipe as your Web site, you're toast.
-- Think about outsourcing the design, management and security of your Web site to infrastructure specialists. You keep control of the look, feel and content; let the pros (who you can't hire or retain in sufficient numbers anyway, right?) handle the underlying network goop.
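The benchmarking advice above boils down to measuring the same operation in different environments and comparing the numbers. Here's a minimal sketch of a latency harness; `fake_request` is a stand-in of our own invention, and in practice you'd point the harness at the same application call once over the LAN and once over the WAN.

```python
import statistics
import time

def benchmark(fn, runs=50):
    """Time repeated calls to fn and report latency percentiles in ms."""
    samples = []
    for _ in range(runs):
        start = time.perf_counter()
        fn()
        samples.append((time.perf_counter() - start) * 1000.0)
    samples.sort()
    return {
        "median_ms": statistics.median(samples),
        "p95_ms": samples[int(len(samples) * 0.95) - 1],
        "max_ms": samples[-1],
    }

# Stand-in for a real request; substitute a call into your own application.
def fake_request():
    time.sleep(0.002)  # simulate a 2ms round trip

if __name__ == "__main__":
    print(benchmark(fake_request, runs=20))
```

If the median barely moves between LAN and WAN but the 95th percentile blows up, the network is jittery; if the median itself shifts, the application is probably too chatty for a high-latency link.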
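Separating Web traffic onto its own subnet only pays off if you actually watch the boundary. A sketch of the idea, using hypothetical address ranges (the `10.1.0.0/24` DMZ and `10.2.0.0/16` internal net are illustrative, not a recommendation):

```python
import ipaddress

# Hypothetical address plan: Web servers get their own subnet,
# separate from the internal corporate network.
WEB_SUBNET = ipaddress.ip_network("10.1.0.0/24")       # assumed DMZ range
INTERNAL_SUBNET = ipaddress.ip_network("10.2.0.0/16")  # assumed internal range

def crosses_boundary(src, dst):
    """True when a flow moves between the Web subnet and the internal
    network - exactly the traffic that deserves extra monitoring and an
    explicit firewall rule, rather than a free pass."""
    s, d = ipaddress.ip_address(src), ipaddress.ip_address(dst)
    return (s in WEB_SUBNET) != (d in WEB_SUBNET) and (
        s in INTERNAL_SUBNET or d in INTERNAL_SUBNET
    )

# Example: a Web server reaching back into the internal network.
print(crosses_boundary("10.1.0.5", "10.2.3.4"))   # True - monitor this
print(crosses_boundary("10.1.0.5", "10.1.0.9"))   # False - stays in the DMZ
```

The point is that once the Web site lives in its own address block, "traffic crossing into the internal net" becomes a simple, checkable predicate instead of a guess.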
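Under the hood, policy-based bandwidth managers mostly rely on some variant of a token bucket: each traffic class gets a refill rate and a burst allowance, so bulk downloads can't starve the Web site's share of the pipe. A minimal sketch (the rates and class names are illustrative):

```python
import time

class TokenBucket:
    """Minimal token-bucket rate limiter - the mechanism policy-based
    bandwidth managers use to cap one class of traffic."""

    def __init__(self, rate_bytes_per_sec, burst_bytes):
        self.rate = rate_bytes_per_sec
        self.capacity = burst_bytes
        self.tokens = burst_bytes
        self.last = time.monotonic()

    def allow(self, nbytes):
        now = time.monotonic()
        # Refill tokens for the elapsed time, capped at the burst size.
        self.tokens = min(self.capacity,
                          self.tokens + (now - self.last) * self.rate)
        self.last = now
        if nbytes <= self.tokens:
            self.tokens -= nbytes
            return True          # within policy: let it through
        return False             # over budget: drop or queue it

# Cap bulk downloads at 64KB/s with a 16KB burst (numbers are illustrative).
bulk = TokenBucket(rate_bytes_per_sec=64 * 1024, burst_bytes=16 * 1024)
print(bulk.allow(8 * 1024))    # True - fits within the burst allowance
print(bulk.allow(16 * 1024))   # False - burst exhausted, must wait
```

Give the Web site's traffic class a generous bucket and bulk traffic a stingy one, and the MP3 download slows down instead of the storefront.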
It's slower, but delaying your next site redesign a month to get the underlying architecture right increases your chance of staying online - without emergency infusions of bandwidth.