In a decade of testing, The Tolly Group has run a vast array of tests on countless devices from innumerable vendors. But the test I'd most like to run for public consumption is one I've never been asked to perform. I call it the Dark Cloud test -- a scenario in which test conditions are anything but ideal.
One would think that every vendor claiming to offer enterprise-class or carrier-class products would demand such testing to prove its claims.
Think about it. By definition, a test environment is a controlled one. And virtually every test scenario dreamed up by product people presupposes complete environmental control and represents a best-case situation.
Not that there is anything inherently wrong with showing the best case. If a device can't deliver under ideal conditions, we needn't worry about whether it can perform under adverse conditions.
But such best-case testing gives us precious little information about what will happen when the dark clouds roll in. Even the high-stress tests we've run on leading-edge switches, for example, allow vendors to configure those switches with full knowledge of what conditions the devices will encounter. Even so, we learn little about what happens when these devices face unexpected adversity.
Experience has shown that in networking, as elsewhere, devices do not all respond equally well to adverse conditions. Moving from the showroom to the real world changes everything.
So what, specifically, do I want to see in the Dark Cloud test?
First, I'd like to see how a device performs when it is left in its default configuration. No matter what vendors say, it is my deeply held belief that users leave boxes in their default state (or would like to) far more often than they care to admit.
Not only would such a test be a fairly reliable indicator of actual behaviour, but it would probably serve to embarrass some vendors into selecting good default values for their devices. I doubt a network manager exists who has not repeatedly been dumbfounded by the idiotic choices some vendors make for default values.
After this, I'd want to change the default configuration to de-optimise the box completely. I'd want to configure functions that I won't use and see how that affects performance. You'd be surprised how the invocation of an apparently innocuous feature might cause an awe-inspiring plunge in performance.
Wherever you are given the option to choose how buffers or other internal resources will be used, tune them to be deliberately out of sync with your intended use of the device. Then see how the device performs.
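The de-optimisation exercise could be sketched as a simple harness. Everything here is hypothetical (the `DeviceConfig` fields, the throughput model, the specific buffer sizes); a real Dark Cloud test would drive actual hardware with live traffic, but the comparison logic would look much the same:

```python
# Hypothetical sketch of a Dark Cloud "de-optimisation" test.
# A stub models a device whose throughput drops when buffer tuning is
# out of sync with the traffic and when unused features are enabled.

from dataclasses import dataclass


@dataclass
class DeviceConfig:
    buffer_kb: int = 256        # assumed vendor default
    unused_features: int = 0    # enabled-but-unused features


def measure_throughput_mbps(cfg: DeviceConfig, packet_kb: int) -> float:
    """Stub: penalise mismatched buffers and every innocuous-looking
    feature left switched on. Real tests would measure, not model."""
    base = 1000.0
    mismatch = abs(cfg.buffer_kb - packet_kb * 32) / 1000.0
    feature_tax = 0.05 * cfg.unused_features
    return base / (1.0 + mismatch + feature_tax)


# Best case: buffers tuned to the traffic, nothing extra enabled.
tuned = measure_throughput_mbps(DeviceConfig(buffer_kb=64 * 32), packet_kb=64)

# Dark Cloud case: deliberately out-of-sync buffers, features we won't use.
dark = measure_throughput_mbps(
    DeviceConfig(buffer_kb=8, unused_features=6), packet_kb=64)

print(f"tuned: {tuned:.0f} Mbit/s, de-optimised: {dark:.0f} Mbit/s")
```

The point of running both cases side by side is exactly the column's point: the gap between the two numbers, not either number alone, tells you how the device weathers adversity.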
Wouldn't it be great if your attempts to de-optimise a network device failed miserably? Vendors always play up how their enterprise-class and carrier-class technology is smart and robust. Such technology should require little or no manual optimisation. Merely by observing and evaluating traffic in real time, such devices should be able to configure resources dynamically for optimal use.
For the truly robust product, sunny days and dark clouds are all the same.
Tolly is president of The Tolly Group, a strategic consulting and independent testing company in Manasquan, New Jersey. He can be reached at +1-732-528-3300, ktolly@tolly.com or http://www.tolly.com