Solutions, solutions, solutions. Echoing the real-estate agent's mantra of "location, location, location," network vendors have found that the invocation of solution resonates similarly with their customers.
Some vendors are so enamoured of its power that they offer up a solutions orientation as a rationale for avoiding product benchmarking altogether. Solutions, these vendors say, are what they sell to customers, not boxes. Individual product performance, they add, doesn't matter; only solutions should be tested.
A solution, of course, is just a concept. In reality, it is nothing more than one or more component products arranged in such a way as to get a particular job done. To posit that solutions and components are diametrically opposed notions or that solutions testing and component testing are mutually exclusive is ludicrous. These are just transparent attempts at stonewalling a device testing process that might reveal inadequacies in component products. How, ask yourself, can you build a reliable solution without knowing the actual capabilities of the system's components?
Solutions-only advocates would prefer we concern ourselves only with the high-level objectives and not get distracted by data on individual components. Such an approach flirts with disaster. Consider what occurred when space shuttle designers had inadequate data on the reaction of the booster O-rings to low temperatures. Atmospheric conditions chilled the launch pad just a little too much -- beyond the tolerance of a relatively simple component -- and the result was the greatest loss of life since the inception of space exploration.
In stark contrast to the inadequate component cited above, the government is more often criticised for overbuying. Think of the recurring stories about hammers that cost several hundred dollars each. Here, too, we can find a network analogy.
No network I've ever seen or heard of requires frame-processing rates in the tens of millions of packets per second. Yet, we've already benchmarked devices that offer that. Might this be dramatically more than the solution requires? Surely. Is buying such a device the enterprise equivalent of requisitioning a $US500 hammer? Not necessarily.
While I don't dispute that such products deliver more power than anyone needs today, the key question is whether network managers are overpaying for that performance. I believe the answer is no.
Solutions advocates may dispute my assertion, but the burden is on them to provide a factual basis for their argument. Yes, 10 million packets per second may be too much, but how much is enough?
If 30 per cent system throughput on a 24-port Fast Ethernet switch is adequate for 90 per cent of enterprise customers, then vendors should go on the record with statements to that effect. In fact, companies should employ modelling, simulation and testing to prove those points to network managers. Simply fleeing from device testing is no solution.
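To make the "30 per cent" figure concrete, here is a back-of-the-envelope sketch. The numbers are my own illustration, not from the column: it assumes minimum-size 64-byte Ethernet frames (the worst case for frame rate) and treats aggregate one-way line rate across all 24 ports as the 100 per cent mark.

```python
# Illustrative arithmetic (assumptions, not vendor data): what does
# "30 per cent system throughput" on a 24-port Fast Ethernet switch
# mean in packets per second?

PORT_BITS_PER_SEC = 100_000_000   # Fast Ethernet line rate per port
MIN_FRAME = 64                    # minimum Ethernet frame size, bytes
PREAMBLE = 8                      # preamble + start-of-frame delimiter, bytes
IFG = 12                          # inter-frame gap, bytes
PORTS = 24

# Each minimum-size frame occupies 64 + 8 + 12 = 84 bytes (672 bits) on the wire.
bits_per_frame = (MIN_FRAME + PREAMBLE + IFG) * 8

pps_per_port = PORT_BITS_PER_SEC / bits_per_frame   # ~148,810 frames/sec
aggregate_pps = PORTS * pps_per_port                # ~3.57 million frames/sec
thirty_percent = 0.30 * aggregate_pps               # ~1.07 million frames/sec

print(f"Line rate per port:  {pps_per_port:,.0f} pps")
print(f"{PORTS}-port aggregate:   {aggregate_pps:,.0f} pps")
print(f"30% of aggregate:    {thirty_percent:,.0f} pps")
```

On these assumptions, even full line rate on such a switch is roughly 3.6 million packets per second, and 30 per cent of it is about 1.1 million. Against either figure, a device benchmarked in the tens of millions of packets per second clearly has headroom to spare, which is exactly why vendors should quantify how much is enough rather than leave the question open.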
For their part, network managers should let vendors know whether benchmarking information related to specific components is valuable. I'm biased toward benchmarking, naturally, but I don't know how you make decisions without it.
Kevin Tolly is president of The Tolly Group, a strategic consulting and independent testing company in New Jersey. He can be reached at +1-732-528-3300, firstname.lastname@example.org or www.tolly.com.