Life was simple at Layer 3. Because the IP routing function is so straightforward, benchmarking the performance of Layer 3 LAN switches engendered little confusion. But those days are over. With the arrival of Layer 4+ switches, benchmarking will never be the same.
As we climb up to Layer 4 and beyond -- and deal with almost "layer-less" devices such as the cache -- we need to realize that we are evaluating services rather than just layers of software. Significant work is required industrywide to develop meaningful benchmarks.
Truth be told, what we leave behind leaves plenty of room for improvement. The de facto standard goal of achieving wire-speed throughput with minimal-size (i.e., 64-byte) packets has no basis in reality. Why is it important, really? Well, it isn't -- really. It simply serves as a theoretical maximum performance point.
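To see where that theoretical maximum comes from, consider the arithmetic behind wire-speed forwarding of minimum-size frames. On Ethernet, each 64-byte frame is preceded by an 8-byte preamble and followed by a 12-byte inter-frame gap, so every frame consumes 84 bytes of wire time. A short sketch (the link speeds chosen here are just illustrative):

```python
# Theoretical maximum frame rate at wire speed for minimum-size
# Ethernet frames. On the wire, each 64-byte frame also carries an
# 8-byte preamble and a 12-byte inter-frame gap: 84 bytes per frame.
FRAME_BYTES = 64
PREAMBLE_BYTES = 8
INTER_FRAME_GAP_BYTES = 12
BITS_PER_FRAME = (FRAME_BYTES + PREAMBLE_BYTES + INTER_FRAME_GAP_BYTES) * 8  # 672 bits

def max_pps(link_bps: int) -> int:
    """Frames per second a link can carry at full utilization."""
    return link_bps // BITS_PER_FRAME

print(max_pps(100_000_000))    # Fast Ethernet: 148,809 packets/sec
print(max_pps(1_000_000_000))  # Gigabit Ethernet: 1,488,095 packets/sec
```

Those ceilings are exactly what "wire-speed throughput" tests chase -- numbers that say nothing about whether real traffic ever looks like a stream of back-to-back 64-byte packets.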
But is that where we want to go with Layer 4 and up -- simply establishing a theoretical maximum (packet or session) rate as a goal and blindly marching toward it? I think not. Where Layer 2 and 3 devices dealt with the packet as the integral unit, Layer 4+ devices see a bigger picture -- often one that cannot be benchmarked accurately by just counting packets.
Now that it's time for new benchmarks, let's not just run numbers for the sake of numbers -- or run tests just because they can be run. Let's make certain that benchmarks are meaningful. And the best way to do that is to work backwards.
The ideal benchmark development process should start at the end by understanding the specific capability that is to be benchmarked. For example, one benchmark might quantify how many sessions per second can be distributed among a set of back-end servers, while a very different one might be used to judge the sensitivity of the switch to the prevailing loads on those boxes. Both benchmarks add to our understanding of the solution set.
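The first of those two benchmarks can be pictured with a toy model. This is not a real benchmark harness, just a minimal sketch of the behavior such a test would exercise -- a round-robin distributor spreading sessions across a set of hypothetical back-end servers (the server names and session count are invented for illustration):

```python
# Illustrative sketch only: a toy round-robin session distributor,
# the kind of behavior a sessions-per-second benchmark would measure
# on a Layer 4+ switch. Server names and counts are hypothetical.
from collections import Counter
from itertools import cycle

def distribute(sessions: int, servers: list[str]) -> Counter:
    """Assign each incoming session to the next server in rotation."""
    rotation = cycle(servers)
    return Counter(next(rotation) for _ in range(sessions))

counts = distribute(10_000, ["web1", "web2", "web3", "web4"])
print(counts)  # each of the four servers receives 2,500 sessions
```

A real device, of course, distributes sessions based on live load, health checks or persistence rules -- which is precisely why the second benchmark, sensitivity to prevailing server load, tells you something the raw session rate cannot.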
As we begin to build these next-generation tests, I believe that we have to embrace a set of guidelines to make sure that benchmarks are not misused and do not detract from our understanding of the leading-edge technology we hope to deploy. Here are some requirements I'd like to propose as the basis for next-generation benchmarks:
-- Precise definition of purpose.
-- Clear explanation of why the benchmark is meaningful.
-- Delineation of what capabilities the benchmark exercises -- and those it does not.
-- Clear mapping of results to real-world scenarios.
Of the four, the last is the most important. It says: "Prove to us that n-thousand sessions per second is relevant to our environment."
Early next month, we'll be meeting with many of the people responsible for developing and espousing benchmarks. Send me your thoughts so I can share them with others at the meeting.
Tolly is president of The Tolly Group, a strategic consulting and independent testing firm in Manasquan, New Jersey. He can be reached at +1-723-528-3300 or firstname.lastname@example.org.