Cisco's Catalyst 6500 raises the stakes

Cisco Systems might be a relative latecomer to 10G Ethernet switching, but it's hardly playing catch-up. Our exclusive lab tests show that new line cards and management modules for Cisco's Catalyst 6500 switches push the performance envelope in a number of ways:

- Line-rate throughput with low delay and jitter. The Catalyst becomes only the second product tested to fill a 10G pipe.

- Fast failover. The Catalyst set records for recovery times.

- Perfect prioritization. The Catalyst is the only product that can protect high-priority traffic while simultaneously rate-limiting low-priority traffic.

- IPv6 routing. In the first-ever public test of IPv6 routing, the Catalyst moved traffic at line rate even when handling 250 million flows.

The Catalyst's stellar performance in our tests, along with its rich feature set, earned it a World Class Award. Simply put, this is the highest-performing 10G Ethernet product we've tested to date.

To ensure an even comparison, we ran Cisco's new gear - WS-X6704-10GE line cards and WS-SUP720 management modules - through the same tests we used in an assessment of 10G Ethernet products earlier this year: pure 10G Ethernet performance; Gigabit Ethernet across a 10G Ethernet backbone; quality-of-service (QoS) enforcement; and failover times. For this review, we also added IPv6 forwarding and routing tests.

In the 10G Ethernet tests, we used Spirent Communications PLC's SmartBits to generate traffic in a four-port, full-mesh configuration. Cisco's 10G Ethernet cards delivered line-rate throughput for all tests. That puts the Catalyst on par with the E1200 from Force10 Networks.
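
For readers who want to translate "line rate" into frame counts, the quick calculation below shows what a fully loaded 10G port carries at the smallest and largest standard frame sizes. This is our own illustrative sketch; the 20 bytes of per-frame overhead (preamble plus interframe gap) are standard Ethernet values, not figures supplied by Cisco or Spirent.

```python
# Back-of-the-envelope line-rate math for one 10G Ethernet port.
# Every frame on the wire carries 20 extra bytes: an 8-byte preamble/start
# delimiter plus a 12-byte minimum interframe gap.

LINK_BPS = 10_000_000_000       # 10G Ethernet line rate, in bits per second
OVERHEAD_BYTES = 8 + 12         # preamble/SFD + interframe gap

def line_rate_fps(frame_bytes: int) -> float:
    """Maximum frames per second a 10G port can carry at a given frame size."""
    bits_per_frame = (frame_bytes + OVERHEAD_BYTES) * 8
    return LINK_BPS / bits_per_frame

for size in (64, 1518):
    print(f"{size}-byte frames: {line_rate_fps(size):,.0f} frames/sec")

# 64-byte frames:    ~14,880,952 frames/sec
# 1,518-byte frames:    ~812,744 frames/sec
```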

We should note that Cisco's 10G Ethernet cards are blocking - which causes frame loss - when all four ports exchange 64-byte frames between line cards. This was not an issue in our tests because we moved traffic between two ports on each of two cards. We think that's a fair comparison with previous products tested. Most of those had just one port per card, not four, so all previous tests were also across cards. Cisco says the new cards are nonblocking when handling a mix of frame sizes, but we did not verify this.

Delay and jitter with the Cisco 10G Ethernet cards weren't quite as low as previous record-holders from Foundry Networks Inc. and Hewlett-Packard Co., but the numbers were well below the point at which application performance might suffer.

In the worst case (delay for 1,518-byte frames under 10 percent load), Cisco's average delay was 12.4 microsec, compared with 7.5 microsec for Foundry. Jitter was 0.5 microsec, compared with 0.6 microsec for Foundry in a similar test. Neither result will affect application performance.

We also conducted tests the way 10G Ethernet is most likely to be used - as a backbone technology. We built a test bed comprising two chassis connected with a 10G Ethernet link. Each chassis also had 10 (single) Gigabit Ethernet interfaces. We offered traffic from 510 virtual hosts to each Gigabit Ethernet interface, meaning there were 10,200 hosts exchanging traffic in a meshed pattern.
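
The arithmetic behind that host count is worth spelling out, since it also shows why ten Gigabit Ethernet edge ports per chassis are a natural match for a single 10G backbone link. The sketch below simply restates the configuration described above.

```python
# Scale of the Gigabit-over-10G backbone test bed described above.

CHASSIS = 2                  # two Catalyst chassis joined by one 10G link
GIG_PORTS_PER_CHASSIS = 10   # ten (single) Gigabit Ethernet edge interfaces each
HOSTS_PER_GIG_PORT = 510     # emulated hosts offered to each edge interface

total_hosts = CHASSIS * GIG_PORTS_PER_CHASSIS * HOSTS_PER_GIG_PORT
print(f"{total_hosts:,} emulated hosts")        # 10,200

# Each chassis's ten Gigabit edge ports can collectively source up to
# 10Gbit/sec of traffic, which is exactly what it takes to fill the 10G
# backbone link when traffic is steered across it.
```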

The Cisco setup delivered line-rate throughput at all frame sizes, and delay and jitter again trailed Foundry and HP by an insignificant margin. Cisco's highest average delay (with 1,518-byte frames) was 35.5 microsec, compared with 31.3 microsec for HP. Again, the difference isn't meaningful.

Introducing IPv6

It's important to understand why IPv6 testing matters to enterprise network managers today. The conventional wisdom is that IPv6 is only of interest in Asia, and there mainly as a science project. That perception is misguided, for two reasons.

First, depreciation schedules for backbone gear might run as long as five years, and by then IPv6 deployment is likely to be more extensive than it is today. Second, companies doing business with the federal government might need IPv6 support much sooner than that. Starting this month, the Department of Defense is requiring IPv6 support in the systems it evaluates, and other agencies are likely to follow suit.

Cisco's results with IPv6 traffic were nearly identical to those with IPv4. The vendor's new 10G cards delivered line-rate throughput in all cases. Delay and jitter were actually lower with short- and medium-length IPv6 frames than with IPv4, and delay with long frames was only slightly elevated.

All public tests of IPv6 to date have focused on forwarding rather than routing, mainly because IPv6 routing protocols are only now coming to market. Cisco's WS-SUP720 management module supports OSPFv3, the IPv6-enabled version of the popular routing protocol Open Shortest Path First. This was the first appearance of IPv6 routing in a public test.

We used Spirent's TeraRouting software to advertise 100,000 unique routes (each representing one network) over OSPFv3 to the pair of Catalysts. Because address scalability is a major selling point for IPv6, we then sent traffic to each of 250 virtual hosts on all 100,000 networks. This works out to 250 million total flows: On each of two chassis, we offered traffic to 10 interfaces from 250 hosts, each sending traffic to 50,000 networks on the other chassis.
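
Those figures add up exactly as described, and laying out the multiplication makes the scale easier to grasp. In this accounting a "flow" is one source host paired with one destination network; the parameters below are taken directly from the test description.

```python
# How the 250 million IPv6 flows break down, per the test setup above.

CHASSIS = 2             # two Catalyst chassis
INTERFACES = 10         # traffic offered to 10 interfaces per chassis
HOSTS_PER_IFACE = 250   # emulated source hosts per interface
DEST_NETWORKS = 50_000  # networks each host targets on the other chassis

flows = CHASSIS * INTERFACES * HOSTS_PER_IFACE * DEST_NETWORKS
print(f"{flows:,} flows")                # 250,000,000

routes_advertised = 2 * DEST_NETWORKS    # 100,000 OSPFv3 routes in total
```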

To put this number in perspective, imagine we took the entire population of all U.S. states west of the Mississippi, gave everyone a computer, and routed all their traffic through one pair of Catalyst switches. To Cisco's credit, the Catalyst pair handled this load at line rate, with average delays in line with its IPv4 results.

QoS enforcement

Cisco's Catalyst also outperformed previously tested products when it came to QoS enforcement. In this test, we offered three classes of traffic and required the switch to deliver high-priority traffic with no loss, even during congestion.

We also required the switch to restrict low-priority traffic so that it never used more than 2G bit/sec of bandwidth. And just to make things interesting, we emulated 252 hosts on each of 20 edge ports - making 5,040 virtual hosts in all.

In previous tests, other vendors protected high-priority traffic but couldn't also rate-limit low-priority traffic. Cisco did both: The Catalyst 6500 not only delivered all high-priority traffic without loss, but also enforced our 2G bit/sec low-priority rate limit with 99.99 percent accuracy.
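
Rate-limiting a class to a fixed bandwidth is conceptually a policing job: out-of-profile frames are dropped so the class never exceeds its ceiling. The Catalyst does this in hardware; the token-bucket sketch below is only a generic illustration of the behavior we required of the low-priority class, not Cisco's implementation, and the burst size is an arbitrary value chosen for the example.

```python
import time

class TokenBucketPolicer:
    """Generic single-rate policer, shown only to illustrate the idea of
    capping one traffic class at a fixed bandwidth (not Cisco's design)."""

    def __init__(self, rate_bps: float, burst_bytes: int):
        self.refill_rate = rate_bps / 8.0   # tokens (bytes) added per second
        self.depth = burst_bytes            # maximum bucket depth
        self.tokens = float(burst_bytes)
        self.last = time.monotonic()

    def allow(self, frame_bytes: int) -> bool:
        """Forward the frame if enough tokens remain; otherwise drop it."""
        now = time.monotonic()
        self.tokens = min(self.depth,
                          self.tokens + (now - self.last) * self.refill_rate)
        self.last = now
        if self.tokens >= frame_bytes:
            self.tokens -= frame_bytes
            return True
        return False

# Low-priority class capped at 2Gbit/sec; the 256KB burst is illustrative.
low_priority = TokenBucketPolicer(rate_bps=2_000_000_000, burst_bytes=256_000)
```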

Failure? What failure?

Our failover tests assessed how quickly a switch reroutes traffic onto a secondary link upon failure of a primary circuit. We tested Cisco's failover with both OSPF and IEEE 802.3ad link aggregation.

In our OSPF failover tests, the Catalyst rerouted traffic in an average of 195 millisec. That's slightly better than the 237-millisec failover Foundry posted in a previous test. With link aggregation, the Catalyst reduced failover time to just 45 millisec.
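
Failover times in tests like these are normally derived from frame loss rather than read off a clock: at a known, constant offered rate, every lost frame corresponds to a slice of time during which traffic was black-holed. The sketch below shows that standard calculation with made-up numbers; it is our illustration of the method, not a record of the exact procedure or frame sizes used here.

```python
def failover_time_ms(frames_lost: int, offered_fps: float) -> float:
    """Estimate outage duration from frame loss at a constant offered rate."""
    return frames_lost / offered_fps * 1000.0

# Purely illustrative: at ~812,744 frames/sec (1,518-byte frames at 10G line
# rate), losing about 158,500 frames would imply a ~195-millisec outage.
print(round(failover_time_ms(158_500, 812_744)), "ms")
```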

Cisco says other switches had too easy a time because we used only a single flow in our previous failover tests.

Cisco contends that other switches are flow-based, meaning they build Layer-2 forwarding tables on a per-flow basis, so failover times climb as flow counts grow. In contrast, the Catalyst's routing keeps its Layer-2 flow counts small, which Cisco says lets it handle arbitrarily large numbers of flows with no performance hit.

We ran the failover test with 2 million flows, meaning that traffic for 1 million flows would be "failed over." The Catalyst's performance improved in this test, with OSPF failover taking 86 millisec and link aggregation failing over in 18 millisec. This validates Cisco's claim that it can support a large number of flows; we'll see how other vendors do in future tests.

Newman is president of Network Test, an independent benchmarking and network design consultancy in California. He can be reached at dnewman@networktest.com.
