Gigabit Ethernet to the desktop isn't for every infrastructure, but it's become a popular push by both switching vendors and network admins. And no wonder: Servers generally connect to the network over bonded gigabit links, the cost of Gigabit Ethernet closet switches is dropping, and many corporate desktops now ship with Gigabit Ethernet NICs by default.
The problem is that many infrastructures can't push much more than a gigabit or two from the core to the closet switch itself. Thus, 48 Gig ports ride 1Gbps or 2Gbps uplinks back to the core, a 48-to-1 or 24-to-1 oversubscription ratio that can significantly reduce the available throughput within that closet.
To combat this issue, Cisco recently revealed a new supervisor engine for its Catalyst 4500 series modular switch, specifically aimed at the 4503, 4506 (which I tested), and 4507R. Sporting two Xenpak 10 Gig ports, the new Cisco Catalyst 4500 Series Supervisor Engine II-Plus-10GE is really an edge supervisor, providing rudimentary routing capabilities but extensive QoS support. It's not really suitable for core switching duties, given the lack of routing protocol support beyond RIP (Routing Information Protocol), but in the lab it made an impressive showing for duties at the network edge.
A few trillion packets, give or take
In order to put the new Catalyst 4506 with Supervisor Engine II-Plus-10GE through its paces, I teamed the switch up with a Cisco 4948-10GE switch with full core switching and routing functions. The 4948-10GE has 48 Gig ports and two 10-Gig Xenpak uplink ports. To test the 4506, I relied on a Spirent TestCenter SPT-5000A armed with 16 Gig copper ports and two 10-Gig ports. The SPT-5000A proved absolutely invaluable throughout the testing and allowed me to really stress the 4506 and the new supervisor under a wide variety of simulated conditions.
First, I linked the 4948 and the 4506 via a single 10-Gig connection and ran the Spirent 10-Gig test modules into the remaining 10-Gig ports on either switch. The tests, running IP traffic at varying packet sizes, were aimed at maxing out the single 10-Gig uplink between the two switches. Throughout this test, packet loss was absolutely zero, and the wire-rate 15 mpps (million packets per second) was consistently achieved.
Next, I kept the single 10-Gig link between the two switches and ran eight copper Gig connections to each switch, balanced among three six-port Gig blades on the 4506, and across several ASICs in the 4948. Running fully meshed throughput tests with packet sizes ranging from 64 to 1,518 bytes between the two switches, I again witnessed wire-rate performance with packet loss measured in hundredths of a percent. With this same test bed, I ran more tests to exercise the address learning and broadcast forwarding functions, and again the 4506 performed flawlessly.
Per VLAN, per port
It was time to give the per-port and per-VLAN QoS a try. I constructed a rather large QoS match list comprising more than 500 TCP and UDP (User Datagram Protocol) ports and bound that ACL (access control list) to a QoS configuration on every Gig port on the 4506. The QoS policy limited the bandwidth of any inbound or outbound flow that matched the ACL to 20Mbps, dropping packets that exceeded this threshold. Again running with the 16 Gig connections split between the two switches, I ran another meshed test with the Spirent TestCenter SPT-5000A. I still couldn't make the 4506 break a sweat.
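For readers who want to replicate this kind of policy, a configuration along these lines is the typical IOS approach: an extended ACL feeds a class-map, a policy-map polices matching traffic to 20Mbps, and the policy binds to each Gig port. This is a sketch only; the names are hypothetical, the ACL here shows just two of the 500-plus entries used in testing, and exact police syntax varies by IOS release.

```
! Hypothetical names; abbreviated ACL (the real test list had 500+ entries)
ip access-list extended QOS-LIMIT
 permit tcp any any eq 80
 permit udp any any eq 5060
!
class-map match-all RATE-LIMITED
 match access-group name QOS-LIMIT
!
policy-map LIMIT-20M
 class RATE-LIMITED
  ! police to 20Mbps, dropping out-of-profile packets
  police 20000000 bps conform-action transmit exceed-action drop
!
interface GigabitEthernet2/1
 service-policy input LIMIT-20M
 service-policy output LIMIT-20M
```

Binding the policy in both directions is what enforces the limit on inbound and outbound traffic alike.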
I then adapted the test on a per-VLAN basis, assigning several VLANs to each active port to simulate a VoIP/workstation scenario, this time limiting the bandwidth in a similar fashion on one specific VLAN, and reran the tests. Again, the 4506 performed with aplomb, successfully keeping up with the traffic flow at the highest levels that I could push it.
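The per-VLAN variant reuses the same policy-map but attaches it to a specific VLAN on the port rather than to the whole interface. On the Catalyst 4500, this is done in a vlan-range submode under the physical interface; again a hedged sketch, with a hypothetical VLAN number and policy name, and syntax that can differ between IOS releases:

```
! Limit only VLAN 100 on this trunk port; other VLANs ride unpoliced
interface GigabitEthernet2/1
 switchport mode trunk
 vlan-range 100
  service-policy input LIMIT-20M
```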
In another test run, I began the meshed streaming test with QoS disabled on the 4506 and then enabled QoS halfway through the test. There was a several-second hesitation in the CLI but no other ramifications of popping the clutch in this fashion, as might be necessary in production. The limiting began immediately and performed as expected.
I performed further testing using EtherChannel to bond the two 10-Gig uplinks between the switches, and again ran meshed 16-port Gig throughput and forwarding tests. The balancing between the uplinks is fully configurable and dealt well with the load presented. Using that test bed and 802.1q trunking between the two switches with the 4948-10GE as a switching core, I spanned the test links across eight VLANs and again ran throughput tests. Again, the resulting packet loss was measured in hundredths of a percent, or basically indistinguishable from wire-rate 10 Gigabit Ethernet.
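The bonded-uplink test bed boils down to a few lines of configuration on each switch: the two supervisor 10-Gig ports join a channel group, the resulting port-channel carries an 802.1q trunk, and a global command selects the load-balancing hash. A sketch, assuming the standard supervisor uplink names and a hypothetical channel-group number and VLAN range:

```
! Choose how traffic is balanced across the bonded links
port-channel load-balance src-dst-ip
!
interface range TenGigabitEthernet1/1 - 2
 switchport mode trunk
 channel-group 1 mode on
!
! Created automatically by channel-group 1; trunk the test VLANs
interface Port-channel1
 switchport mode trunk
 switchport trunk allowed vlan 10-17
```

The same channel-group and trunk settings must match on the far-end switch, or the bundle won't come up cleanly.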