Of the many factors that have helped drive down the price of Ethernet switch ports, the introduction of stackable communications boxes is among the most important. Work at the Tolly Group's lab has uncovered several areas in which stackables may fall short.
Stackable switches, whatever their topology, typically must deal with three significant issues. And to make matters worse, all three become bigger problems at higher speeds. What is a minor issue on an Ethernet switch becomes a potential show stopper on a Fast Ethernet switch.
The first problem is one of inadequate switch-to-switch bandwidth. While, say, a single 100Mbit/sec uplink between two 10Mbit/sec Ethernet workgroup switches might suffice, it clearly doesn't do the job if those switches are operating at 10/100Mbit/sec. Thus, effectively extending the backplane between switches to allow for nonblocking communication becomes a primary issue.
To solve the first problem, vendors typically trunk Fast Ethernet ports together -- forming one or more high-speed trunks -- to be used between switches. But this solution creates the second problem: overconsumption of ports.
This is a zero-sum game. For every 100Mbit/sec you add to the interswitch capacity, you lose a 100Mbit/sec user port. On a 24-port Fast Ethernet switch, how many ports do you dedicate to the interswitch link? Decide on a four-link trunk and you could have 2Gbit/sec worth of user traffic aimed at a 400Mbit/sec trunk.
Even devoting eight ports to interswitch traffic leaves you potentially oversubscribed by a factor of two.
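The arithmetic behind those figures can be sketched as a quick calculation (the port count and speeds below simply restate the 24-port example in the text):

```python
# Oversubscription on a stackable Fast Ethernet switch:
# every port given to the interswitch trunk is one less user port.

PORT_SPEED_MBPS = 100   # Fast Ethernet
TOTAL_PORTS = 24        # the 24-port switch from the example

def oversubscription(trunk_ports):
    """Ratio of worst-case user traffic to trunk capacity."""
    user_ports = TOTAL_PORTS - trunk_ports
    user_capacity = user_ports * PORT_SPEED_MBPS     # all users transmitting at once
    trunk_capacity = trunk_ports * PORT_SPEED_MBPS
    return user_capacity / trunk_capacity

for trunk in (4, 8):
    print(trunk, "trunk ports ->", oversubscription(trunk), "x oversubscribed")
# A 4-port trunk leaves 20 user ports: 2,000Mbit/sec aimed at 400Mbit/sec (5x).
# Even an 8-port trunk leaves 16 user ports: 1,600 vs. 800 (2x).
```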
The cost equation for user ports compounds the second problem. When you end up with only 16 usable ports on a 24-port switch, the price of the switch doesn't go down. In effect, each user port costs you more. And with only two-thirds of the ports available for users, you'll probably need to purchase more switches.
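The per-port cost effect is easy to make concrete (the switch price below is a hypothetical figure for illustration, not one quoted in the article):

```python
SWITCH_PRICE = 6000.0   # hypothetical list price for a 24-port switch
TOTAL_PORTS = 24

def cost_per_usable_port(trunk_ports):
    """Effective price of each port actually available to users."""
    usable_ports = TOTAL_PORTS - trunk_ports
    return SWITCH_PRICE / usable_ports

# Nominal cost: 6000 / 24 = 250 per port.
# With 8 ports given to the trunk: 6000 / 16 = 375 per usable port -- a 50% jump,
# even though the price tag on the box never changed.
```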
So most network managers will have to live with a potentially oversubscribed link between stackable switches. Given that most of the time there will be some local traffic or some ports will be inactive, interswitch congestion is not likely to be a constant problem. But what happens when performance problems occur?
That creates the third problem -- erratic response to congestion. Ideally, an internetwork device should offer a controlled response to any congestion situation. Perhaps certain ports or traffic streams would receive priority access to the limited interswitch bandwidth. Or, worst case, all streams would suffer equally, and there would be a general slowdown on sessions traversing the congested link.
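One possible controlled response, the "all streams suffer equally" case above, can be sketched as a proportional allocation of the congested link. This is only an illustration of the principle, not how any particular vendor implements it:

```python
def proportional_allocation(demands_mbps, link_capacity_mbps):
    """Scale every stream's demand by the same factor when the link is
    congested, so all sessions slow down equally instead of some being
    randomly starved."""
    total_demand = sum(demands_mbps)
    if total_demand <= link_capacity_mbps:
        return list(demands_mbps)  # no congestion: every stream is satisfied
    factor = link_capacity_mbps / total_demand
    return [d * factor for d in demands_mbps]

# Four 100Mbit/sec streams contending for a 200Mbit/sec trunk:
# each is throttled to 50Mbit/sec, a uniform, predictable slowdown.
print(proportional_allocation([100, 100, 100, 100], 200))
```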
Unfortunately, the hoped-for graceful degradation does not always occur. Tests show that often random discard is the order of the day. This, of course, would wreak havoc on the session traffic.
Fortunately, some vendors are well aware of these issues. Soon, some next-generation stackables will eliminate or at least mitigate some of these problems. In the meantime, network managers would do well to evaluate their current exposure to these kinds of problems.
Tolly is president of The Tolly Group, a strategic consulting and independent testing firm in Manasquan, New Jersey. He can be reached at +1 (732) 528-3300, firstname.lastname@example.org or www.tolly.com