I got a promotional email from John McQuillan the other day. It was pushing his Next Generation Networks (NGN) conference at the end of October in Washington DC, titled "Rethinking Routing". But the gist of the message seemed to reflect more a fondness for technologies past than a rethinking.
According to John, the reason it may be time to rethink Internet routing is the pending explosion in the availability of optical networking technologies. Wavelength-division multiplexing, dense wavelength-division multiplexing, optical multiplexers and all-optical cross-connects are being deployed, or are about to be deployed, over the rapidly expanding web of optical fibres crisscrossing the country. These fibres, as a US Federal Communications Commission official noted, are being deployed, counting each fibre separately, faster than the speed of sound.
This new network will have the ability to be reconfigured in real time, and that opens the door to new technologies that could reroute IP traffic in response to congestion in the network. The multibillion-dollar prices recently paid for companies in the optical networking field -- even companies without any real products -- show that John is not alone in thinking this area is important.
I don't disagree with the above, but I do wonder whether John's leap from the ability to do agile networking on optical networks to the conclusion that much of it would be useful is supported by the needs of the network of the future. I come down to the same issue that has kept me from endorsing a number of "advances" in Internet technology in the past. Most of these advances try to remake the datagram-based Internet into a circuit-based clone of the phone network. But Internet traffic has little in common with phone traffic; even Internet-based phone traffic may have little in common with traditional phone traffic.
Circuit-based technologies such as ATM, SONET and Multiprotocol Label Switching are used in large US ISPs these days to balance traffic between pairs of cities. Doing this type of balancing on a real-time basis may become useful in the future, but I question going much finer in granularity than city pairs, or perhaps a few aggregate quality-of-service classes between city pairs. Because big ISPs have points of presence in relatively few cities, this does not amount to many circuits -- not nearly enough to justify the hype that is going around.
This view runs counter to John's and those of many other pundits who feel that the ability to be agile will spawn many more circuits in the Internet. But it is consistent with Internet history, which has weathered many other attacks on the utility of datagram-based networking. Determining the real trends will take some time, and if I'm wrong, I'm sure someone will remember.
Disclaimer: I'll be chairing a session or two and giving a tutorial at NGN, but as far as I know Harvard will not be attending, and the above observations are my own.
Bradner is a consultant with Harvard University's University Information Systems. He can be reached at firstname.lastname@example.org.