While we manage our businesses' technology infrastructure, scientists and students at more than 100 universities and scores of companies are working on a new version of the Internet. It goes by several names -- Internet2, Next Generation Internet or Abilene -- depending on which group of universities and companies is testing what. But at its essence is a flurry of experimentation with new protocols and higher speeds. Much higher speeds.
A consortium of universities and companies called the University Corporation for Advanced Internet Development (UCAID) is coordinating most of the experimentation. Many of the experiments connect to the National Science Foundation's high-performance Backbone Network Service, managed by MCI WorldCom, which runs today at 622Mbps but soon will run faster than 2Gbps. Several gigaPOPs -- high-speed points of presence for access to Internet2 -- are in operation at universities, offering 622Mbps access to customer end points; at least one metropolitan fibre network in the experiments already runs at 2.488Gbps.
Although it's not officially part of the Internet2 or UCAID experimentation, another development, Project Oxygen, also will affect the performance of Internet2. The project is a global undersea fibre cable network with 99 landing points in 78 countries that is planned to have a minimum throughput of 1.28Tbps. The engineering began last quarter, and the first phase of the network is slated for completion in 2003. More than 80 carriers have promised to invest in the network.
Originally, Internet2 was mostly concerned with testing the next version of the Internet Protocol (IPv6) and finding new ways to route broadcast messages. But now the most intense focus seems to be on developing protocols that permit different quality-of-service levels. These two areas, along with the prospect of much higher speeds in the network, portend a lot of change for managers of corporate networks and commercial Web sites.
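IPv6's most visible change is the jump from 32-bit to 128-bit addresses. A minimal sketch of what that looks like in practice, using Python's standard ipaddress module (a modern illustration of the address format, not part of the Internet2 test code itself):

```python
import ipaddress

# An IPv4 address offers 32 bits of address space; IPv6 offers 128.
v4 = ipaddress.IPv4Address("192.0.2.1")
v6 = ipaddress.IPv6Address("2001:db8::1")   # "::" compresses a run of zero groups

print(v4.max_prefixlen)   # 32
print(v6.max_prefixlen)   # 128
print(v6.exploded)        # 2001:0db8:0000:0000:0000:0000:0000:0001
```

The compressed "::" notation matters in practice: a 128-bit address written out in full is unwieldy, and most real IPv6 addresses contain long runs of zeros.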
Some time after 2000, we'll have Internet options that rival leased lines and virtual private networks for reliability and security, as well as oodles more bandwidth. The new protocols mean that we can guarantee the level of service needed for the various parts of our networks; they also mean we can distribute video, audio and data signals in broadcast fashion in ways we can't today. With Internet2, much of the distinction between LANs and WANs will go away, voice over IP will become a viable option for corporate voice networks, Web sites will be orders of magnitude more interactive without hurting the visitor's experience and videoconferencing will become more than a niche application.
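One concrete form these per-flow service levels took is differentiated services: each IP packet carries a six-bit code point that routers can map to a forwarding priority. A minimal sketch of marking a socket's traffic this way in Python (a mechanism shown purely for illustration; DSCP value 46, "expedited forwarding", is the standard class for latency-sensitive traffic such as voice):

```python
import socket

# DSCP 46 ("Expedited Forwarding") marks latency-sensitive traffic such
# as voice over IP.  The DSCP occupies the top six bits of the IP TOS
# byte, hence the shift by two.
EF_DSCP = 46

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.setsockopt(socket.IPPROTO_IP, socket.IP_TOS, EF_DSCP << 2)

# Read the marking back.  Note that setting it only expresses a request:
# routers along the path decide what priority the marking actually earns.
tos = sock.getsockopt(socket.IPPROTO_IP, socket.IP_TOS)
print(tos >> 2)   # 46
sock.close()
```

The key design point is that the marking travels in every packet header, so routers need no per-flow state to honour it -- which is what makes the scheme workable at backbone speeds.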
Add in some new network management tools, directory services and perhaps even technologies such as Sun's Jini networked-device autoconnection scheme, and the idea of thin-client computing may come back to life: the network would handle data and content storage; intermediate servers, the application logic; and client devices, the complex user interfaces, applet management, caching and background tasks such as encryption and compression.
In short, the very infrastructure of computing could change, which would make many of our specific technical skills -- such as LAN troubleshooting -- obsolete and some of our generic management skills -- such as managing change -- critical. See you around cyberspace2.