Future of High-Speed Networks

FRAMINGHAM (02/10/2000) - Can government and academia really create a next-generation Internet with their two high-speed test bed networks, vBNS and Abilene? The jury is still out as National Science Foundation (NSF) funding for the projects dries up at the end of March. Both networks are still highly underutilized, neither has spawned many advanced applications, and any technology transfer to the commercial sector has gone largely unnoticed.

In 1995, the NSF commissioned MCI WorldCom Inc. to build the very high-speed Backbone Network Service (vBNS). The purpose of the network was to give leading U.S. research institutions high-speed connections to each other and to the NSF's national supercomputing centers at the University of Illinois at Urbana-Champaign and the University of California, San Diego.

The NSF wanted to repeat the success of ARPANET and NSFNet, which ultimately morphed into the Internet: create a much faster and more intelligent network, develop new classes of applications that can exploit such an infrastructure, and then watch entrepreneurs take the technology and run with it.

The vBNS backbone started out as an ATM cloud, but today it consists of two parallel networks - one using Cisco routers to run IPv6 over ATM at 622M bit/sec and the other using Juniper M40 routers to run IPv4 over SONET at OC-12 and OC-48. According to MCI WorldCom, the backbone encompasses some 20,000 route miles and serves more than 100 customers.

A second high-speed backbone began vying with vBNS for attention from the research community when the University Corporation for Advanced Internet Development's Internet2 group launched Abilene in early 1999.

"The idea wasn't to replace vBNS, but rather to provide a complementary network that lets Internet2 members test advanced applications and capabilities in an environment that more closely resembles the commercial Internet," says Internet2 spokesman Greg Wood. Internet2 got Qwest to light up some of its dark fiber and Cisco and Nortel Networks to donate routers and SONET gear.

Abilene is a packet-over-SONET network running IPv4 at OC-48 (2.4G bit/sec) and spanning some 13,000 route miles. Regional gigabit points of presence (gigaPOPs) connect multiple Internet2 institutions. Of Internet2's 170 university members, 86 are now on Abilene; 138 of them are connected to Abilene, vBNS or both.

The existence of two separate networks has caused some friction, but it also provides a more real-world testing environment. The networks can't be fully peered because only Internet2 members are allowed to use Abilene. However, an agreement between vBNS and Abilene lets institutions that connect to both networks communicate across the combined infrastructure and lets Abilene members access resources that are only found on vBNS.

A number of universities seem to feel more comfortable with Abilene. While MCI WorldCom owns and operates vBNS, the universities themselves run Abilene. Each member institution owns its own routers, and Indiana University handles day-to-day network operations.

Moreover, vBNS and Abilene members will soon need to find another way to pay for their high-speed connections. Both networks will continue to operate, but the NSF grant programs that subsidize the backbones expire March 31.

Membership benefits

Abilene and vBNS participants have certainly noticed a performance boost. For example, the National Institutes of Health's (NIH) National Library of Medicine holds the largest collection of biomedical information on the planet. The library's mission is to disseminate this data, which includes all the genetic sequences from the Human Genome Project, and the vBNS infrastructure now provides high-speed access.

Researchers at the University of Alabama at Birmingham were used to annoying delays when they accessed biological sequence and genomic databases at the NIH library, but UAB's Abilene connection changed all that. Now information crosses the vBNS-Abilene internetwork at warp speed, and screens come up almost immediately.

The high-speed connection also makes it easier for the UAB to maintain and update local copies of some of the remote data. Gene-sequence analyses that used to arrive on nine CD-ROMs every other month can now be downloaded over the network in half an hour, and the local databases can be updated much more frequently.
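Back-of-the-envelope, that transfer implies a sustained rate of roughly 26M bit/sec (assuming a typical 650MB CD-ROM, a figure the article doesn't give), as this quick Python check shows:

    # Rough throughput implied by the UAB example.
    # Assumption (not from the article): ~650MB per CD-ROM.
    cds = 9
    mb_per_cd = 650            # assumed CD-ROM capacity in megabytes
    seconds = 30 * 60          # half an hour
    megabits = cds * mb_per_cd * 8
    print(f"~{megabits / seconds:.0f}M bit/sec sustained")  # ~26M bit/sec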

However, participants acknowledge that the vBNS and Abilene networks are underutilized, and say that the next-generation applications are still largely missing. "We would like to be able to point to more advanced applications than we can today," admits William Decker, program director for advanced network infrastructure at the NSF in Arlington, Va.

Impatience for these developments is what motivated Qwest to get involved with Abilene. "We have this tremendous underlying asset, and we want demand for it to get created as quickly as possible," says Guy Cook, vice president of Internet services for Qwest in Denver. "The more broadband applications that appear in the marketplace, the better off Qwest will be. Participating in Abilene gives us a first-hand window into these applications as they are spun off."

Telemedicine is one example. During a surgery performed at Ohio State University, Abilene was used to conference with doctors from other parts of the country. Similarly, an MRI machine can scan a patient in one location, send the data to a remote supercomputer for processing, and deliver the resulting images to a doctor in a third location.

Other applications that have been demonstrated on vBNS and Abilene include HDTV transmission; remote control of telescopes and electron microscopes; and aligning massive distributed databases to look for patterns across them.

High-speed backbones such as Abilene and vBNS "provide a much more supportive environment for the development and deployment of such applications," says Robert Crawhall, director of research network initiatives at Nortel in Ottawa.

"Commercial networks have other priorities, such as solving very complicated network management problems due to heavy traffic loads."

Coming attractions

Internet2 members are working to develop middleware that will tie different databases together and let them communicate. "Right now, the Internet is person-to-machine or person-to-person, and we need database-to-database interaction," Cook says. "Until we solve this problem, much of the promise of e-commerce won't get realized."

Tele-immersion applications are another Internet2 target. Tele-immersion creates coordinated, partially simulated environments at geographically distributed sites so that users can collaborate as if they were in the same physical room. The computers track the participants and the physical and virtual objects at all locations, and project them onto stereo-immersive surfaces.

For example, a virtual workspace might include tele-cubicles that let people interact with each other and objects as if they were all in the same room.

Similarly, a public library might offer a 3-D tele-space in which a library user could experience some historic event and even interact with it.

Participants say multicast technology is getting a big boost from vBNS and Abilene. "A lot of commercial networks are multicast-enabled now, but you don't see much utilization of these capabilities," Cook says. "On Abilene, you do."

As a result, the multicast technology that reaches the commercial market is cleaner, richer and higher-performing.

And multicast isn't just for pay-per-view TV and other entertainment applications. It also has great potential as a tool for software distribution and network administration. Companies are using multicast to upgrade PCs around a network or to deliver software updates to partners in a supply chain.

If an SAP application had to update a file on thousands of computers throughout an enterprise using unicast, it would completely choke the network. But if the same update were sent via multicast, users wouldn't even notice it.
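To see why, consider a minimal sketch of one-to-many delivery over IP multicast in Python (the group address, port, TTL and payload below are illustrative assumptions, not details from the article):

    # One UDP send to a multicast group reaches every subscribed receiver;
    # a unicast design would repeat the same send once per machine.
    import socket

    GROUP, PORT = "239.1.1.1", 5007   # assumed administratively scoped group
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM, socket.IPPROTO_UDP)
    sock.setsockopt(socket.IPPROTO_IP, socket.IP_MULTICAST_TTL, 8)  # limit hop scope
    sock.sendto(b"software-update-chunk", (GROUP, PORT))

Every host that has joined the group receives that single datagram, so the sender's cost stays constant no matter how many machines subscribe.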

Cisco and Nortel say advances in routers, SONET equipment and quality-of-service technology are getting pushed to the commercial market faster because of the test beds.

"The test-bed environment lets us deploy leading-edge technology without worrying about the revenue loss that bugs might cause in a production network," says Michael Turzanski, deputy director of Cisco's Advanced Internet Initiative. "We can take more risks and find more bugs before the commercial release."

For example, Abilene's gigaPOPs - fan-in nodes that tie a lot of high-speed campus connections into a high-speed backbone - have been breaking new ground, says Nortel's Crawhall. "We're starting to see this commercially in specialized high-performance services. And the peering relationships Abilene has with the research networks of a number of other countries are teaching us about peering and service-level agreements among high-speed networks."

Test-bed members are also finding that networks start operating differently at such high speeds. Managing a net that is optimized for a few high-performance flows is very different from building and managing networks that are optimized for millions of low-speed flows. And the higher speeds expose a number of new problems - particularly last-mile connections and desktop systems that can't yet keep up.

"We have a high-quality physical infrastructure now," Decker says. "What we need is a better logical network infrastructure - one in which applications can ask the network to locate computing or storage resources and then configure themselves around those resources and present services to users."

ROI or ripoff?

But critics say such advances are more likely to come from the private sector.

"The government has far less expertise in picking winners and losers," says Solveig Singleton, director of information studies at the Cato Institute, a Washington think tank. She points to a General Accounting Office study a few years back which outlined the shortcomings of government-funded research and development and concluded that a large portion of it would have been funded by the private sector anyway.

Many industry observers believe that the commercial Internet grew out of government programs largely by accident. By the time the Web took off, the government had largely withdrawn from funding it. These pundits also point to the opportunity costs of government-funded projects.

"As amazing as the Internet is today, we can't assume that it is better than what would have happened if the tax money used to build ARPA and NFSNet had stayed in the private sector in the first place," Singleton says.

Internet backbone pioneer William Schrader, chairman of PSINet in Reston, Va., agrees.

"Commercial Internet technology and applications have already outstripped all of the vBNS and Abilene activity," he says. "They have no chance of keeping up.

The only reason the NSF investment might not have been totally wasted is that the funds were spent to educate graduate students. Future funding should focus on this, and not much else."

Breidenbach is a consultant and freelance writer in San Mateo, Calif. She can be reached at sbreidenbach@usa.net.
