Networking at the Speed of Light

FRAMINGHAM (07/26/2000) - Where can you find half the world's particle physicists, the pressure and excitement of a dot-com, and dizzying challenges in networking and data processing? Located just outside Geneva and straddling the border between Switzerland and France, CERN (the European Organization for Nuclear Research) is the world's leading particle physics research institute.

CERN explores what matter is made of and what holds it together by accelerating electrons and positrons to a fraction under the speed of light and then smashing them together. CERN is also the place where Tim Berners-Lee invented the World Wide Web and early implementations of network technologies like Gigabit Ethernet were tested. Network World recently visited CERN to find out more about the network at the heart of this state-of-the-art facility.

Evolution of a network

By the early 1990s, the CERN network had become unreliable and unwieldy. The network was flat and consisted of several 100M bit/sec FDDI rings, daisy-chained 10Base-5 and 10Base-2, and more than 7,000 directly connected nodes. "It was a nightmare," says Jacques Altaber, group leader of Communication Systems within CERN's IT division. "It had grown far beyond its capacity to serve the users."

Part of the reason the network was a mess was lack of centralized planning.

"The physics people often built their own networks," Altaber says. "It was anarchy."

As a result, Communication Systems began several network upgrade projects to provide reliable shared 10M bit/sec connections across the campus. In 1994, the group began replacing the coaxial cable in its 300 buildings and labs with a structured Category 5 cabling system. Today, the campus has 25,000 LAN drops, which connect 15,000 systems. Altogether there are 1,500 hubs located at 70 distribution points across the campus.

Shortly after the cabling project began, IT also set out to convert the existing bridged network to a routed network. This resulted in 700 subnets implemented using 120 routers. The goal was to improve performance and reliability by isolating the backbone from users. Staffers implemented static IP addressing instead of Dynamic Host Configuration Protocol for greater control over the network.
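
As a rough sketch of what that addressing job involves, the Python snippet below carves fixed-size subnets out of a single campus block. The 10.0.0.0/14 prefix and the /24 subnet size are assumptions chosen for illustration, not CERN's actual address plan.

```python
import ipaddress

# Hypothetical campus block and subnet size -- illustrative only,
# not CERN's actual address plan.
campus_block = ipaddress.ip_network("10.0.0.0/14")
subnets = list(campus_block.subnets(new_prefix=24))

print(f"{campus_block} yields {len(subnets)} /24 subnets")   # 1024
print("first few:", [str(s) for s in subnets[:3]])

# With static addressing, each subnet (CERN ended up with about 700) is
# assigned to a building, lab or experiment, and every host address in it
# is recorded by the operators rather than leased dynamically via DHCP.
```

The trade-off is the familiar one: static assignment gives operators an authoritative map of which address belongs to which machine, at the cost of the per-host bookkeeping that DHCP would otherwise automate.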

To accommodate increasing traffic, in 1996 Communication Systems built a new switched FDDI backbone comprising 36 100M bit/sec FDDI rings interconnected using four Digital FDDI GigaSwitches to provide 3.6G bit/sec throughput. Cisco 4700 FDDI routers at the edge of the backbone provided distribution points to the cabling system.

By the time the network overhaul was nearing completion in 1997, however, new physics experiments were already stretching the capacity of the new FDDI backbone. For example, a single experiment alone generated 200 terabytes of data per year transferred at rates of 25M to 100M byte/sec over the backbone.
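
Back-of-envelope arithmetic, using the figures above, shows why a single experiment at those rates strained a backbone built from 100M bit/sec rings. The calculation below is only illustrative.

```python
# Figures from the article; the arithmetic is only illustrative.
RING_MBIT = 100                      # one FDDI ring: 100M bit/sec
RINGS = 36
print(f"Aggregate backbone capacity: {RING_MBIT * RINGS / 1000:.1f}G bit/sec")  # 3.6

DATA_BYTES = 200e12                  # one experiment: 200 terabytes per year
for rate_mbyte in (25, 100):         # transfer rates of 25M to 100M byte/sec
    days = DATA_BYTES / (rate_mbyte * 1e6) / 86_400
    load_mbit = rate_mbyte * 8       # sustained load placed on the backbone
    print(f"at {rate_mbyte}M byte/sec: ~{days:.0f} days of transfer, "
          f"a sustained {load_mbit}M bit/sec ({load_mbit // RING_MBIT} rings' worth)")
```

At the top end, one experiment alone needed roughly eight rings' worth of sustained bandwidth.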

Moreover, network technology was evolving faster than expected. Fast Ethernet was becoming cost-effective for the desktop, and Gigabit Ethernet was beginning to emerge as viable backbone technology.

That August, CERN installed its first Fast Ethernet switch to support one of the experiments. It was such a success that everyone wanted it, and CS soon realized that mass deployment of Fast Ethernet would break the model of the existing FDDI-based backbone.

Upgrading the backbone yet again became a top priority. In January 1998, CS did a pilot implementation of Gigabit Ethernet to provide a single 7-kilometer-long, 1G bit/sec link between two buildings on campus, a distance record at the time. After the successful pilot, CS decided to upgrade the switched FDDI backbone to a fully routed Gigabit Ethernet backbone. "We decided on a routed network because we felt it is much easier to implement quality of service [QoS] in a router than a switch," says Jean-Michel Jouanigot, head of the campus network group, who is overseeing the upgrade. He intends to use multicast QoS to marry CERN's voice, video and data traffic in the future.

Along with the budgetary constraints typical for a publicly funded organization, CERN also had stringent technical requirements for a Gigabit Ethernet router. The product chosen needed to be inexpensive and support open standards and multiple protocols. It also had to be nonblocking, operate at wire speed and support multicast QoS.

The group decided that Enterasys Networks met all those requirements, and chose its SmartSwitch Router (SSR) 8600 to form the core of the new backbone. In February 1999, Communication Systems upgraded the core FDDI switches to an FDDI/Gigabit Ethernet switch and installed the first few Gigabit Ethernet routers. CERN was on the cutting edge at that time because the standard for Gigabit Ethernet wasn't finalized until later that year.

The backbone is now made up of 10 SSR routers connected in a star topology to two central SSR 8600 routers. The central routers are linked together to provide load balancing and redundancy, and one of them is connected to the legacy FDDI backbone. Forty Cabletron SSR 2000 routers sit at the edge of the backbone, and those in turn are connected to workgroup switches such as the 3Com Corp. SuperStack II Fast Ethernet switch.

Communication Systems expects to deploy another four SSR 8600s and 20 SSR 2000s and phase out the old FDDI backbone completely by mid-2001. So far, the entire computer center and 30 percent of the CERN campus have been upgraded to Gigabit Ethernet, with the rest planned for completion by 2001.

The rollout is a lengthy process because of the constantly changing research environment. "Right now, we are getting something like 1,000 work orders per month," Jouanigot says. "The requirements of users are changing from day to day, which is the reason that we aren't using [virtual LANs]." But when an upgrade happens, it's done quickly. "Users have only a 90-second interruption as the backbone is switched over for their building or experiment," Jouanigot says. So far, CERN has had no problems with the new backbone.

Although the Gigabit Ethernet rollout isn't complete, Communication Systems is already looking toward 10-Gigabit Ethernet to provide the massive bandwidth needed for future experiments associated with a planned upgrade of CERN's particle accelerator. It has formed a working group to evaluate upgrading the present Gigabit Ethernet backbone to 10G bit/sec Ethernet.

Another option being considered is 9.6G bit/sec Synchronous Digital Hierarchy using Packet Over SONET, but this is likely to be more expensive than 10-Gigabit Ethernet.

CERN's IT division also faces several other challenges before the new accelerator is turned on in 2005. Data processing is done by low-cost server farms based on dual-processor PCs; there are a dozen farms with about 100 processors per farm. However, to support the anticipated flood of data from the upcoming experiments, IT will need to expand these farms by a factor of 20 or more. This will require the construction of new buildings to house 25,000 more PCs.
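
A quick sanity check of those numbers, using the counts quoted above, gives a sense of the scale involved; the arithmetic is only illustrative.

```python
# Counts quoted above; the arithmetic is only an illustrative sanity check.
farms = 12                   # "a dozen farms"
processors_per_farm = 100    # "about 100 processors per farm"
current = farms * processors_per_farm          # ~1,200 processors today

scale_factor = 20                              # "a factor of 20 or more"
future = current * scale_factor                # ~24,000 processors

print(f"today:  ~{current:,} processors across {farms} farms")
print(f"target: ~{future:,} processors after a {scale_factor}x expansion")
# With dual-processor PCs, the expansion works out to tens of thousands of
# additional machines -- hence the need for new buildings to house them.
```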

Data storage will be another issue to tackle, as the experiments are expected to generate several petabytes of data per year. Finding the right technology to safely archive this data while keeping it online for analysis is a major challenge. "We would love a breakthrough in data storage technology," says Eric McIntosh, leader of the physics data processing department at CERN.

CERN now uses high-speed tape libraries for offline data storage and SCSI-based network-attached storage (NAS) systems. McIntosh is considering replacing the existing NAS systems with low-cost Integrated Drive Electronics (IDE) disk servers or a storage-area network.

And making data available to physicists around the world will pose yet another challenge. CERN has phased out most leased lines over the last several years in favor of moving experimental data over the Internet. CERN currently hosts two main WAN links to the Internet: a 40M bit/sec link to the Trans European Network, which is mainly for European academic and research traffic, and a 45M bit/sec transatlantic circuit to Chicago to connect physicists in the U.S. and Canada.

CERN plans to upgrade both links to 155M bit/sec, and will eventually boost the European link to 310M bit/sec.
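
To put those link speeds against the petabyte-per-year data volumes mentioned earlier, here is a rough, best-case calculation. It assumes the full link rate with no protocol overhead, contention or retransmission, which real transfers never achieve.

```python
# Illustrative only: assumes the raw link rate is fully available.
PETABYTE_BITS = 8e15                 # 1 petabyte = 1e15 bytes

links_mbit = {
    "transatlantic link today": 45,
    "planned upgrade": 155,
    "planned European link": 310,
}
for name, mbit in links_mbit.items():
    days = PETABYTE_BITS / (mbit * 1e6) / 86_400
    print(f"{name} ({mbit}M bit/sec): ~{days:,.0f} days per petabyte")
```

Even at 310M bit/sec, a single petabyte would tie up the link for the better part of a year, which is why distributing experimental data worldwide remains such a challenge.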

All this work must be done with fewer and fewer people, due to budget cuts.

CERN's IT division will reduce its staff by one-third over the next five years, mainly through attrition. As a result, IT is considering outsourcing more of its systems management and administrative functions and implementing service-level agreements.

And of course, CERN also faces the same IT recruitment challenges that plague the private sector. The organization especially needs Linux developers. Manuel Delfino, the IT division leader, hopes that by collaborating on projects with IT industry partners he can make CERN an exciting place for talented people to work. "Sometimes I joke that I'm trying to create an IT division dot-com at CERN, and some of the young staff are definitely energized by this concept."

How do all these challenges measure up? "Today, I believe we see the light at the end of the tunnel," Delfino says. CERN's IT department has made great strides over the last five years, and is looking forward to the challenges of the next five with lots of excitement and just a little bit of anxiety.

Tulloch lives in Winnipeg, Manitoba, and is the author of several popular IT books, including the Microsoft Encyclopedia of Networking. He also has a background in physics. He can be reached at info@mtit.com.
