Surfing the Tsunami

Editor's note: Due to the sensitivity of senior management to publicity about network attacks, names and locations have been changed to protect the innocent.

We will say the event occurred at a major Southeastern research university with approximately 40,000 active machines on the network.

July 20, 2000, a Thursday, began much like any other day for us. There were trouble tickets to handle, minor problems to fix, and new hardware to install.

We had experienced some random, unexplained outages in the preceding weeks, but there were no indications that we were about to spend the next few days suffering through a massive distributed denial-of-service attack.

The first faint signs of the impending attack had whispered down the wires earlier that morning. Router processor loads spiked as the front edge of the packet tsunami rolled across the network. As the storm picked up momentum, enterprise-class Unix machines dropped their network connections, e-mail services ground to a halt and Web servers went into fibrillation. Chaos reigned as the swell of bogus packets threatened to overwhelm our network backbone. To save the network, we had to spring into action. The following account describes how we discovered the attack, identified the machines being used to perpetrate the attack, and what measures we put in place to prevent another attack.

Approaching distributed denial-of-service day

In the weeks before the attack, some strange outages had occurred across the campus network. Systems that were normally rock-steady were dropping sessions.

Packet loss approached 30 percent on some switches. The problems occurred at random times and were of varying duration.

Users were getting annoyed. The systems staff was giving us the collective evil eye. We were pulling our hair out. Adding fuel to the fire, we had recently upgraded the section of the network where we were having the most problems from switched 10M bit/sec Ethernet to switched 100M bit/sec Ethernet with Gigabit Ethernet uplinks. We were naturally suspicious of the new hardware and spent over a week chasing ghosts in the new electronics to no avail.

10:15 a.m.

Our first break in the pelting distributed denial-of-service storm came midmorning. "Keith," one of our senior network engineers, was in the main router center when another case of packet loss and missed pings started to occur. He made an on-the-spot decision to start what we fondly call "troubleshooting by yanking cables."

While it can be extremely effective in isolating a problem, this method of troubleshooting is a last resort. It's a simple procedure: When the network begins to melt, start disconnecting buildings from the router until the problem goes away. Congratulations. You've isolated at least part of the problem. This is an extreme technique and should not be undertaken lightly. If it doesn't work, you'll have a lot of explaining to do. Fortunately for us, we struck gold.

We isolated the problems to one section of the network contained in one building on campus.

Probing the net

Isolating the problems was only the first step in what would be a long process.

We still had to determine what was going on - at this point we didn't know we were being attacked - and find out which machines were responsible. The first swell had passed, but we feared that waves like this always come in sets of three.

Noon

As soon as I received word that my staff had localized the problems, I had them deploy one of our newly built intrusion-detection probes on the segment under siege. The probe turned out to be the key that unlocked the puzzle.

Some background: We had evaluated several commercial intrusion-detection systems and found them to be cumbersome and expensive. We also looked at several network management probes and found that while they provided good information on network utilization and traffic trends, they weren't really designed for detecting denial-of-service attacks. So we rolled our own.

We chose a Hewlett-Packard Co. Pavilion 6630 for our base platform. The Pavilion 6630 has a 500-MHz Celeron processor and came with 64M bytes of memory. We upgraded the memory to 192M bytes, giving us a large capture buffer.

When you're working with data rates of more than 100M bit/sec, buffer space fills up fast. For a network interface card (NIC), we chose NetGear's GA-620 Gigabit Ethernet interface.

Don't scrimp on the network card; a dime store NIC can really bog down the system. Remember that the primary function of this system is to grab data off the network and process it quickly. We chose Red Hat Linux 6.2 as the base operating system because it's stable, fast and easily customizable.

To complete the package, we added three crucial Linux applications (a rough start-up sketch follows the list):

-- Snort, a packet sniffer/logger and lightweight intrusion-detection system.

-- Iptraf, a network monitoring utility for IP networks.

-- Tcpdump, a packet capture and dump program.
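
For readers who want to assemble a similar probe, here is roughly what bringing it online looks like. This is a sketch rather than our exact procedure: the interface name (eth0), directories and file names are placeholders, and the switches shown are each tool's standard options.

  # Raw capture for later analysis: -s 1514 grabs full Ethernet frames,
  # -n skips DNS lookups so the capture keeps pace with the wire.
  tcpdump -i eth0 -n -s 1514 -w /var/capture/segment.pcap

  # Snort in intrusion-detection mode: -c points at the rules file,
  # -l sets the directory where alerts and packet logs are written.
  snort -i eth0 -c /etc/snort/snort.conf -l /var/log/snort

Either command will happily run for days; keep an eye on disk space when you're writing raw captures at 100M bit/sec.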

A probe is useless unless it is located where it can see all traffic on the network segment being monitored. Our network is completely switched - each port receives only those packets destined for the machine connected to it. So how did we put the probe in the middle of all the activity?

We took advantage of the port-mirroring feature on our Cisco Systems Inc. Catalyst 6509 switches. Port mirroring lets you configure one port on a switch to echo all traffic destined to, and coming from, another port. While port mirroring is an incredibly useful tool to use in fending off an attack, it can consume significant CPU resources while it's active. We only use port mirroring when absolutely necessary. We use a passive tap - an optical probe or an Ethernet repeater - for long-term monitoring. We connected the probe to an unused port, set the port to mirror the traffic on the segment we wanted to watch, and we were in business.
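
Depending on whether the Catalyst runs CatOS or native IOS, the mirroring setup - Cisco calls it SPAN - looks something like the following. The module and port numbers are made up for illustration; 3/1 stands in for the port carrying the suspect segment and 5/48 for the unused port holding the probe.

  ! Native IOS syntax: mirror both directions of 3/1 onto 5/48.
  monitor session 1 source interface GigabitEthernet3/1 both
  monitor session 1 destination interface GigabitEthernet5/48

  ! CatOS syntax for the same thing.
  set span 3/1 5/48 both

Remember to tear the session down when you're finished; as noted above, mirroring isn't free.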

1 p.m.

As soon as we activated the probe, we fired up Iptraf. Iptraf is an excellent tool for getting a bird's eye view of what's happening on your network by providing statistics on bandwidth, packet size and TCP/User Datagram Protocol (UDP) port usage.

In this case, the port statistics screen immediately alerted us that something strange indeed was happening on the network.

The majority of traffic on a typical network is divided between the most frequently used services: Web, e-mail, FTP and Napster. We saw a completely different picture. Every port below 1024 was in use, had equal byte and packet counts, and the counts were rapidly increasing as we watched. It didn't take a NASA meteorologist to figure out that the tidal wave was about to hit.
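
Iptraf builds these counters live off the wire. For anyone who wants to reproduce the same kind of port-distribution check offline from a saved capture, a rough script might look like the one below. The scapy library is our assumption here (it postdates this story), and capture.pcap is a placeholder for whatever file your sniffer wrote.

  # Count TCP packets per destination port in a saved capture and see
  # how many low-numbered ports are in play. Assumes scapy is installed.
  from collections import Counter
  from scapy.all import rdpcap, TCP

  port_counts = Counter()
  for pkt in rdpcap("capture.pcap"):      # placeholder file name
      if TCP in pkt:
          port_counts[pkt[TCP].dport] += 1

  low_ports = {p: c for p, c in port_counts.items() if p < 1024}
  if low_ports:
      print(len(low_ports), "distinct ports below 1024 in use")
      print("packet counts range from", min(low_ports.values()),
            "to", max(low_ports.values()))

On a healthy segment you'll see a handful of busy ports; during the attack, virtually every port below 1024 showed up with near-identical counts.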

While Iptraf is a fantastic "what's happening on the network" tool, it couldn't capture packets and analyze them. Once we realized there were some bizarre traffic patterns on the network, we knew we had to get inside the packets and determine what was happening.

To dig deeper, we called on Snort, a free, open source application written by Martin Roesch, director of Forensic Systems at Hiverworld in Berkeley, Calif. Snort is a lightweight intrusion-detection system, but don't let the term "lightweight" throw you. This is serious software and is our intrusion-detection code of choice.

Snort uses a flexible rules language that lets users describe what types of traffic should either be captured or ignored. When a rule is triggered, Snort grabs the traffic and writes an alert to a log file. In an ideal world, Snort would have immediately detected that we were being attacked. Unfortunately, this wasn't the case. The initial burst of activity that led to the isolation of the problem only lasted 20 minutes. By the time we had the probe installed and finished using Iptraf, the bad guys had taken a break. The initial wave had subsided. But another storm was brewing.
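
Before picking the story back up, a quick flavor of what that rules language looks like. The rule below is illustrative, not one of our production signatures; 10.1.0.0/16 stands in for the local segment. On its own a rule like this would be hopelessly noisy - every legitimate connection starts with a SYN - which is exactly the tuning problem we ran into later.

  # Flag inbound TCP packets with only the SYN bit set that are aimed
  # at low-numbered ports on the local segment (placeholder address).
  alert tcp any any -> 10.1.0.0/16 :1024 (msg:"Possible SYN flood to low port"; flags:S;)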

3:30 p.m.

About two hours after we installed the probe, our pagers started beeping incessantly. The black hats were back, and Snort had served us well as our early warning system.

With a quick perusal of the Snort logs, we determined that we had been hit by a distributed denial-of-service attack known as Shaft. This attack uses a TCP SYN flood technique to overwhelm the target machines. In most cases, the attackers and targets are widely separated. Not so this time. Our crafty friends were using hosts on the same network segment to attack each other. This really posed a problem for us. Confining the attack to one IP subnet made it much harder for us to establish the identity of the machines being used.

Most distributed denial-of-service attacks use a cloaking method known as IP spoofing, which forges the source IP address of the attacking host. The attacker doesn't need to get data back from the systems it's pounding on, so having a bogus return address isn't a problem - for the attacker. But it was a problem for us because there were more than 300 machines on that network segment, and we didn't have a clue about which hosts had been compromised.

Snort came to our rescue. Using the software to replay network activity from a captured file, we determined that the rogue machines on our network were set in motion by a controlling host in Europe. Further analysis of the packet dump turned up at least three local Unix machines involved in the attack.
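
Replaying a capture is just a matter of pointing the tools at a file instead of the wire. A rough sketch of the sort of commands involved - file names and the controller's address are placeholders:

  # Run a saved capture back through Snort's rules offline.
  snort -r /var/capture/segment.pcap -c /etc/snort/snort.conf -l /tmp/replay

  # Pull out packets exchanged with a suspected controlling host, plus
  # anything with the SYN bit set (tcp[13] & 2 != 0 tests that bit).
  tcpdump -n -r /var/capture/segment.pcap 'host 192.0.2.10 or tcp[13] & 2 != 0'

The idea is that traffic to and from the controller stands out once the flood itself is filtered away.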

4:45 p.m.

The first step we took to block the bad boys was to put ingress/egress filters on our Internet router. Script kiddies that run denial-of-service attacks usually have more than one host they can use to control an attack, so we didn't hold much hope for keeping them out for long, but we got lucky. As soon as the blocks went in, the attacks ended.
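
The filters themselves are plain router access lists. The sketch below is illustrative only - the addresses, masks and interface name are placeholders, not our real ones. The inbound list drops anything from the controlling host; the outbound list is the standard anti-spoofing egress filter, letting packets leave only if they carry a source address from our own space.

  ! Inbound: drop traffic from the known controlling host.
  access-list 110 deny   ip host 192.0.2.10 any
  access-list 110 permit ip any any
  !
  ! Outbound: only forward packets sourced from our own address space,
  ! so spoofed flood traffic dies at the border instead of leaving it.
  access-list 111 permit ip 172.16.0.0 0.0.255.255 any
  access-list 111 deny   ip any any
  !
  interface GigabitEthernet1/0
   ip access-group 110 in
   ip access-group 111 out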

We were still left with the daunting task of determining if there were more, as yet unknown, machines primed and ready to be used in another attack. We contacted the interim systems administrator of the machines that had been hacked (the department was between system administrators and had someone filling in) and secured permission to scan the network for infected machines.

While our university security person was working with the interim systems administrator to scan the network segment for further infection, several of our network engineers were feverishly working to put in place another probe on our primary Internet and Internet2 feed. We knew the hackers would initiate another wave of bogus distributed denial-of-service requests, and this time we'd be ready.

Noise, noise, noise

Our primary off-campus link is a Gigabit Ethernet connection between a Cisco Catalyst 6509 box and a Cisco 12000 Series Gigabit Switch Router machine.

Tapping into that link would require some specialized equipment. We used a Shomiti Systems optical splitter to peel off a portion of the light stream and direct it to the probe. Using a passive tap (the optical splitter) instead of an active tap (mirror ports) eliminated the possibility of overloading the router by forcing it to duplicate packets during periods of intense network activity.

8:20 p.m.

Our first discovery after activating the new probe was that the supplied rules file for Snort generated a tremendous number of false alarms when used on the primary network connection. Intrusion detection is definitely an art, not a science, and adjusting the filters can be a complex and ongoing task. We spent several days fine-tuning the rules until we struck a balance between signal and noise. Too loose a filter, and you'll be buried under false alarms. Too tight, and you'll miss important information.
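
Most of the tuning boiled down to two moves: defining our own address space once so rules only watch traffic headed our way, and adding pass rules for hosts that legitimately trip a signature. A hedged sketch - the addresses are placeholders, and depending on the Snort version you may need the -o switch before pass rules take precedence over alerts:

  # Define the local address space once and reference it in rules.
  var HOME_NET 172.16.0.0/16

  # Quiet a signature that a known, legitimate host keeps tripping.
  pass tcp 172.16.5.20 any -> $HOME_NET any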

The aftermath

The good news for us was that the scan of the infected network didn't turn up any more contaminated hosts. The router filter appears to be keeping the bad guys at bay. We opted not to pursue the hackers because we were faced with the reality that these guys typically operate 30 levels deep, bouncing through layer after layer of compromised hosts, and because we sustained no lasting damage to our network, we felt tracking them down was not worth the time and effort.

Life is returning to normal. We feel safer with a round-the-clock intrusion-detection system watching our Internet feed, but we know that it's only a matter of time before another set of machines, on another segment of the network, stir in the middle of the night, call quietly to one another, and leap to the attack like sharks.

You're not paranoid. They are out to get you. You can go back into the water - just make sure your life preservers are within arm's reach.

Here are some cheap, easy and effective steps to keep your network safe:

*Build an intrusion-detection system.

Our system was cheap: We spent less than US$1,000 for the CPU, the gigabit network interface card and the extra memory. The operating system was free, and we had the optical splitter left over from a previous project - you can expect to pay around $500 for this type of tool. Comparable commercial systems can cost $20,000 and can be extremely complex to install and administer.

*Get on good terms with your system administrators.

In many organizations the network team and the system administrators fall under different groups. Don't let politics stop you. Get to know the administrators and make sure they're aware of the dangers. A tightly secured system doesn't make an attractive playground for the script kiddies.

*Constantly emphasize the need for funding for security hardware, software and training.

Don't wait until you've been victimized to ask for funding to obtain security hardware, software or training. Tell your boss every chance you get just how bad it is out there. You might not get funding, but when you get hit - and you will - you'll be on record as having been proactive.

*Read, read, read.

You have a full-time job. The script kiddies don't. To stay ahead of them will require a lot of work. Read everything about security and intrusion detection you can. Subscribe to Bugtraq, and visit the Whitehats.com Web site. Download some of the distributed denial-of-service programs and play with them. Just do it offline - don't attack your own network.
