Feature: Unix smokes with Gigabit Ethernet

Many new servers are shipping with Gigabit Ethernet network interface cards (NICs), ostensibly because they are powerful enough to make good use of such a high-speed network link. But can these boxes really fill a gigabit pipeline? Or do their internal architectures still throttle their aggregate network throughput down to some fraction of a gigabit per second?

Most throughput tests in recent years have shown Unix-based servers outperforming their Intel-based Windows NT brethren. To see how close a server can come today to using a full gigabit per second, we decided to run network throughput tests with a state-of-the-art Unix-based server.

Hewlett-Packard's HP 9000 Model N4000 is big, about the size of two standard refrigerators placed side by side. It's powerful, too, and it packs impressive throughput. The system's network I/O is based on 12 independent PCI I/O cards. Ten of the cards are dual-channel cards, each supporting 480M bit/sec of traffic; the other two are single-channel cards, each supporting 240M bit/sec. That's a little more than 5G bit/sec in the aggregate. The unit HP shipped us came equipped with four Gigabit Ethernet interfaces that share the I/O cards; if any single card fails, all interfaces can continue working with minimal throughput degradation.

But we wanted to know: Can the server make sustained use of even one of its gigabit links based on real-world network traffic, such as FTP file transfers?

The short answer: almost. We achieved occasional peak throughput rates up to 812M bit/sec and sustained average throughput rates of 762M bit/sec. These were the server's limits. No matter how many more FTP clients we added or how many FTP download sessions we launched, these rates were the most we could get out of this server over a single Gigabit Ethernet link.

To our knowledge, 762M bit/sec is the most sustained, real-world network throughput reported to date from a single generally available server over a gigabit link. By real-world conditions, we mean basic, out-of-the-box IP (using the server's supplied IP stack and default IP settings) and FTP file transfers.

We tested the HP 9000 N4000 server with its own Gigabit Ethernet NIC (made for HP by Alteon) connected to a 3Com 9300 Gigabit Ethernet switch. We chose this switch because we knew from previous testing that it handles wire-speed loads on all 12 ports, meaning the switch wouldn't be a bottleneck. We connected five Dell 450-MHz Pentium III NT servers to five of the switch's 12 Gigabit Ethernet ports using gigabit NICs from Phobos and 3Com.

We also knew from past testing that the 3Com switch's SNMP agent doesn't slow down or falter under heavy traffic loads. This was key because we used an SNMP monitor to confirm our throughput data: Castle Rock's SNMPc management software, running on another high-performance NT server connected via a Phobos Gigabit Ethernet NIC directly to the 3Com switch. The only traffic this link carried during the tests was one-second SNMP polls of the switch's agent for the interface to which the HP 9000 was connected.
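The polling itself is simple to reproduce. Here is a minimal sketch in Python of the kind of one-second counter poll we ran, assuming the net-snmp snmpget utility is installed; the switch address, community string and interface index shown are placeholders, not our actual test values:

    # Minimal sketch: poll the switch port's outbound octet counter once
    # a second and derive throughput, roughly as our SNMP monitor did.
    # Assumes the net-snmp "snmpget" CLI; the address, community string
    # and ifIndex below are placeholders.
    import subprocess
    import time

    SWITCH = "192.0.2.1"   # hypothetical switch management address
    COMMUNITY = "public"   # hypothetical read community
    IF_INDEX = 1           # ifIndex of the port under test

    def if_out_octets():
        # Read IF-MIB::ifOutOctets for the monitored port (value only).
        out = subprocess.check_output(
            ["snmpget", "-v2c", "-c", COMMUNITY, "-Oqv",
             SWITCH, f"IF-MIB::ifOutOctets.{IF_INDEX}"])
        return int(out)

    prev = if_out_octets()
    while True:
        time.sleep(1)
        cur = if_out_octets()
        # A 32-bit octet counter wraps in well under a minute at gigabit
        # rates; the modulo handles one wrap per polling interval.
        delta = (cur - prev) % 2**32
        print(f"{delta * 8 / 1e6:.1f} Mbit/sec")
        prev = cur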

To create real-world network traffic, we employed FTP file transfer downloads. The HP 9000 was the FTP server, and the NT systems were FTP clients. All FTP-downloaded data received by the NT clients was discarded on receipt, rather than written to hard disk, to avoid a major throughput constraint at the client end. With disk writes eliminated, the client's CPU is the only remaining limit on how much data can be pulled down through the network.
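The discard technique is easy to recreate. The following sketch, using Python's standard ftplib, shows an FTP client that counts downloaded bytes but never writes them to disk; the host, account and file name are placeholders:

    # Minimal sketch of an FTP client that discards downloaded data
    # instead of writing it to disk, as our NT clients did. Uses Python's
    # standard ftplib; host, account and file name are placeholders.
    from ftplib import FTP
    import time

    HOST = "192.0.2.10"        # hypothetical FTP server address
    FILENAME = "bigfile.dat"   # the large cached test file

    received = 0

    def discard(block):
        # Count the bytes, then drop them; no disk write.
        global received
        received += len(block)

    ftp = FTP(HOST)
    ftp.login("test", "test")  # hypothetical account
    start = time.time()
    ftp.retrbinary(f"RETR {FILENAME}", discard, blocksize=65536)
    elapsed = time.time() - start
    ftp.quit()

    print(f"{received} bytes in {elapsed:.1f} sec = "
          f"{received * 8 / elapsed / 1e6:.1f} Mbit/sec")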

We performed straight FTP downloads of very large files, which the HP 9000 cached after they were first retrieved from the server's disk. We wanted the data in memory when running the throughput tests, which eliminated disk I/O as a bottleneck on the server side. This is normal behavior: servers typically cache frequently retrieved Web pages and files.

We built a very large file, about 1GB, for our FTP download test. The server had 2GB of RAM, and we confirmed that nearly 1.5GB of that was available for cached files.
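Recreating that setup is straightforward. A minimal sketch, assuming a Unix-like server with enough free RAM: build the test file, then read it through once so the file-system cache holds it before any timed run. The path and chunk size are arbitrary choices, not our test parameters:

    # Sketch: build a roughly 1GB test file, then read it back once so
    # the file-system cache holds it before any timed run.
    import os

    PATH = "/tmp/bigfile.dat"    # hypothetical location
    SIZE = 1024 * 1024 * 1024    # about 1GB
    CHUNK = 1024 * 1024          # 1MB chunks

    # Build the file from one repeated block of arbitrary bytes.
    block = os.urandom(CHUNK)
    with open(PATH, "wb") as f:
        for _ in range(SIZE // CHUNK):
            f.write(block)

    # Warm the cache with one full sequential read.
    with open(PATH, "rb") as f:
        while f.read(CHUNK):
            pass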

FTP download traffic is virtually all maximum-size 1,518-byte Ethernet packets. We noted minimal traffic passing in the reverse direction of the Gigabit Ethernet link during the FTP downloads.
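Those maximum-size frames put a hard ceiling on FTP goodput. A back-of-the-envelope calculation, using standard Ethernet, IP and TCP overhead figures rather than anything we measured, shows why even a perfect server couldn't quite deliver 1G bit/sec of file data:

    # Back-of-the-envelope ceiling for FTP goodput over Gigabit Ethernet
    # with maximum-size frames. Overheads are standard Ethernet/IP/TCP
    # figures, not numbers from our tests.
    WIRE_RATE = 1_000_000_000            # bits/sec on the wire
    FRAME = 1518                         # max Ethernet frame, bytes
    PREAMBLE_AND_GAP = 8 + 12            # preamble + inter-frame gap, bytes
    HEADERS = 14 + 4 + 20 + 20           # Ethernet header + FCS + IP + TCP

    on_wire = FRAME + PREAMBLE_AND_GAP   # 1,538 bytes per frame on the wire
    payload = FRAME - HEADERS            # 1,460 bytes of file data per frame

    frames_per_sec = WIRE_RATE / (on_wire * 8)   # about 81,274 frames/sec
    goodput = frames_per_sec * payload * 8       # about 949M bit/sec
    print(f"{goodput / 1e6:.0f} Mbit/sec theoretical maximum goodput")

By that arithmetic, the framing itself caps file-data throughput at roughly 949M bit/sec, so our sustained 762M bit/sec represents about 80% of what a single link can carry.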

We tested using between one and five NT servers in the role of FTP clients, adding them one at a time, with each launching its own FTP download of the same file from the HP server. We did this to see whether throughput increased linearly as each client was added. It didn't quite: throughput reached 292M bit/sec with one NT client and peaked at 762M bit/sec of average sustained throughput with four clients. We could not push throughput any higher, even when we added a fifth client and additional download sessions.
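For readers who want to approximate the run, here is a sketch that launches several concurrent downloads of the same file, each discarding its data. We used separate NT client machines rather than threads on one box, so this only illustrates the shape of the test; the host, account and file name are again placeholders:

    # Sketch of the scaling test: N concurrent downloads of the same
    # file, each discarding its data. We used separate NT machines, not
    # threads on one box, so this only shows the shape of the run.
    from ftplib import FTP
    import threading
    import time

    HOST = "192.0.2.10"        # hypothetical FTP server address
    FILENAME = "bigfile.dat"
    N_CLIENTS = 4

    def one_client():
        ftp = FTP(HOST)
        ftp.login("test", "test")   # hypothetical account
        ftp.retrbinary(f"RETR {FILENAME}", lambda block: None,
                       blocksize=65536)
        ftp.quit()

    start = time.time()
    threads = [threading.Thread(target=one_client)
               for _ in range(N_CLIENTS)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    print(f"{N_CLIENTS} parallel downloads in "
          f"{time.time() - start:.1f} sec")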

Our conclusions? If you're buying a new Unix-based server, it's probably wise to get it with a Gigabit Ethernet connection, rather than one or more 10/100 ports. Today's Unix servers can use a lot, if not most, of that gigabit pipeline for real traffic. You'll also need to get a switch that has at least one Gigabit Ethernet port or uplink.
