Common knowledge says I/O operations are the slowest link in your performance chain. After all, CPUs are still many times faster than storage devices; hence, the average business application will likely saturate storage bandwidth resources while the processors still have capacity to spare. Right?
Not quite. Networked storage puts an additional strain on a server's processors that all but eliminates the performance gap between the two components. A typical example is the extra TCP/IP packet processing that CPUs must handle for IP-based storage networks.
According to Adaptec Inc. benchmarks, a GbE (gigabit Ethernet) pipe working at full capacity generates enough additional processing to keep a 1GHz processor occupied full time. So, Adaptec and companies such as Alacritech are offering TOEs (TCP Offload Engines) -- essentially a GbE NIC (network interface card) with an on-board processor to relieve the CPUs.
But a TOE does not address other potential burdens on CPUs, such as the IPsec encryption mechanisms (prescribed by the iSCSI draft), nor does it remove the need for multiple copies of data buffers.
Let's explain. A snippet of data typically moves through three different memory locations inside a server: the application buffer, the OS buffer, and finally the memory of the network adapter that sends the data over the wire. Incoming packets follow the same path, only in reverse order.
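To make that copy path concrete, here is a toy Python model -- not real kernel code, and the buffer names are invented for illustration -- that counts how many times each byte gets moved on the traditional send path:

```python
# Toy model of the traditional send path: each payload is copied
# three times before it reaches the wire (illustrative only; real
# kernels and network adapters differ in detail).

def traditional_send(payload: bytes) -> int:
    """Return total bytes moved through memory to transmit `payload` once."""
    app_buffer = bytearray(payload)   # copy 1: into the application buffer
    os_buffer = bytearray(app_buffer) # copy 2: into the OS socket buffer
    nic_buffer = bytearray(os_buffer) # copy 3: into the adapter's memory
    return len(app_buffer) + len(os_buffer) + len(nic_buffer)

payload = b"x" * (1 << 20)            # a 1MB chunk of data
moved = traditional_send(payload)
print(moved // len(payload))          # 3 -- every byte crosses memory three times
```

The receive side mirrors this, so a server relaying storage traffic pays the triple-copy tax in both directions.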
Moving data around to the frenetic tempo of a GbE connection uses up time and server resources. Moreover, the original data often needs to be broken into smaller chunks, which creates additional overhead due to the protocol headers.
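A rough sense of that header overhead, assuming a standard 1,500-byte Ethernet MTU and option-free IPv4 and TCP headers (TCP options, VLAN tags, or jumbo frames would shift these numbers):

```python
# Rough per-segment overhead for bulk data over standard Ethernet.
# Assumes a 1500-byte MTU and option-free headers; real connections
# (TCP timestamps, VLAN tags, jumbo frames) change the figures.

MTU = 1500       # standard Ethernet payload size
IP_HDR = 20      # IPv4 header, no options
TCP_HDR = 20     # TCP header, no options
ETH_FRAME = 18   # Ethernet header (14 bytes) + frame check sequence (4 bytes)

mss = MTU - IP_HDR - TCP_HDR              # payload per segment: 1460 bytes
per_segment = IP_HDR + TCP_HDR + ETH_FRAME

data = 1_000_000_000                      # a 1GB transfer
segments = -(-data // mss)                # ceiling division
overhead = segments * per_segment

print(mss)                                # 1460
print(f"{overhead / data:.1%}")           # about 4.0% extra bytes on the wire
```

Four percent sounds modest, but building and checksumming hundreds of thousands of headers per gigabyte is exactly the kind of work that keeps a host CPU busy.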
Companies such as Adaptec, Broadcom Corp., Hewlett-Packard Co., IBM Corp., Intel Corp., Microsoft Corp., and Network Appliance Inc. find the overhead problem so disturbing that they have founded the RDMA Consortium to research a vendor-neutral solution.
RDMA, or "Remote Direct Memory Access," is both the name and the objective of the consortium: Create a mechanism to transfer data directly between the memory of two network cards over TCP/IP, eliminating redundant copy operations and forming a zero-copy environment. The estimate is still speculative, but RDMA researchers warn that without zero copy, the forthcoming 10GbE pipe will put unbearable strain on even the most advanced and powerful servers.
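A back-of-envelope calculation suggests why. The copy counts below are illustrative assumptions, but the point holds: each redundant copy is a read plus a write of the full payload, so every extra copy multiplies the memory bandwidth a transfer consumes.

```python
# Back-of-envelope: memory bandwidth consumed by moving data at line rate.
# Copy counts are illustrative assumptions, not measured figures.

def memory_traffic_gbps(line_rate_gbps: float, copies: int) -> float:
    # Each copy reads and then writes the full payload.
    return line_rate_gbps * copies * 2

for copies, label in [(3, "traditional three-copy path"),
                      (1, "zero-copy (RDMA) path")]:
    gbps = memory_traffic_gbps(10, copies)
    print(f"{label}: {gbps:.0f} Gb/s of memory traffic at 10GbE line rate")
```

Under this model, a 10GbE link driven at full speed through the three-copy path churns 60Gb/s through the memory subsystem; the zero-copy path cuts that to a third.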
Startups have also joined the consortium: Silverback Systems Inc., Seaway Networks Inc., Trebia Networks Inc., Astute Networks Inc., and iReady Corp. are working on storage processors. Look for traditional silicon vendors to get in this game soon, too. Agilent already has: Last month, the company licensed Astute Networks' technology even before the startup's official launch. Meanwhile, National Semiconductor Corp. has invested in iReady. At least somebody is going to make some money off this problem. Eventually.