In modern MPLS networks with any-to-any connectivity, competition for network resources is fierce, not only among applications and users within each site but also among sites themselves. As a result, the challenge of providing critical application-performance guarantees is growing ever more daunting.
Today, service-provider class of service (CoS) is often used in MPLS networks to address critical application-performance issues. While this technology ensures consistent performance inside the MPLS cloud, it cannot adequately handle the competition among users and applications for access to the cloud.
For that reason, many enterprises deploy application traffic-management technologies over their MPLS networks to enable more flexible, per-flow management of traffic. But these implementations are cost-effective only in hub-and-spoke networks, which are becoming less common as more networks feature multiple data centers or headquarters locations with significant volumes of application traffic.
Cooperative optimization is a new technology that addresses the cost and complexity of traffic management over large distributed networks with any-to-any connectivity. It can fully handle the competition among users and applications while optimizing usage of network resources without implementing a device in each branch office.
Cooperative optimization relies on a system approach, rather than a box approach, to traffic management. In this architecture, devices constantly exchange information about what they see and do using a dedicated communications protocol. The cooperating devices gather statistics about the demand for resources coming from users, what "supply" or traffic handling is needed to deliver a good quality of experience to them, and what the network is capable of delivering -- end to end -- at any given time.
Based on sharing the joint view of these statistics from multiple devices in the network, cooperative optimization computes the optimal traffic-management parameters for each device in a distributed fashion.
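As a rough illustration of the kind of information such devices might exchange, the sketch below models a per-device statistics report and the merging of several reports into a shared global view. This is a hypothetical simplification, not Ipanema's actual protocol; the record fields and function names are invented for illustration.

```python
# Hypothetical sketch of the statistics a cooperative-optimization
# protocol could exchange; field and record names are invented.
from dataclasses import dataclass

@dataclass
class DeviceReport:
    device_id: str
    demand_kbps: dict       # per-destination-site offered load ("demand")
    required_kbps: dict     # per-application rate needed for good QoE ("supply")
    capacity_kbps: dict     # measured end-to-end capacity per destination site

def merge_reports(reports):
    """Combine peer reports into a global view of the total demand
    headed toward each destination site -- the shared knowledge each
    device would use when computing its own traffic-management
    parameters."""
    demand = {}
    for report in reports:
        for site, kbps in report.demand_kbps.items():
            demand[site] = demand.get(site, 0) + kbps
    return demand

# Two cooperating devices report traffic aimed at the same branch:
reports = [
    DeviceReport("dc-main", {"branch-17": 800}, {"rental-app": 600}, {"branch-17": 1024}),
    DeviceReport("dc-regional-3", {"branch-17": 400}, {"e-mail": 300}, {"branch-17": 1024}),
]
global_demand = merge_reports(reports)  # {"branch-17": 1200}
```

Each device would run the same merge over the same set of reports, which is what lets the computation stay distributed while still producing a consistent, network-wide view.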
The strength of this approach is that it controls the behavior of each traffic flow at the source to optimize the source-site resources using local information, and optimizes the destination-site resources using global information regarding competition among sites -- which is a necessity for achieving consistently good application performance in any-to-any topologies.
To understand how this works, consider a large international car rental company with a non-hub-and-spoke, multiple-star topology (also known as a some-to-any topology). The company has 1,500 rental sites (from large offices at airports to kiosks in small towns), two main data centers and 13 regional data centers.
The most critical application supports the rental process and is hosted at the main data center and accessed by all of the locations. Several other important applications compete with it for resources, including e-mail traffic from the regional data centers.
One traffic problem occurs when a rental branch accesses the rental application at the main data center over the WAN while e-mail is trying to synchronize with a regional data center. The resulting competition between application flows from the two data centers creates congestion at the branch router and impairs the performance of the critical rental application.
Although such competition can be handled with a per-flow traffic-management device located in the branch (as long as the traffic from the main and regional data centers does not include non-TCP flows), controlling it on the destination side is not optimal.
Through global management of the network traffic, made possible by cooperation among the devices located at the data centers, the congestion in the branch router can be avoided, even without a device in the branch. The cooperating devices in main and regional data centers exchange information in real time about the flows they are controlling, and from that they detect that they are both sending traffic toward the branch. They dynamically compute the bandwidth that should be given to each user session going to the branch based on their shared knowledge of the traffic mix and of the resources available. They thus effectively prevent congestion in the destination router by controlling the traffic at the source before it enters the cloud.
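The source-side allocation described above can be sketched as a weighted split of the branch's access capacity across all sessions the cooperating devices know about. This is a minimal illustration of the idea, not Ipanema's actual algorithm; the capacity figure, weights, and function name are assumed for the example.

```python
# Illustrative sketch (not the vendor's actual algorithm): cooperating
# devices at the data centers pool their knowledge of sessions headed
# to one branch and split the branch's access capacity in proportion
# to application priority, shaping each flow at its source so the
# combined rate never exceeds the destination router's access link.

BRANCH_CAPACITY_KBPS = 1024  # hypothetical branch access-link capacity

def allocate(sessions, capacity=BRANCH_CAPACITY_KBPS):
    """Weighted fair split of the destination capacity.

    sessions: list of (session_id, weight) pairs, where weight encodes
    application priority (e.g. the rental application outweighs e-mail).
    Returns per-session rate limits in kbit/s, summing to the capacity.
    """
    total_weight = sum(weight for _, weight in sessions)
    return {sid: capacity * weight / total_weight for sid, weight in sessions}

# Sessions seen by the main data center (rental app) and a regional
# data center (e-mail sync), all destined for the same branch:
limits = allocate([("rental-1", 4), ("rental-2", 4), ("email-sync", 1)])
```

Here each rental session is shaped to 4/9 of the pipe and the e-mail sync to 1/9, so congestion is prevented at the source rather than discovered, after the fact, at the branch router's queue.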
Cooperative optimization can dramatically reduce the costs and the complexity of application traffic management over large distributed networks by removing the need for appliances in branches. It also reduces ongoing management costs because it enables the system to be configured and controlled from a single point rather than device by device.
Finally, cooperation lets the system respond dynamically to the ever-changing user demand on the WAN so that traffic flows are automatically kept optimal from an application performance perspective. That is why this technology is emerging as the preferred platform for delivering application-based QoS in large enterprise networks.
Lyonnet is director of product management at Ipanema Technologies (www.ipanematech.com) and can be reached at firstname.lastname@example.org.