But Implementation is Still Fuzzy

FRAMINGHAM (03/13/2000) - Video streaming and videoconferencing improve communication and collaboration between geographically dispersed employees, partners and customers while reducing the time and costs associated with logging traditional "face time."

But preparing a corporate IP network to support these emerging, bandwidth-intensive technologies doesn't happen overnight.

Before introducing interactive real-time video, you, as a network executive, need to take several steps to make the picture show run smoothly, including: piloting streaming video in your own network to get your feet wet; tweaking your network topology to accommodate the increased bandwidth you will need to deliver business-quality video; and implementing quality-of-service (QoS) standards that give priority to synchronized audio and video traffic without shortchanging data-only applications.

Start with streaming

While only 10 percent to 15 percent of enterprise networks today are dabbling with video over IP, most IT officers report they do it to increase the effectiveness of internal business communications. These companies are streaming media to selected desktops to enrich online learning and expand virtual audiences for corporate briefings.

To get to this point, you first need a 10/100 switched Ethernet LAN and clients with software capable of playing video and audio files. Now you're ready to send digital media content in User Datagram Protocol (UDP) streams across LAN, WAN and metropolitan-area networks, according to bandwidth availability and priority.

In most streaming media deployments, content is captured in advance of a broadcast, edited, and encoded by media production professionals. Video servers then either broadcast a live stream or can store content for users to access on demand.

The trick to upgrading to interactive videoconferencing is adjusting the infrastructure for real-time traffic. For collaborators whose LANs access a common corporate or service provider backbone, reducing network congestion is the key goal. Ideally, a network engineer will use a three-pronged approach to reducing congestion and ensuring predictable, business-quality real-time video.

These prongs include:

* Reworking network topology to provide ample bandwidth.

* Implementing QoS standards on your network and making sure your WAN carriers do so as well.

* Taking measures to shape traffic, if and when congestion occurs.

Increasing bandwidth and reworking topology

The bandwidth "sweet spot" for enterprise video communications is in the 300K to 400K-bps stream range. Because the audio and video streams are bidirectional, the aggregate demand on network resources is at least 600K to 800K-bps per live session.

The standards for videoconferencing over IP do not require that the two or more end points in a session send at the same data rate they receive. A low-powered end point may only be able to encode at 100K-bps, but because decoding is less processor-intensive, it could decode a 300K-bps video stream.

Overhead for IP packet headers must also be taken into consideration. For example, a bidirectional 384K-bps videoconference will consume approximately 425K-bps in each direction, or a total of 850K-bps of bandwidth on a LAN with ample headroom. Where circuit-switched networks are used for wide-area connectivity, data rates are commonly expressed in full-duplex terms. In other words, a T-1 offers 1.5M bit/sec in each direction and would be ample bandwidth for a single videoconference. Alternatively, three Basic Rate Interface ISDN lines can support a 384K-bps videoconference with ample headroom.
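The overhead arithmetic can be sketched in a few lines. The header sizes assume IPv4 + UDP + RTP with no options, and the payload size is an illustrative assumption (real packet sizes vary by codec and vendor), but the result roughly matches the article's 384K-bps to ~425K-bps figure:

```python
# Rough per-direction bandwidth estimate for an IP videoconference,
# assuming RTP over UDP over IPv4 and an illustrative payload size.

HEADER_BYTES = 20 + 8 + 12  # IPv4 + UDP + RTP headers, no options

def wire_rate_kbps(media_kbps, payload_bytes=400):
    """Scale a media bit rate up by the per-packet header overhead."""
    overhead = (payload_bytes + HEADER_BYTES) / payload_bytes
    return media_kbps * overhead

per_direction = wire_rate_kbps(384)   # one 384K-bps stream on the wire
total = 2 * per_direction             # bidirectional session on the LAN
```

Smaller payloads push the overhead percentage higher, which is one reason audio-heavy streams consume proportionally more headroom than the raw codec rate suggests.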

With regard to topology, the objective is to reduce the overall latency and odds of packet collisions. There is debate about the best topology to accomplish this. Frik Strecker, director of Integrated Internet Video Application Services with FVC.COM in Santa Clara, recommends that you aggregate video end points onto common Ethernet switches and provision them each with Gigabit Ethernet.

Under this design, data-only end points are removed from video-only switches and connected to Ethernet switches with nonvideo end points. This involves building out parallel networks, and although that can be costly, it will eliminate all risk that video and data will battle for bandwidth in the local loop. Minimal IP readdressing may be required if Dynamic Host Configuration Protocol or BOOTP is not used to assign IP addresses.

To handle real-time applications, video-only switches should be nonblocking and may need high-capacity buses to minimize buffering and latency. Precautions at the level of a video network gatekeeper can prevent multiple video calls from passing through a switch at the same time; a caller who would exceed the switch's throughput encounters the equivalent of a busy signal.

Alternatively, you can mix a few video end points and add data-only end points on a given switch to reduce the possibility of having too many simultaneous videoconferences passing through a switch, avoiding potential overload. The logic is that common data applications such as Internet browsing and e-mail only produce intermittent "well-behaved" TCP/IP traffic that can mix without interfering with the continuous UDP traffic generated by the video applications.

When network congestion begins and packets are being dropped, the data applications begin to resend lost packets. As packet loss increases, TCP/IP applications increase the interval between packets, reducing their relative impact on the interactive video session. The greatest risk of this deployment is that as traffic congestion at the switch increases, mission-critical TCP/IP applications back off so far that they time out and eventually terminate.

Mixing video with network-intensive applications, such as manipulating large databases or drawings (for example, CAD/CAM), is more tenuous. Streaming video servers, which like interactive video applications transmit over UDP, do not back off when congestion arises because they do not detect packet losses. Streaming applications therefore tend to have a heavy impact on interactive video quality.

It is also important to keep congestion off internetworking routers. Using a network analysis tool such as Ganymede Software Inc.'s Chariot can nip problems in the bud. A network administrator can initiate sessions and see the traffic in all segments between the sender and receiver end points. The application quickly identifies where bandwidth is limiting or where buffers are not adjusted correctly to manage real-time data needs. Engineers can then take necessary steps to upgrade software or increase memory as needed.

But when there is congestion due to peak usage or bursty traffic that can't be addressed by increasing the raw bandwidth, you will certainly want to implement QoS and traffic-shaping technologies.

QoS provisions: Locally and wide area

Differentiated Services (Diff-Serv) and IP Precedence are two IETF protocols that enable QoS from within a router or switch by reading information contained in the type-of-service (ToS) byte of the packet header.

IP Precedence lets switches and routers sort packets by priority. One of eight priority values can be set on a particular application at the end point.

Diff-Serv, on the other hand, assigns QoS classifications to traffic from different applications based on service-level agreements between users and service providers. Currently two service levels are defined - assured and premium (expedited). Because Diff-Serv aggregates flows into these two categories, it is considered by many to be more scalable than Resource Reservation Protocol, which secures QoS on a per-flow basis.
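At the end point, marking traffic comes down to writing the ToS byte on outbound packets. A minimal sketch, assuming a POSIX-style stack that honors the `IP_TOS` socket option (the specific DSCP value is an illustrative choice, not a vendor's configuration):

```python
import socket

# DSCP 46 ("Expedited Forwarding") is the Diff-Serv premium class.
# The six-bit DSCP occupies the top of the ToS byte, so it is
# shifted left two bits; legacy IP Precedence uses only the top three.
DSCP_EF = 46
TOS_BYTE = DSCP_EF << 2       # 0xB8 on the wire

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.setsockopt(socket.IPPROTO_IP, socket.IP_TOS, TOS_BYTE)
# Packets sent from this socket now carry 0xB8 in the ToS byte;
# Diff-Serv-aware routers can queue them ahead of best-effort data.
```

Routers that ignore the field simply give the packets best-effort service, which is why partial deployment still helps, as the next paragraphs note.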

IP Precedence and Diff-Serv need not be present in every router along the path to benefit video communication applications. Any router in your network that supports IP Precedence will prioritize packets whose ToS value has been set to highest priority by the end point.

Routers that aren't configured for IP Precedence give best-effort service to all packets. Network gear vendors are expected to implement these protocols fully in the next 12 to 24 months. Currently, VCON is the only videoconferencing vendor that can set IP Precedence or Diff-Serv within its end point application. However, Windows 2000 lets you configure a network client (end point) to assign a ToS class to its IP traffic.

In addition to implementing QoS intelligent features in applications and routers on your network, you'll have to get that same support from the carriers that provide your WAN links. Although major service providers have long been promising QoS support in their networks, few are using the necessary router-based protocols today.

Policy server-based prioritization is the mechanism we used in our tests with Nortel Networks routers. Few enterprise networks have implemented this approach internally. However, once traffic comes to be charged according to the class of service provided to a specific address, it could become an attractive solution for network service providers.

In spite of your best efforts to avoid network congestion, you will still want to have a plan to address congestion at the videoconferencing end point. To shape video traffic in IP environments, manufacturers of video encoders will increasingly take advantage of adjustments they can make to video compression.

Video compression algorithms rely heavily on quickly coding data similarities and differences within and between successive frames of video. The compressed data is sent onto the network as fast as the end point's network adapter allows. The resulting burst of UDP/IP traffic into the network once or twice per second at line-rate speeds is likely to produce congestion downstream.

To complicate the matter even more, jitter (uneven video displays due to the variation in delay between packets and frames) can be introduced if the time in which a full or partial frame of video is buffered on the network varies. When network congestion occurs, buffering increases and higher than acceptable jitter is experienced. In order to remove jitter introduced by a network, receive-side buffers (jitter queues) are required at the receiving video end point. This also introduces a delay. Increased jitter requires a larger jitter queue, resulting in longer delays for which you may compensate by using more powerful and expensive video end points.
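The receive-side jitter queue described above can be sketched simply: hold each packet for a fixed playout delay so the decoder sees a steady stream despite variable network delay. The class, its method names, and the millisecond figures below are illustrative assumptions, not any vendor's implementation:

```python
import heapq

class JitterQueue:
    """Buffer packets for a fixed delay to smooth out network jitter."""

    def __init__(self, playout_delay_ms):
        self.playout_delay_ms = playout_delay_ms  # hold time per packet
        self._heap = []  # (sequence number, arrival time ms, payload)

    def arrive(self, seq, arrival_ms, payload):
        heapq.heappush(self._heap, (seq, arrival_ms, payload))

    def due(self, now_ms):
        """Release packets, in sequence order, once held long enough."""
        released = []
        while self._heap and now_ms - self._heap[0][1] >= self.playout_delay_ms:
            released.append(heapq.heappop(self._heap))
        return released
```

A larger playout delay absorbs more jitter but adds end-to-end latency; that trade-off is exactly why heavier jitter queues demand more from (and more money for) the video end point.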

When selecting a video-enabled end point for use in an IP environment, we recommend you ask about each manufacturer's traffic-shaping algorithms. Are the end points you are evaluating capable of monitoring network conditions and making adjustments? For example, do the encoders reduce the rate at which they introduce data on the network when they sense increasing network congestion?

How does the application control the rate at which the product puts video onto the network? Can it correct for network jitter? How does a receiving end point process out-of-order or duplicate packets it receives before decoding them?

Packet scheduling is a sender feature that buffers audio/video packets once they are compressed and then injects them into the network at a steady pace.

This is effective in reducing congestion, but the downside to packet scheduling is that it can be computationally demanding. A video-encoding end point must have at least a 400-MHz CPU or use digital signal processors to manage the scheduling.
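A paced sender can be sketched in a few lines: instead of bursting a whole compressed frame at line rate, packets are injected at an interval derived from the target bit rate. The function name and the injected `send`/`sleep` callbacks are illustrative assumptions:

```python
import time

def paced_send(packets, target_bps, send, sleep=time.sleep):
    """Send each packet, then wait long enough to hold the target rate."""
    for pkt in packets:
        send(pkt)
        # Each packet "earns" its own transmission time at the target
        # rate, so the stream never exceeds target_bps on average.
        sleep(len(pkt) * 8 / target_bps)
```

At a 384K-bps target, a 48K-byte compressed frame would be spread over roughly one second rather than dumped onto the wire in a single burst.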

During a videoconference, packets arrive at an end point's network interface out-of-order due to the connectionless nature of IP networks. Without packet-ordering software to sort through and drop out-of-order packets, the end user will detect visual artifacts such as blocks of misplaced colors in the video or pops in the audio. We found that end points managed packet arrival irregularities differently. Some applications are designed to freeze the video completely until the next full frame can be created free of artifacts. Others display frames with "ghosts" (like a trail produced by a rapid movement in a strobe light) of images from previous frames. Out-of-order packets also contribute to a lack of lip synchronization on the screen with audio. The only business-quality end points we tested with any "IP intelligent" features noted here were developed by VCON.
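The packet-ordering step described above reduces to tracking the highest sequence number seen and discarding anything that arrives late or twice, so stale data never reaches the decoder. This is a generic sketch of the technique, not VCON's or any other vendor's code:

```python
def filter_packets(packets):
    """Yield payloads in order, dropping duplicate and late packets.

    `packets` is an iterable of (sequence number, payload) pairs in
    arrival order; anything at or below the highest sequence number
    already delivered is discarded before decoding.
    """
    highest = -1
    for seq, payload in packets:
        if seq <= highest:   # duplicate or arrived too late
            continue
        highest = seq
        yield payload
```

Dropping a late packet costs one frame fragment; decoding it out of order is what produces the misplaced color blocks and audio pops described above, so discarding is the lesser evil.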

Our last recommendation for implementing videoconferencing is finding a systems integrator with experience in deploying video on IP networks.
