Web caching

Web caching is the practice of storing frequently requested -- but infrequently changed -- pages, images and other Web objects on a nearby server or even a user's PC.

Information from computers all over the world is available on the World Wide Web. But what happens when a server -- or even an entire network -- gets too many requests? Performance takes a major hit on that network.

To avoid this, one could buy more servers. But a more efficient way to increase server capacity -- especially if the information is relatively static -- is to store (or cache) copies of the data on servers in different locations around the Web. Then, when a request comes in for a particular Web page, it can be redirected to a server that's closer to the requester, so the final delivery of the page doesn't have to travel through so many different segments of the Web.

That's one type of Web cache: the site's host is responsible for the caching, even if the work is outsourced to a third party such as Cambridge, Mass.-based Akamai Technologies Inc. or Foster City, Calif.-based Inktomi Corp.

Why Caching Matters

In the early days of the Web, when network traffic was much lower, caching wasn't as important. But with hundreds of millions of new computers coming onstream every year and most of them using the Web, caching helps improve quality of service for everyone, provides protection against network surges and reduces overall network traffic.

Sometimes, the Internet service provider is responsible for Web caching. This type of caching could be useful in situations where the same file is requested many times. A good example is a logo that appears on all of a company's Web pages: Each time a user clicks on one of that company's pages, that same logo graphic is called for.

Let's say the company is Google Inc. All of the people at all of the networks that hook into the same Internet service provider could request the Google logo thousands of times a day. Relying on Google to have remote cached sites may be helpful in keeping traffic manageable on Google's network, but it does nothing for the Internet service provider. The provider's servers have to handle all those requests, and caching can come to the rescue. The provider keeps track of what pages and files are being requested and stores local copies of those asked for frequently. When I click on Google.com, the logo comes not from a Google server in Mountain View, Calif., nor from one of Google's outlying cache servers. Instead, my Internet service provider just sends me what it has stored.

Extend this one step further, and it's likely that one of a corporate network's own servers is caching the Google logo. These types of caches are called proxy caches.

Because proxy caches serve a large number of users, they're quite effective at reducing latency and traffic: a popular object is fetched from the originating server only once, then served to many clients from the local copy.
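
To make the idea concrete, here's a rough sketch in Python of what a proxy cache does at its core: fetch an object from the originating server on the first request and hand back a local copy on every request after that. The URL and the in-memory dictionary are purely illustrative; real proxy caches also honor HTTP cache headers, expire old copies and juggle many clients at once.

    # Minimal sketch of the proxy-cache idea: fetch a popular object from
    # the originating server once, then serve every later request from a
    # local copy. (Illustrative only -- real proxy caches also honor cache
    # headers, expire stale copies and handle many clients concurrently.)
    from urllib.request import urlopen

    _cache = {}  # URL -> object bytes

    def get_object(url):
        if url not in _cache:              # first request: go to the origin server
            with urlopen(url) as response:
                _cache[url] = response.read()
        return _cache[url]                 # later requests: served locally

    # The second call returns immediately from the stored copy; the origin
    # server sees only one request, no matter how many clients ask.
    # (example.com stands in for something like that Google logo.)
    logo = get_object("https://www.example.com/logo.gif")
    logo_again = get_object("https://www.example.com/logo.gif")

The browser cache on your PC and the Internet service provider's cache are doing essentially the same thing, just at different points along the path between you and the originating server.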

Finally, each individual Internet browser does a certain amount of caching right on the end user's workstation.

The effect of all this caching is to speed up access. A page loads faster if you go back to it within a short period of time because your PC already has a copy of it tucked away and can deliver it lickety-split.

How does it work? A user in, say, London requests a page from a site whose server is in Tokyo. That request may have to travel through a chain of dozens of network routers, and the overall speed of that request (and its response) depends on the slowest link in the chain. If one of those intermediate routers is overloaded, it starts dropping packets and asking for them to be retransmitted. This slows down that piece of traffic even more.

If a cached copy is closer to the browser, the requested content has to pass through fewer routers, reducing the potential for delays and speeding up service. Caches that minimize the distance that data must travel also reduce transmission costs.

Time Out

But what about changes? Most of the files and Web pages stored in a cache don't stay the same over time. Thus every time it gets a request, the cache has to check whether its copy of the requested page is "fresh" (meaning it hasn't reached its expiration date). If it is, the cache server can serve the copy directly. Only if the object is out of date does the server need to request a new copy from the originating server.
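
Here's a hedged sketch, again in Python, of that freshness check. The cache stores each copy along with the moment it expires, computed from the origin's Cache-Control: max-age directive when one is present. The five-minute fallback lifetime is an assumption for the sketch, and a real cache would revalidate a stale copy with a conditional request rather than always re-downloading it.

    import time
    from urllib.request import urlopen

    _cache = {}  # URL -> (expires_at, body)

    def fetch_with_freshness(url, default_ttl=300):
        now = time.time()
        entry = _cache.get(url)
        if entry and now < entry[0]:       # copy is still fresh: serve it directly
            return entry[1]

        # Copy is missing or stale: request a new one from the originating server.
        with urlopen(url) as response:
            body = response.read()
            cache_control = response.headers.get("Cache-Control", "")

        # Honor the origin's max-age directive if present; otherwise assume
        # a default lifetime (an assumption of this sketch, not an HTTP rule).
        ttl = default_ttl
        for directive in cache_control.split(","):
            directive = directive.strip()
            if directive.startswith("max-age="):
                ttl = int(directive.split("=", 1)[1])

        _cache[url] = (now + ttl, body)
        return body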

Not every Web object can or should be cached. The best candidates are the most requested, largest objects (especially images) that aren't likely to change very often -- that is, objects with what's called a long "freshness" time.

An object's headers can be coded in such a way that cache servers won't cache it. Also, authenticated or secure objects can't be cached, and neither can the results of scripts.
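
A cache's decision might look roughly like the following helper, which is hypothetical and far from the complete rules in the HTTP specification: it refuses to cache objects whose headers say no-store or private, objects requested with credentials, and responses that set cookies (a common sign of personalized script output).

    def is_cacheable(response_headers, request_headers):
        """Rough cacheability test for a shared cache (illustrative only)."""
        cache_control = response_headers.get("Cache-Control", "").lower()
        if "no-store" in cache_control or "private" in cache_control:
            return False      # the object's headers forbid shared caching
        if "Authorization" in request_headers:
            return False      # authenticated content
        if response_headers.get("Set-Cookie"):
            return False      # likely personalized output, such as a script result
        return True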

Kay is a freelance writer in Framingham, Mass. Contact him at russkay@charter.net.

SIDEBAR

Tips for Building a Site That's Cache-Aware

Besides using freshness information and validation, there are a number of other things you can do to make your site more cache-friendly.

- Be realistic and assign freshness information accurately.

- Refer to objects consistently.

- Use a common library of images and other elements.

- Make caches recognize regularly updated pages by specifying an appropriate expiration time (a sketch of this appears after this list).

- If a resource changes, change its name.

- Don't change files unnecessarily.

- Use cookies only where necessary.

- Minimize use of Secure Sockets Layer.
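
As an illustration of the expiration-time tip, here's a minimal sketch using Python's built-in http.server that attaches freshness headers to every response, so downstream caches know how long the page may be reused. The one-hour lifetime and the sample page are assumptions for the sketch, not recommendations.

    import time
    from email.utils import formatdate
    from http.server import BaseHTTPRequestHandler, HTTPServer

    class CacheFriendlyHandler(BaseHTTPRequestHandler):
        def do_GET(self):
            body = b"<html><body>Hello, cache!</body></html>"
            self.send_response(200)
            self.send_header("Content-Type", "text/html")
            self.send_header("Content-Length", str(len(body)))
            # Tell downstream caches this page may be reused for one hour.
            self.send_header("Cache-Control", "public, max-age=3600")
            self.send_header("Expires", formatdate(time.time() + 3600, usegmt=True))
            self.end_headers()
            self.wfile.write(body)

    if __name__ == "__main__":
        HTTPServer(("localhost", 8000), CacheFriendlyHandler).serve_forever()

A long lifetime like this suits objects that rarely change, such as a shared logo; pages that update regularly should advertise a correspondingly shorter one.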
