Five ways to improve Web site uptime

You can't fix the Internet, but you can tap these tools to decrease downtime for your organization's Web site

When the site GDGT.com went live this past summer, Ryan Block was expecting a lot of interest.

Prior to launch, the former Engadget.com editor in chief had built up momentum for the site -- which allows everyday users to write gadget reviews -- by informing bloggers and online publications. "We were excited but wary, because there's always an x factor," says Block. "We did weeks of performance and load testing, but lab testing will always differ from real-world usage, and we knew there would still be issues here and there that we wouldn't find until thousands of people were actually using the site."

Indeed, on Aug. 4, GDGT went live -- and a few hours later Block was forced to post a message explaining that the site was not available because of unanticipated levels of interest, which included thousands of users signing up for accounts and visiting the home page. Block says the problem was related to database performance.

Joe Skorupa, a Gartner Inc. analyst, says GDGT experienced what he calls "catastrophic success" -- an unusual surge in traffic that can bring a Web site to its knees. It seems there's another story every week about a site experiencing a colossal failure: a Twitter outage, Facebook downtime or Gmail problems. (Twitter Inc., Facebook Inc. and Google Inc. representatives all declined to comment on outages.)

Skorupa says there is a common misunderstanding about the public Internet -- which is notoriously flaky and consists of many interconnected networks. It's not the same as corporate cloud computing, private networks that occasionally use the public Internet, or financial services on the Web, which are mandated to be available 24/7. In his view, the public Internet should not be viewed as being as reliable as, say, a private connection between bank offices.

Key uptime tips

Jason Mayes, senior Web development engineer at XMOS Ltd., offers his own list of tips for dealing with site congestion and other potential server outage problems:

  • 1. Optimize your static content. Compress images to get every last kilobyte out of them while retaining visual quality.

  • 2. Use Minify (a PHP5 app) for your CSS and JavaScript to compress Web data, and put JavaScript at the end of the document where possible.

  • 3. Add "expires" headers to content to prevent browsers from continually downloading the same files as a user browses your Web site.

  • 4. Ensure that your Web server delivers content in a compressed state -- for example, mod_deflate for Apache. Clearly this should not be applied to files such as images -- which are already compressed -- so make sure you set up rules correctly.

  • 5. Make fewer HTTP requests to fetch your Web site. Combine CSS files into one, and combine JavaScript files into one where possible. Include these files only on the pages where they are required.
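Tips 3 and 4 above can be sketched as Apache httpd configuration. This is a minimal, illustrative fragment, assuming mod_expires and mod_deflate are enabled; the content types and cache lifetimes shown are examples, not recommendations for any particular site:

```apache
# Tip 3: "expires" headers so browsers cache static assets
# instead of re-downloading them on every page view
<IfModule mod_expires.c>
    ExpiresActive On
    ExpiresByType image/png              "access plus 1 month"
    ExpiresByType image/jpeg             "access plus 1 month"
    ExpiresByType text/css               "access plus 1 week"
    ExpiresByType application/javascript "access plus 1 week"
</IfModule>

# Tip 4: compress text content on the wire, but leave images
# alone -- they are already compressed
<IfModule mod_deflate.c>
    AddOutputFilterByType DEFLATE text/html text/css application/javascript
</IfModule>
```

Scoping the DEFLATE filter by MIME type, as here, is one way to satisfy the "set up rules correctly" caveat in tip 4: images never match the listed types, so they pass through untouched.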

There is also a misunderstanding about a site "going down." Typically, the server has not crashed entirely; it's more likely a data center problem, says James Staten, a Forrester Research Inc. analyst.

"A service doesn't go down, but gets so slow that it's viewed as nonresponsive," says Staten. "Load balancers take all incoming requests and route them to the Web servers based on their responsiveness. This architecture can become unresponsive when it's overwhelmed by the number of requests for content."
