Definitions of scalability range from the academic to the practical. In theory, a highly scalable system delivers neatly uniform increments of performance as each processing unit is added -- that is, growth in a near-linear fashion. The real world doesn't work that way, not even close. In the IT shop it's all about keeping things going, creating a system to deal with spikes in demand and to handle ever-evolving user workloads. Growing pains, yes; major reconstructive surgery, no.
Depending on the nature of your business, the challenge of scalability can be met with crossed-finger nonchalance, the brute force of over-provisioning everything, or the finesse of carefully bench testing operating systems, applications and hardware under simulated load conditions. Analysts from Meta Group, IDC and Ideas International discuss such definitions, the danger of being caught out on the planning front and other growth issues in our Roundtable.
But being able to bench test the real world would be a luxury for organisations like Hostworks, a service provider which hosts some of Australia's busiest Web sites, including MSN and Ticketek. Because its customers are unable to predict the volume of traffic their own customers will generate, it over-builds its systems to very high levels. Managing director Marty Gauvin has 'build rules' which call for capacity to handle double the highest-ever peak on its primary system, and the same on the backup -- delivering four times peak capacity. From a budgeting perspective for Gauvin, scalability means an application can continue to cope with additional load and perform effectively with no worse than a linear increase in cost. This ideal is thwarted because multivendor platforms don't work, or aren't priced, this way.
"The platform might have a database from Microsoft or an operating system from someone else, [and] we might have to put in three servers to get double the performance," he says.
The other problem, Gauvin notes, is that the system admin workload might increase dramatically with growth as "we shuffle things around manually a lot more".
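Gauvin's build rules and his complaint about sublinear platform scaling can be expressed as simple arithmetic. The sketch below uses hypothetical figures (the peak load and the server ratio are illustrative, not Hostworks data) to show how 2× peak on the primary, mirrored on the backup, yields four times peak capacity -- and how "three servers to get double the performance" breaks the linear-cost ideal.

```python
def provisioned_capacity(highest_peak: float) -> dict:
    """Hostworks-style 'build rules': capacity for double the highest-ever
    peak on the primary system, and the same again on the backup."""
    primary = 2 * highest_peak
    backup = 2 * highest_peak
    return {"primary": primary, "backup": backup, "total": primary + backup}


# Hypothetical peak of 10,000 requests/sec:
cap = provisioned_capacity(10_000)
assert cap["total"] == 4 * 10_000  # four times peak capacity

# The linear-cost ideal vs. Gauvin's example: needing three servers
# to double performance means cost grows 1.5x faster than capacity.
servers_needed = 3
performance_gain = 2
cost_ratio = servers_needed / performance_gain
assert cost_ratio > 1  # worse than a linear increase in cost
```

The point of the second assertion is the budgeting one Gauvin makes: when the cost ratio exceeds 1, each increment of capacity costs proportionally more than the last.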
Ben Wrigley, the IT manager at the Inter-Continental Hotel, has suffered growing pains with the capacity of hotel-specific custom software. But this seems to be a problem unlikely to go away in the near term. While large-scale packaged solutions may be put through the rigours of thorough scalability testing, custom solutions will always be needed within industry-specific niches. Meanwhile, KPMG banking and finance analyst Alok Chakravarti says managing scalability is getting easier. He recalls the old days of "big jumps", when the organisation went from one machine and system to another, but now enjoys a "much smoother passage" as capacity is added in smaller increments.
Running in parallel with scalability is the matter of availability. According to Meta Group analyst Dr Kevin McIsaac, it's time to ditch the talk of "five or six nines" availability and move on to more meaningful measures like mean time between failure and mean time to repair. It's a business call to decide how much you need to invest in order to beef up these availability measures. Finding the right investment level entirely depends on whether you're running the shop for the medical supplies company Eli Lilly, for Hostworks or for Queensland Health.
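McIsaac's preferred measures map back onto the familiar "nines" through a standard steady-state formula: availability is MTBF divided by MTBF plus MTTR. A minimal sketch, using hypothetical failure and repair figures rather than anything from the Roundtable:

```python
def availability(mtbf_hours: float, mttr_hours: float) -> float:
    """Steady-state availability from mean time between failures (MTBF)
    and mean time to repair (MTTR): A = MTBF / (MTBF + MTTR)."""
    return mtbf_hours / (mtbf_hours + mttr_hours)


# Hypothetical figures: a failure every 2,000 hours, two hours to repair.
a = availability(2_000, 2)
print(f"{a:.4%}")  # roughly 99.9%, i.e. about "three nines"
```

The formula makes McIsaac's point concrete: halving repair time buys as much availability as doubling time between failures, and the business can price each option separately instead of chasing an abstract nines target.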
Pump too hard and you'll burst the budget. Not enough, and you'll be flattened.