How to choose between scale-up vs. scale-out architectures for backup and recovery
- 04 December, 2012 16:44
This vendor-written tech primer has been edited by Network World to eliminate product promotion, but readers should note it will likely favor the submitter's approach.
There is a lot of discussion in the storage industry in regard to "scale-up" versus "scale-out" architectures for backup and recovery operations. More and more organizations are reducing or eliminating the use of tape by deploying disk-based appliances that use deduplication. But the architectural approach used by the appliance vendor can make a significant difference to the performance, scalability and total cost of the selected solution.
Before discussing the pros and cons of the scale-up and scale-out approaches, let's define the terms:
" Scale-up typically refers to architectures that use a single, fixed resource controller for all processing. To add capacity, you attach disk-shelves up to the maximum for which the controller is rated.
" Scale-out typically refers to architectures that scale performance and capacity separately or in lockstep by not relying on a single controller, but instead provide processing power with each unit of disk.
With either approach a key thing to understand about disk-based backup is that, without deduplication, the economics do not work well against tape. Because many organizations keep weeks, months or even years of backup data, the actual amount of backup data is typically many times the amount of live data in the environment. This makes straight disk too expensive. So combining disk and deduplication is the first step to having an actual product for the backup and recovery market.
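The retention arithmetic behind that claim can be sketched with a rough back-of-envelope model. The numbers below (retention schedule, change rate, 10:1 dedup ratio) are illustrative assumptions, not figures from any vendor; real ratios vary widely by workload:

```python
# Back-of-envelope model of backup retention economics. All inputs are
# illustrative assumptions; real change rates and dedup ratios vary by workload.

def raw_backup_tb(live_tb, weekly_fulls_kept, daily_change_rate, daily_incrementals_kept):
    """Total retained backup data, in TB, without deduplication."""
    fulls = live_tb * weekly_fulls_kept                               # full copies kept
    incrementals = live_tb * daily_change_rate * daily_incrementals_kept  # daily deltas kept
    return fulls + incrementals

live_tb = 50  # live data in the environment
raw = raw_backup_tb(live_tb, weekly_fulls_kept=12,
                    daily_change_rate=0.02, daily_incrementals_kept=30)
dedup_ratio = 10  # assumed 10:1 reduction across the retained copies

print(f"raw retained:     {raw:.0f} TB ({raw / live_tb:.1f}x the live data)")
print(f"after 10:1 dedup: {raw / dedup_ratio:.0f} TB")
```

Even this modest schedule retains many times the live data set, which is why raw disk cannot match tape on price and deduplication is what makes disk-based backup economical.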
On the face of it, scale-up architectures represent a simple premise: Disk plus deduplication creates a backup and recovery appliance that can meet the economics of backup. But backup and recovery is more than just a storage problem. In fact, backup and recovery is:
" A data movement problem -- moving significant data amounts within a pre-defined backup window.
" A data processing problem -- data needs to be processed to be stored in deduplicated form.
" And a storage problem -- deduplication allows for more backup data to be stored in far less disk space.
That said, scale-up approaches offer several advantages: They have a perceived simplicity as they typically only have one computing element where you do configuration and management; in some cases, a scale-up architecture may require less power and cooling; scale-up approaches have been around longer and are more familiar to administrators, generally offering a good feature set and functionality to suit their purpose.
But data growth leads to performance problems in a scale-up architecture, and the reason is simple. Because these architectures include a single computing element that houses all network ports, processors and memory, their performance is limited by the capabilities of that component. As data inevitably grows, only capacity (meaning more workload) can be added, until the maximum capacity of that controller is reached.
This leads to two significant problems:
" During the period of data growth, the length of all processes also grows. This includes backup time, deduplication time, replication time and recovery time. Obviously, if you throw more workload at a fixed resource and do not provide additional processing power, it takes longer to complete that work.
" At maximum capacity, you are faced with a fork-lift upgrade to a more powerful controller, which can be costly.
Scale-out architectures handle data growth differently. In a scale-out architecture, each building block includes (or can include) additional performance resources: network ports, processors, memory and, yes, disk. As a result, as data grows and capacity is added, processing power is also added.
This means data growth does not lead to longer times for backups, deduplication, replication and recovery. If the workload is quadrupled, the processing power of the architecture is also quadrupled. And there is no "maximum capacity." While vendors may limit how many devices can coexist in a singly managed system, there is never the need for a forklift upgrade as devices can continue to be added individually, even if that means starting a "new system."
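The contrast can be sketched the same way: if each node brings both capacity and throughput, nodes are added along with the data and the window stays roughly flat. The per-node figures are again illustrative assumptions:

```python
# Backup window under a scale-out architecture: each node contributes both
# capacity and throughput, so the window stays flat as data grows.
# Per-node figures are illustrative assumptions, not vendor specifications.
import math

NODE_CAPACITY_TB = 10
NODE_TB_PER_HOUR = 2.0

def nodes_needed(data_tb):
    """Smallest number of nodes whose combined capacity holds the data."""
    return math.ceil(data_tb / NODE_CAPACITY_TB)

def backup_hours(data_tb):
    """Window when aggregate throughput scales with the node count."""
    return data_tb / (nodes_needed(data_tb) * NODE_TB_PER_HOUR)

for data_tb in (10, 20, 40):
    print(f"{data_tb} TB on {nodes_needed(data_tb)} node(s) -> "
          f"{backup_hours(data_tb):.0f} h window")
```

Quadruple the data and the node count quadruples with it, so the window holds steady instead of stretching.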
Another difficulty with the scale-up approach relates to system sizing. Many scale-up vendors offer a variety of controller sizes, meaning controllers can handle different amounts of maximum disk. As you would expect, more powerful controllers that allow for more capacity come at a higher cost. So a customer has to decide whether to purchase a controller that can handle a larger environment than currently needed, or purchase a smaller controller knowing they will reach maximum capacity sooner.
Scale-out approaches avoid the system sizing problem because of the modularity of the architecture. Customers can right-size their purchase to the current environment plus reasonable growth. Then as data grows, more building blocks can be added as needed without concern for a forklift upgrade. This makes the upfront purchase potentially more cost-effective and avoids costly upgrades downstream.
A final argument against the scale-up approach is technology obsolescence. IT professionals are all too familiar with buying a new product only to see it reach end of life shortly after purchase. The problem is exacerbated when you buy a larger controller for the sake of expansion runway: the controller locks you into the then-current technology, and when the vendor releases a controller based on newer technology, the only way to leverage it is to go through another forklift upgrade.
Scale-out approaches may avoid this (depending on the vendor) by allowing users to mix and match different generations of building blocks in the same system. Assuming the vendor guarantees the hardware can be upgraded to the latest and greatest software, you can avoid the need to rip and replace expensive components to take advantage of the vendor's newest offerings.
There are a number of vendors that offer scale-up approaches to disk-based backup and a number that offer scale-out. As you decide, weigh the perceived simplicity and familiarity of scale-up against the technical and economic benefits of scale-out approaches.
This article was submitted by ExaGrid Systems, a provider of scalable disk-based backup solutions with data deduplication.