Readers often express concern over the comparative failure rates of serial ATA (SATA) and serial-attached SCSI (SAS) or Fibre Channel disk drives. Should buyers be put off if a SATA drive's mean time between failure (MTBF) rating works out to less than six years, while the MTBF of a SCSI or Fibre Channel device may work out to 15 years? After all, do you really keep drives in-house for 15 years? Or even for five? Of course not.
Almost no one has 12-year-old disk devices on the IT floor today. The progress of technology, in addition to making disks faster and more compact, has also made disk devices and their cabinetry more reliable, more serviceable, and cheaper to maintain. And they are, of course, much more manageable.
The result is that older products, even if they can still spin like a top, cease to be economically viable.
So if we don't keep disk devices until they fail, is the difference in "hardiness" between SATA and SCSI devices really a significant one? Yes, but read on.
Interested readers should keep in mind that the comparison here between the mean time between failure (MTBF) of SATA and SCSI (or Fibre Channel) drives is not just quantitative but also qualitative. While the test environments for the SCSI and Fibre Channel disk devices are essentially similar, the SATA test is significantly less stressful in terms of duty cycle. Simply put, the tests for SAS and Fibre Channel work the drive much harder than the test for SATA does.
Because the SATA rating was earned under a lighter duty cycle, it overstates how such a drive will hold up under a round-the-clock enterprise workload. In that environment, SAS and Fibre Channel devices can be expected to last much longer.
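To see why the ratings aren't directly comparable, it helps to convert an MTBF figure into an annualized failure rate (AFR) and account for how hard the drive is actually worked. The MTBF numbers and duty-cycle adjustment below are hypothetical illustrations, not vendor specifications:

```python
# Rough, illustrative conversion of an MTBF rating into an annualized
# failure rate (AFR). All figures here are made up for illustration.

HOURS_PER_YEAR = 8760

def annualized_failure_rate(mtbf_hours, duty_fraction=1.0):
    """Approximate AFR: expected failures per drive per year, scaled
    by the fraction of the year the drive spends under load."""
    return (HOURS_PER_YEAR * duty_fraction) / mtbf_hours

# A hypothetical 600,000-hour SATA rating vs. a hypothetical
# 1,200,000-hour SAS rating, both taken at face value:
sata_afr = annualized_failure_rate(600_000)    # ~1.46% per year
sas_afr = annualized_failure_rate(1_200_000)   # ~0.73% per year
print(f"SATA AFR: {sata_afr:.2%}, SAS AFR: {sas_afr:.2%}")
```

The catch the column describes is that the SATA figure was typically measured at a low duty cycle, so its face-value AFR understates what a 24x7 workload would produce.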
Is that difference significant? Yes. Should it dissuade anyone from using SATA devices? Absolutely not!
SATA devices, like all hardware, fail at a predictable rate, and the unfortunate reality is that all spinning media eventually die. But compensating for this effectively assured failure is one of the reasons we have RAID, and it is the reason some managers spend a few extra bucks to buy devices with hot spares.
The important point here is that if we are talking about disks inside an array, no one should be scared away from SATA just because of the lower MTBF.
Certainly there are lots of other dissimilarities between SCSI and SATA. Reduced vibration from each device inside the array and lower power consumption, for example, are two of the more obvious ways SCSI devices tend to outshine SATA. But individually (and even collectively) none of these is a showstopper.
IT managers should look at all of these issues, and then think about price, vendor relationships, and any number of other things. Is it going to be cheaper, even with more frequent failures, to go with SATA? That depends on your needs. Whatever those needs might be, however, it's a good bet that spending a few hours with a spreadsheet rather than with a spec sheet will give you the proper insight. Then, with most of the issues reasonably well understood, enterprise IT managers can make decisions based on the economics of IT, and not just on the technology.
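That spreadsheet exercise can be sketched in a few lines. The comparison below folds expected replacements into total cost over a service life; every price, rate, and lifetime here is a hypothetical placeholder for your own numbers:

```python
# A minimal "spreadsheet, not spec sheet" sketch: purchase cost plus
# expected replacement cost over a service life. All inputs below are
# hypothetical; substitute your own quotes and failure-rate estimates.

def total_cost(drive_count, unit_price, afr, years, replacement_cost):
    """Acquisition cost plus expected cost of replacing failed drives
    (parts plus service labor) over the service period."""
    expected_failures = drive_count * afr * years
    return drive_count * unit_price + expected_failures * replacement_cost

# 48 drives over a four-year life; SATA is cheaper per unit but is
# assumed here to fail (and cost slightly less to replace) more often:
sata = total_cost(48, unit_price=150, afr=0.015, years=4, replacement_cost=400)
sas = total_cost(48, unit_price=600, afr=0.007, years=4, replacement_cost=700)
print(f"SATA: ${sata:,.0f}  SAS: ${sas:,.0f}")
```

With these particular made-up inputs SATA comes out well ahead even after replacements, but the point of the exercise is that the answer falls out of your numbers, not the spec sheet's.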