When it comes to deciding which RAID levels to configure, it’s important to understand how parity impacts performance. Most people who work in IT are fairly familiar with the seven standard RAID levels, but just to make sure that baseline is established, let’s go over them very quickly:
RAID Level | How it works | Example of 12 drive configuration | Drive failure tolerance |
RAID 0 | Stripes data across all disks. | 1x 12 drive stripe (12 drives space is usable) | Lose any drive and it’s all gone. |
RAID 1 | Data mirrored between two disks | 6x separate 2 drive mirrors (6 drives space is usable) | 1 Drive failure in each mirror – up to 6 (if the right ones fail) |
RAID 5 | Data is striped across N-1 disks, with the remaining disk used for parity. Which disk stores the parity is rotated per stripe. | 1x 12 drive (11 drives space is usable) | 1 Drive failure |
RAID 6 | Data is striped across N-2 disks, with one disk used for the first calculated parity (N₁) and another for the second calculated parity (N₂). Each parity is calculated separately. Which disks store N₁ and N₂ is rotated per stripe (each to a separate disk). | 1x 12 drive (10 drives space is usable) | 2 Drive failures |
RAID 10 | Data is striped across multiple RAID 1s. | 1x 12 drive [6 mirrors that are striped] 6 drives usable | 1 drive failure per RAID 1 (up to 6 drive failures if the right ones fail) |
RAID 50 | Data is striped across multiple RAID 5s. | 1x 12 drive (2x RAID 5 of 6 drives) 10 drives usable | 1 drive failure per RAID 5 (up to 2 drive failures if the right ones fail) |
RAID 60 | Data is striped across multiple RAID 6s. | 1x 12 drive (2x RAID 6 of 6 drives) 8 drives usable | 2 drive failures per RAID 6 (up to 4 drive failures if the right ones fail) |
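If it helps to see the capacity math spelled out, here is a minimal sketch in Python (the helper name and layout parameters are mine, not any vendor's tooling) that reproduces the usable-drive counts above for a 12-drive shelf.

```python
# Hypothetical helper: usable data drives for a given RAID layout.
# total    = drives in the whole configuration
# sets     = how many sub-arrays the drives are split into (1 for plain levels)
# parity   = parity drives consumed per sub-array (0, 1, or 2)
# mirrored = True for RAID 1/10, where half of each set is a copy

def usable_drives(total, sets=1, parity=0, mirrored=False):
    per_set = total // sets
    if mirrored:
        return sets * (per_set // 2)
    return sets * (per_set - parity)

twelve = 12
print("RAID 0 :", usable_drives(twelve))                        # 12
print("RAID 1 :", usable_drives(twelve, sets=6, mirrored=True)) # 6
print("RAID 5 :", usable_drives(twelve, parity=1))              # 11
print("RAID 6 :", usable_drives(twelve, parity=2))              # 10
print("RAID 10:", usable_drives(twelve, mirrored=True))         # 6
print("RAID 50:", usable_drives(twelve, sets=2, parity=1))      # 10
print("RAID 60:", usable_drives(twelve, sets=2, parity=2))      # 8
```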
Well, judging from that chart, something like RAID 5/6 or RAID 50/60 would be what I want, right? It gives me a high tolerance for drive failures without losing too many drives to protection. Of course, there are other considerations, specifically performance.
Let’s go through the same chart again, except on the performance side. We’ll assume some industry-generic numbers for IOPs: 125 IOPs for a 10k drive. We’ll also assume our previous 12-drive configurations, and we’ll assume we have zero caching on our server, our RAID card or our drives. Further, we assume our RAID controller has unlimited horsepower to do the parity math. We’re focusing purely on the RAID overhead here.
RAID Level | Pure drive performance (IOPs) | Worst case write IOPs | Best case write IOPs | Worst case read IOPs | Best case read IOPs |
RAID 0 | 1500 | 1500 (100%) | 1500 (100%) | 1500 (100%) | 1500 (100%) |
RAID 1 | 1500 | 125 each RAID set (8.3%) | 125 each RAID set (8.3%) | 250 each RAID set (16.6%) | 250 each RAID set (16.6%) |
RAID 5 | 1500 | 375 (25%) | 1375 (91.6%) | 1375 (91.6%) | 1375 (91.6%) |
RAID 6 | 1500 | 250 (16.6%) | 1250 (83.3%) | 1250 (83.3%) | 1250 (83.3%) |
RAID 10 | 1500 | 750 (50%) | 750 (50%) | 1500 (100%) | 1500 (100%) |
RAID 50 | 1500 | 375 (25%) | 1250 (83.3%) | 1250 (83.3%) | 1250 (83.3%) |
RAID 60 | 1500 | 250 (16.6%) | 1000 (66.6%) | 1000 (66.6%) | 1000 (66.6%) |
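A quick way to sanity-check the worst-case write column is the classic write-penalty multiplier: one back-end I/O per host write for RAID 0, two for mirrors, four for single parity, and six for dual parity. The sketch below (Python, illustrative names, same 12 x 125 IOPs assumption, rounding aside) reproduces those numbers.

```python
# Worst-case write IOPs = (drives * per-drive IOPs) / write penalty.
# Write penalty: number of back-end I/Os the array performs per host write.
DRIVES = 12
DRIVE_IOPS = 125
RAW_IOPS = DRIVES * DRIVE_IOPS  # 1500

WRITE_PENALTY = {
    "RAID 0": 1,   # write the data, nothing else
    "RAID 1": 2,   # write the data to both halves of the mirror
    "RAID 5": 4,   # read data + read parity + write data + write parity
    "RAID 6": 6,   # read data + read P + read Q + write data + write P + write Q
    "RAID 10": 2,  # mirrored stripes behave like RAID 1 for writes
    "RAID 50": 4,  # each sub-array pays the RAID 5 penalty
    "RAID 60": 6,  # each sub-array pays the RAID 6 penalty
}

for level, penalty in WRITE_PENALTY.items():
    worst = RAW_IOPS / penalty
    print(f"{level:7s} worst-case writes: {worst:6.0f} IOPs ({worst / RAW_IOPS:.1%})")

# The RAID 1 row in the table above is shown per two-drive mirror:
# (2 * DRIVE_IOPS) / 2 = 125 IOPs per set, or 8.3% of the 1500 raw IOPs.
```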
So, that paints a pretty different picture. Let’s walk through the source of these different numbers.
RAID 0 – is just a stripe across everything, so you get the full performance of every drive. It would be amazing… if losing a single drive didn’t lose ALL your data.
RAID 1 – is several separate mirrors, each of which is capable of very little IO on its own. Because every write has to go to both drives, writes suffer a 50 percent degradation from theoretical performance. Reads, however, can be serviced by both drives as if they were striped, giving each mirror 100 percent read performance.
RAID 5 – For every full stripe write (net new data, or a full rewrite of the entire stripe), data is written to N-1 drives and a calculated parity is written to the remaining drive, so maximum write performance is N-1 drives. If it isn’t a full stripe write, though, the existing data has to be read, the existing parity read, the new data written, and then the new parity written. (This is an oversimplified explanation if you actually understand all the underlying parts of the RAID 5 algorithm, but that is way too deep to explain here.) So worst-case write performance suffers a 75 percent degradation, because every write takes four operations. Reads are N-1 because only N-1 disks in any stripe contain data (the parity is read and checked as well).
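To make that four-operation read-modify-write concrete, here is a minimal sketch of the XOR parity update, assuming a single changed block; the function names are illustrative, not any particular controller’s API.

```python
# RAID 5 small-write (read-modify-write) sketch using XOR parity.
# Updating one block costs four back-end I/Os regardless of stripe width:
#   1. read the old data block     2. read the old parity block
#   3. write the new data block    4. write the new parity block

def xor_blocks(a: bytes, b: bytes) -> bytes:
    """Bytewise XOR of two equally sized blocks."""
    return bytes(x ^ y for x, y in zip(a, b))

def raid5_small_write(old_data: bytes, old_parity: bytes, new_data: bytes):
    # New parity = old parity XOR old data XOR new data.
    # The unchanged data blocks on the other drives never need to be read.
    new_parity = xor_blocks(xor_blocks(old_parity, old_data), new_data)
    return new_data, new_parity   # both get written back (I/Os 3 and 4)

# Tiny demonstration with 4-byte "blocks" on a 3-data-drive stripe:
d0, d1, d2 = b"\x01\x02\x03\x04", b"\x10\x20\x30\x40", b"\x0a\x0b\x0c\x0d"
parity = xor_blocks(xor_blocks(d0, d1), d2)

new_d1 = b"\xff\xff\xff\xff"
_, new_parity = raid5_small_write(d1, parity, new_d1)

# The incrementally updated parity matches a full recalculation:
assert new_parity == xor_blocks(xor_blocks(d0, new_d1), d2)
```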
RAID 6 – For every full stripe write, data is written to N-2 drives, a parity is written to one parity drive, and then a completely separate parity is written to another parity drive. When we don’t have a full stripe write, though, we have to read the existing data and both existing parities, write our new data, write the first parity, and then write the second parity… so worst-case write performance suffers an 83.3 percent degradation, because every write takes six operations. Reads are simply N-2.
For RAID 10, 50, and 60, we simply put together our previous RAID sets to get our overall performance.
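As a quick worked example of that composition, assuming the same 125 IOPs drives, the sketch below (illustrative helper names) rebuilds the best-case write column for the nested levels from their sub-arrays: two 6-drive RAID 5 sets give 2 x 5 x 125 = 1250 IOPs, and so on.

```python
# Best-case (full stripe) write IOPs for the nested levels, built from their sub-arrays.
DRIVE_IOPS = 125

def raid5_set_best_writes(drives):    # full stripe: N-1 data drives do useful work
    return (drives - 1) * DRIVE_IOPS

def raid6_set_best_writes(drives):    # full stripe: N-2 data drives do useful work
    return (drives - 2) * DRIVE_IOPS

def raid1_set_best_writes(drives=2):  # mirror: both drives write the same data
    return (drives // 2) * DRIVE_IOPS

print("RAID 10:", 6 * raid1_set_best_writes())    # 6 mirrors  ->  750 IOPs
print("RAID 50:", 2 * raid5_set_best_writes(6))   # 2x RAID 5  -> 1250 IOPs
print("RAID 60:", 2 * raid6_set_best_writes(6))   # 2x RAID 6  -> 1000 IOPs
```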
So why would we use certain RAID levels?
RAID 5 gives a large percentage of capacity with protection. As long as writes are full stripe, the RAID penalty isn’t that bad either; but we only have one parity, which can cause long-term data reliability issues.
RAID 6 gives a large percentage of capacity with better protection, but at the cost of more performance. Sadly, it is becoming more and more necessary as drives get larger and larger: rebuild times grow, and with them the chance of a secondary failure or of bit-rot during the rebuild.
RAID 10 gives very predictable performance with minimal variability, at the cost of a lot of our storage.
RAID 50 gives larger volumes while allowing the RAID controller to keep each parity stripe across a smaller number of disks.
RAID 60 exists for the same reason as RAID 50, except with extra protection.
In reality, there are a LOT of other factors, but these reasons are why a lot of people go to RAID 50 or RAID 60 rather than a large RAID 5/6.
All of these concerns still apply to SANs too; they just often have more powerful RAID controllers and are better at hiding the implications.