Parity

Mirroring is a data redundancy technique used by some RAID levels, in particular RAID level 1, to provide data protection on a RAID array. While mirroring has some advantages and is well-suited for certain RAID implementations, it also has some limitations. It has a high overhead cost, because fully 50% of the drives in the array are reserved for duplicate data; and it doesn't improve performance as much as data striping does for many applications. For this reason, a different way of protecting data is provided as an alternative to mirroring. It involves the use of parity information, which is redundancy information calculated from the actual data values.

You may have heard the term "parity" before, used in the context of system memory error detection; in fact, the parity used in RAID is very similar in concept to parity RAM. The principle behind parity is simple: take "N" pieces of data, and from them, compute an extra piece of data. Take the "N+1" pieces of data and store them on "N+1" drives. If you lose any one of the "N+1" pieces of data, you can recreate it from the "N" that remain, regardless of which piece is lost. Parity protection is used with striping, and the "N" pieces of data are typically the blocks or bytes distributed across the drives in the array. The parity information can either be stored on a separate, dedicated drive, or be mixed with the data across all the drives in the array.

The parity calculation is typically performed using a logical operation called "exclusive OR" or "XOR". As you may know, the "OR" logical operator is "true" (1) if either of its operands is true, and "false" (0) if neither is true. The exclusive OR operator is "true" if and only if exactly one of its operands is true; it differs from "OR" in that if both operands are true, "XOR" is false. This truth table for the two operators will illustrate:
      Input        Output
    #1    #2     "OR"  "XOR"
     0     0       0     0
     0     1       1     1
     1     0       1     1
     1     1       1     0

Uh huh. So what, right? Well, the interesting thing about "XOR" is that it is a logical operation that, if performed twice in a row, "undoes itself". If you calculate "A XOR B" and then take that result and do another "XOR B" on it, you get back A, the value you started with. That is to say, "A XOR B XOR B = A". This property is exploited for parity calculation under RAID. If we have four data elements, D1, D2, D3 and D4, we can calculate the parity data, "DP", as "D1 XOR D2 XOR D3 XOR D4". Then, if we know any four of D1, D2, D3, D4 and DP, we can XOR those four together and it will yield the missing element.
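The "undoes itself" property is easy to check for yourself in a couple of lines of Python (the byte values here are arbitrary examples, not anything from a real array):

```python
# XOR applied twice with the same operand returns the original value:
# (A XOR B) XOR B = A. In Python, "^" is the bitwise XOR operator.
A = 0b10100101
B = 0b11110000

combined = A ^ B        # like computing parity from A using B
restored = combined ^ B  # "undo" by XOR'ing with the same B again

assert restored == A
print(bin(restored))  # 0b10100101
```

The same idea extends to any number of operands, which is exactly what the "D1 XOR D2 XOR D3 XOR D4" parity calculation relies on.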

Let's take an example to show how this works; you can do this yourself easily on a sheet of paper. Suppose we have the following four bytes of data: D1=10100101, D2=11110000, D3=00111100, and D4=10111001. We can "XOR" them together as follows, one step at a time:

    D1 XOR D2 XOR D3 XOR D4
    = ( (D1 XOR D2) XOR D3) XOR D4
    = ( (10100101 XOR 11110000) XOR 00111100) XOR 10111001
    = (01010101 XOR 00111100) XOR 10111001
    = 01101001 XOR 10111001
    = 11010000

So "11010000" becomes the parity byte, DP. Now let's say we store these five values on five hard disks, and hard disk #3, containing value "00111100", goes el-muncho. We can retrieve the missing byte simply by XOR'ing together the other three original data pieces, and the parity byte we calculated earlier, as so:

    D1 XOR D2 XOR D4 XOR DP
    = ( (D1 XOR D2) XOR D4) XOR DP
    = ( (10100101 XOR 11110000) XOR 10111001) XOR 11010000
    = (01010101 XOR 10111001) XOR 11010000
    = 11101100 XOR 11010000
    = 00111100

Which is D3, the missing value. Pretty neat, huh? :^) This operation can be done on any number of bits, incidentally; I just used eight bits for simplicity. It's also a very simple binary calculation--which is a good thing, because it has to be done for every bit stored in a parity-enabled RAID array.

Compared to mirroring, parity (used with striping) has some advantages and disadvantages. The most obvious advantage is that parity protects data against any single drive in the array failing without requiring the 50% "waste" of mirroring; only one of the "N+1" drives contains redundancy information. (The overhead of parity is equal to (100/N)% where N is the total number of drives in the array.) Striping with parity also allows you to take advantage of the performance advantages of striping. The chief disadvantages of striping with parity relate to complexity: all those parity bytes have to be computed--millions of them per second!--and that takes computing power.
This means a hardware controller that performs these calculations is required for high performance; if you do software RAID with striping and parity, the system CPU will be dragged down doing all these computations. Also, while you can recover from a lost drive under parity, the missing data all has to be rebuilt, which has its own complications; recovering from a lost mirrored drive is comparatively simple. All of the RAID levels from RAID 3 to RAID 7 use parity; the most popular of these today is RAID 5. RAID 2 uses a concept similar to parity but not exactly the same.
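The whole worked example above can be replicated in a few lines of Python, treating each byte as the contents of one drive:

```python
# Reproduce the worked parity example: four data bytes plus one
# parity byte, then recovery of a "failed" drive's byte.
D1, D2, D3, D4 = 0b10100101, 0b11110000, 0b00111100, 0b10111001

# Compute the parity byte: DP = D1 XOR D2 XOR D3 XOR D4
DP = D1 ^ D2 ^ D3 ^ D4
assert DP == 0b11010000          # matches the hand calculation

# Drive #3 (holding D3) fails: rebuild its byte by XOR'ing together
# the three surviving data bytes and the parity byte.
rebuilt = D1 ^ D2 ^ D4 ^ DP
assert rebuilt == D3             # 00111100 recovered
```

A real controller does this for every bit in the array, which is why the computational cost discussed above matters.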

RAID Performance Issues
RAID was originally developed as a way of protecting data by providing fault tolerance; that's the reason for the "R" at the front of the acronym. Today, while matters of reliability, availability and fault tolerance continue to be essential to many of those who use RAID, performance issues are being given about as much attention. There are in fact whole classes of implementers who build RAID arrays solely for performance considerations, with no redundancy or data protection at all. Even those who do employ redundancy obviously care about getting the most from their array hardware.

The key to performance increases under RAID is parallelism. The ability to access multiple disks simultaneously allows data to be written to or read from a RAID array faster than would be possible with a single drive. In fact, RAID is in some ways responsible for the demise of esoteric high-performance hard disk designs, such as drives with multiple actuators. A multiple-drive array essentially has "multiple actuators" without requiring special engineering; it's a win-win solution for both manufacturers (who hate low-volume specialty products) and consumers (who hate the price tags that come with low-volume specialty products).

There's no possible way to discuss every factor that affects RAID performance in a separate section like this one--and there really isn't any point in doing so anyway. As you read about RAID levels, and RAID implementation and configuration, many issues related to performance will come up. In this section I want to explore some of the fundamentals, though: the basic concepts that impact overall array performance. One of my goals is to try to define better what exactly is meant by "performance" in a RAID context. Most people who know something about RAID would say "RAID improves performance", but some types improve it better than others, and in different ways than others.
Understanding this will help you differentiate between the different RAID levels on a performance basis.

Read and Write Performance

Hard disks perform two distinct functions: writing data, and then reading it back. In most ways, the electronic and mechanical processes involved in these two operations are very similar. However, even within a single hard disk, read and write performance are often different in small but important ways. This is discussed in more detail here. When it comes to RAID, the differences between read and write performance are magnified. Because of the different ways that disks can be arranged in arrays, and the different ways data can be stored, in some cases there can be large discrepancies in how "method A" compares to "method B" for read performance, as opposed to write performance.

The fundamental difference between reading and writing under RAID is this: when you write data in a redundant environment, you must access every place where that data is stored; when you read the data back, you only need to read the minimum amount of data necessary to retrieve the actual data--the redundant information does not need to be accessed on a read. OK, this isn't as complicated as I probably just made it sound. :^) Let's see how various storage techniques used in RAID differ in this regard:

Mirroring: Read performance under mirroring is far superior to write performance. Let's suppose you are mirroring two drives under RAID 1. Every piece of data is duplicated, stored on both drives. This means that every byte of data stored must be written to both drives, making write performance under RAID 1 actually a bit slower than just using a single disk; even if it were as fast as a single disk, both drives are tied up during the write. But when you go to read back the data? There's absolutely no reason to access both drives; the controller, if intelligently programmed, will only ask one of the drives for the data--the other drive can be used to satisfy a different request. This makes RAID 1 significantly faster than a single drive for reads, under most conditions.

Striping Without Parity: A RAID 0 array has about equal read and write performance (or more accurately, roughly the same ratio of read to write performance that a single hard disk would have). The reason is that the "chopping up" of the data without parity calculation means you must access the same number of drives for reads as you do for writes.

Striping With Parity: As with mirroring, write performance when striping with parity (RAID levels 3 through 6) is worse than read performance, but unlike mirroring, the "hit" taken on a write when doing striping with parity is much more significant. Here's how the different accesses fare:

o For reads, striping with parity can actually be faster than striping without parity. The parity information is not needed on reads, and this makes the array behave during reads in a way similar to a RAID 0 array, except that the data is spread across one extra drive, slightly improving parallelism.

o For sequential writes, there is the dual overhead of parity calculations as well as having to write to an additional disk to store the parity information. This makes sequential writes slower than striping without parity.

o The biggest discrepancy under this technique is between random reads and random writes. Random reads that only require parts of a stripe from one or two disks can be processed in parallel with other random reads that only need parts of stripes on different disks. In theory, random writes would be the same, except for one problem: every time you change any block in a stripe, you have to recalculate the parity for that stripe, which requires two writes plus reading back all the other pieces of the stripe! Consider a RAID 5 array made from five disks, and a particular stripe across those disks that happens to have data on drives #3, #4, #5 and #1, and its parity block on drive #2. You want to do a small "random write" that changes just the block in this stripe on drive #3. Without the parity, the controller could just write to drive #3 and it would be done. With parity though, the change to drive #3 affects the parity information for the entire stripe. So this single write turns into a read of drives #4, #5 and #1, a parity calculation, and then a write to drive #3 (the data) and drive #2 (the newly-recalculated parity information). This is why striping with parity stinks for random write performance. (This is also why RAID 5 implementations in software are not recommended if you are interested in performance.)

Another hit to write performance comes from the dedicated parity drive used in certain striping with parity implementations (in particular, RAID levels 3 and 4). Since only one drive contains parity information, every write must write to this drive, turning it into a performance bottleneck. Under implementations with distributed parity, like RAID 5, all drives contain data and parity information, so there is no single bottleneck drive; the overheads mentioned just above still apply though.
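The small-write sequence described above can be sketched in a few lines of Python. This is a toy model, not any real controller's logic: the stripe is a dictionary of one block per drive, and the drive numbering follows the five-disk example in the text.

```python
# Toy sketch of the RAID 5 small-write penalty: changing one data
# block forces reads of the other data blocks plus two writes.
from functools import reduce

# One stripe: data blocks on drives #3, #4, #5, #1; parity goes on drive #2.
stripe = {3: 0b1010, 4: 0b0110, 5: 0b1111, 1: 0b0001}
parity = reduce(lambda a, b: a ^ b, stripe.values())  # XOR of all data blocks

def small_write(stripe, drive, new_block):
    """Change one data block. Per the text, read the OTHER data blocks
    (the extra I/O!), recompute parity, then write data and parity."""
    others = [blk for d, blk in stripe.items() if d != drive]  # extra reads
    new_parity = reduce(lambda a, b: a ^ b, others, new_block)
    stripe[drive] = new_block   # write #1: the data block (drive #3)
    return new_parity           # write #2: goes to the parity drive (#2)

parity = small_write(stripe, drive=3, new_block=0b0011)

# The invariant still holds: XOR of all data blocks equals the parity.
assert reduce(lambda a, b: a ^ b, stripe.values()) == parity
```

Incidentally, many real controllers shortcut this by reading only the old data block and old parity, then computing "new parity = old parity XOR old data XOR new data"; that caps the cost at two reads and two writes no matter how wide the stripe is, but it is still far more work than the single write a non-parity array would need.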

Note: As if the performance hit for writes under striping with parity weren't bad enough, there is even one more piece of overhead! The controller has to make sure that when it changes data and its associated parity, all the changes happen simultaneously; if the process were interrupted in the middle, say, after the data were changed and not the parity, the integrity of the array would be compromised. To prevent this, a special process must be used, sometimes called a two-phase commit. This is similar to the techniques used in database operations, for example, to make sure that when you transfer money from your checking account to your savings account, it doesn't get subtracted from one without being certain that it was added to the other (or vice-versa). More overhead, more performance slowdown.

The bottom line that results from the difference between read and write performance is that the net performance improvement provided by many RAID levels, especially ones involving striping with parity, depends greatly on the ratio of reads to writes in the intended application. Some applications have a relatively low number of writes as a percentage of total accesses; for example, a web server. For these applications, the very popular RAID 5 solution may be an ideal choice. Other applications have a much higher percentage of writes; for example, an interactive database or development environment. These applications may be better off with a RAID 01 or 10 solution, even if it does cost a bit more to set up.
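To make the read/write-ratio point concrete, here's a back-of-envelope model of physical I/Os per logical request. The costs are the standard textbook counts (RAID 1 write hits both mirrors; a RAID 5 small write costs two reads and two writes), not measurements of any real array:

```python
# Rough model: average physical I/Os per logical request, by RAID
# level and read fraction. Textbook costs, not real measurements.
IO_COST = {
    "raid1": {"read": 1, "write": 2},  # every write goes to both mirrors
    "raid5": {"read": 1, "write": 4},  # small write: read data+parity, write data+parity
}

def avg_ios(level, read_fraction):
    c = IO_COST[level]
    return read_fraction * c["read"] + (1 - read_fraction) * c["write"]

# A read-heavy web server (95% reads) vs. a write-heavy database (60% reads):
print(round(avg_ios("raid5", 0.95), 2))  # 1.15
print(round(avg_ios("raid1", 0.95), 2))  # 1.05
print(round(avg_ios("raid5", 0.60), 2))  # 2.2
print(round(avg_ios("raid1", 0.60), 2))  # 1.4
```

Even this crude model shows the pattern described above: RAID 5's penalty is modest for the read-heavy workload but balloons for the write-heavy one, which is where mirrored solutions like RAID 01/10 earn their extra cost.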