
RAID LEVELS

Basic functions

At the very simplest level, RAID combines multiple hard drives into a single logical unit. Thus, instead of seeing several different
hard drives, the operating system sees only one. RAID is typically used on server computers, and is usually (but not necessarily)
implemented with identically sized disk drives. With decreases in hard drive prices and wider availability of RAID options built into
motherboard chipsets, RAID is also being found and offered as an option in more advanced personal computers. This is especially
true in computers dedicated to storage-intensive tasks, such as video and audio editing.

History

Norman Ken Ouchi at IBM was awarded 1978 US patent 4,092,732 [1], titled "System for recovering data stored in failed memory
unit" and the claims for this patent describe what would later be termed RAID 5 with full stripe writes. This 1978 patent also
mentions that disk mirroring or duplexing (what would later be termed RAID 1) and protection with dedicated parity (that would
later be termed RAID 4) were prior art at that time.

The original RAID specification suggested a number of prototype "RAID levels", or combinations of disks. Each had theoretical
advantages and disadvantages. Over the years, different implementations of the RAID concept have appeared. Most differ
substantially from the original idealized RAID levels, but the numbered names have remained. This can be confusing, since one
implementation of RAID 5, for example, can differ substantially from another. RAID 3 and RAID 4 are often confused and even used
interchangeably.

RAID technology was first defined by David A. Patterson, Garth A. Gibson and Randy Katz at the University of California, Berkeley
in 1987. They studied the possibility of using two or more disks to appear as a single device to the host system and published a
paper: "A case for Redundant Arrays of Inexpensive Disks (RAID)" in June 1988 at the SIGMOD conference. [2] Their paper formally
defined RAID levels 1 through 5:

• section 7: "First Level RAID: Mirrored Disks"


• section 8: "Second Level RAID: Hamming Code for Error Correction"
• section 9: "Third Level RAID: Single Check Disk Per Group"
• section 10: "Fourth Level RAID: Independent Reads and Writes"
• section 11: "Fifth Level RAID: Spread data/parity over all disks (no single check disk)"

Their paper is also the origin of the term "RAID"; the "I" has since also been taken to stand for "Independent".
[3]

The very definition of RAID has been argued over the years. The use of the term redundant leads many to object to RAID 0 being
called a RAID at all. [citation needed] Similarly, the change from inexpensive to independent confuses many as to the intended purpose of
RAID. [citation needed] There are even some single-disk implementations of the RAID concept. [citation needed]

For the purpose of this article, it is best to assume that any system which employs the basic RAID concepts to combine physical
disk space for purposes of reliability, capacity, or performance is a RAID system.

RAID implementations

Hardware vs. software

The distribution of data across multiple disks can be managed by either dedicated hardware or by software. Additionally, there are
hybrid RAIDs that are partly software- and partly hardware-based.

With a software implementation, the operating system manages the disks of the array through the normal drive controller
(IDE/ATA, SATA, SCSI, Fibre Channel, etc.). With present CPU speeds, software RAID can be faster than hardware RAID[4], though
at the cost of using CPU power which might be best used for other tasks. One major exception is where the hardware
implementation of RAID incorporates a battery backed-up write back cache which can speed up an application, such as an OLTP
database server. In this case, the hardware RAID implementation flushes the write cache to secure storage to preserve data at a
known point if there is a crash. This cached approach is faster than going to the disk drives directly; it is limited instead by RAM speed, the rate
at which the cache can be mirrored to another controller, the amount of cache, and how fast the cache can be flushed to disk. For this
reason, battery-backed caching disk controllers are often recommended for high transaction rate database servers. In the same
situation, the software solution is limited to no more flushes than the number of rotations or seeks per second of the drives.
Another disadvantage of a pure software RAID is that, depending on the disk that fails and the boot arrangements in use, the
computer may not be able to be rebooted until the array has been rebuilt.

A hardware implementation of RAID requires at a minimum a special-purpose RAID controller. On a desktop system, this may be a
PCI expansion card, or might be a capability built in to the motherboard. In larger RAIDs, the controller and disks are usually
housed in an external multi-bay enclosure. The disks may be IDE/ATA, SATA, SCSI, Fibre Channel, or any combination thereof. The
controller links to the host computer(s) with one or more high-speed SCSI, PCIe, Fibre Channel or iSCSI connections, either
directly, or through a fabric, or is accessed as network-attached storage. This controller handles the management of the disks, and
performs parity calculations (needed for many RAID levels). This option tends to provide better performance, and makes operating
system support easier. Hardware implementations also typically support hot swapping, allowing failed drives to be replaced while
the system is running. In rare cases hardware controllers have become faulty, which can result in data loss.

Hybrid RAIDs have become very popular with the introduction of inexpensive hardware RAID controllers. The hardware is a normal disk controller that
has no RAID features, but there is a boot-time application that allows users to set up RAIDs that are controlled via the BIOS. When
any modern operating system is used, it will need specialized RAID drivers that will make the array look like a single block device.
Since these controllers actually do all calculations in software, not hardware, they are often called "fakeraids". Unlike software
RAID, these "fakeraids" typically cannot span multiple controllers.

Both hardware and software versions may support the use of a hot spare, a preinstalled drive which is used to immediately (and
almost always automatically) replace a drive that has failed. This reduces the mean time to repair period during which a second
drive failure in the same RAID redundancy group can result in loss of data.

Some software RAID systems allow one to build arrays from partitions instead of whole disks. Unlike Matrix RAID, they are not
limited to just RAID 0 and RAID 1, and not all partitions need to be part of a RAID.

Standard RAID levels

Main article: Standard RAID levels

A quick summary of the most commonly used RAID levels:

• RAID 0: Striped Set


• RAID 1: Mirrored Set
• RAID 5: Striped Set with Distributed Parity

Common nested RAID levels:

• RAID 01: A mirror of stripes


• RAID 10: A stripe of mirrors
• RAID 50: A stripe across distributed parity RAID systems
• RAID 100: A stripe of a stripe of mirrors

Nested RAID levels

Main article: Nested RAID levels

Many storage controllers allow RAID levels to be nested. That is, one RAID can use another as its basic element, instead of using
physical disks. It is instructive to think of these arrays as layered on top of each other, with physical disks at the bottom.

Nested RAIDs are usually signified by joining the numbers indicating the RAID levels into a single number, sometimes with a '+' in
between. For example, RAID 10 (or RAID 1+0) conceptually consists of multiple level 1 arrays stored on physical disks with a level
0 array on top, striped over the level 1 arrays. In the case of RAID 0+1, it is most often called RAID 0+1 as opposed to RAID 01 to
avoid confusion with RAID 1. However, when the top array is a RAID 0 (such as in RAID 10 and RAID 50), most vendors choose to
omit the '+', though RAID 5+0 is more informative.

Non-standard RAID levels

Main article: Non-standard RAID levels

Given the large number of custom configurations possible with a RAID array, many companies, organizations, and groups have
created their own non-standard configurations, typically designed to meet the needs of one or more small niche markets. Most of
these non-standard RAID levels are proprietary.

Some of the more prominent modifications are:

• ATTO Technology's DVRAID adds RAID protection to systems delivering high-definition audio and video
• Storage Computer Corporation's RAID 7 adds caching to RAID 3 and RAID 4 to improve performance
• EMC Corporation offers RAID S as an alternative to RAID 5 on their Symmetrix systems.
• RAID-Z in the ZFS filesystem of OpenSolaris solves the "write hole" problem of RAID 5.

What RAID Can and Cannot Do

This guide was taken from a thread in a RAID-related forum to help clarify the advantages and disadvantages to choosing RAID for
either increases in performance or redundancy. It contains links to other threads in its forum containing user-generated anecdotal
reviews of their RAID experiences.

What RAID Can Do

• RAID can protect uptime. RAID levels 1, 0+1/10, 5, and 6 (and their variants such as 50 and 51) allow a mechanical hard
disk to fail while keeping the data on the array accessible to users. Rather than being required to perform a time
consuming restore from tape, DVD, or other slow backup media, RAID allows data to be restored to a replacement disk
from the other members of the array, while being simultaneously available to users in a degraded state. This is of high
value to enterprises, as downtime quickly leads to lost earning power. For home users, it can protect uptime of large
media storage arrays, which would otherwise require time-consuming restoration from dozens of DVDs or quite a few tapes in the
event of a disk failure.

• RAID can increase performance in certain applications. RAID levels 0, and 5-6 all use variations on striping, which allows
multiple spindles to increase sustained transfer rates when conducting linear transfers. Workstation type applications that
work with large files, such as image and video editing applications, benefit greatly from disk striping. The extra throughput
offered by disk striping is also useful in disk-to-disk backup applications. Also, if RAID 1 or a striping-based RAID with a
sufficiently large block size is used, RAID can provide performance improvements for access patterns involving multiple
simultaneous random accesses (e.g., multi-user databases).

What RAID Cannot Do

• RAID cannot protect the data on the array. A RAID array has one file system. This creates a single point of failure. A RAID
array's file system is vulnerable to a wide variety of hazards other than physical disk failure, so RAID cannot defend
against these sources of data loss. RAID will not stop a virus from destroying data. RAID will not prevent corruption. RAID
will not save data from accidental modification or deletion by the user. RAID does not protect data from hardware failure
of any component besides physical disks. RAID does not protect data from natural or man-made disasters such as fires and
floods. To protect data, data must be backed up to removable media, such as DVD, tape, or an external hard drive, and
stored in an off site location. RAID alone will not prevent a disaster from turning into data loss. Disaster is not
preventable, but backups allow data loss to be prevented.

• RAID cannot simplify disaster recovery*. When running a single disk, the disk is usually accessible with a generic ATA or
SCSI driver built into most operating systems. However, most RAID controllers require specific drivers. Recovery tools that
work with single disks on generic controllers will require special drivers to access data on RAID arrays. If these recovery
tools are poorly coded and do not allow additional drivers to be loaded, then a RAID array will probably be inaccessible to
that recovery tool.

• RAID cannot provide a performance boost in all applications. This statement is especially true with typical desktop
application users and gamers. Most desktop applications and games place performance emphasis on the buffer strategy
and seek performance of the disk(s). Increasing raw sustained transfer rate shows little gains for desktop users and
gamers, as most files that they access are typically very small anyway. Disk striping using RAID 0 increases linear transfer
performance, not buffer and seek performance. As a result, disk striping using RAID 0 shows little to no performance gain
in most desktop applications and games, although there are exceptions. For desktop users and gamers with high
performance as a goal, it is better to buy a faster, bigger, and more expensive single disk than it is to run two
slower/smaller drives in RAID 0. Even running large, high-quality drives in RAID 0 is unlikely to boost performance more
than 10%, and performance may drop in some access patterns, particularly games.

• RAID is not readily moved to a new system*. When using a single disk, it is relatively straightforward to move the disk to a
new system. Simply connect it to the new system, provided it has the same interface available. However, this is not so
easy with a RAID array. A RAID BIOS must be able to read metadata from the array members in order to successfully
construct the array and make it accessible to an operating system. Since RAID controller makers use different formats for
their metadata (even controllers of different families from the same manufacturer may use incompatible metadata
formats) it is virtually impossible to move a RAID array to a different controller. When moving a RAID array to a new
system, plans should be made to move the controller as well. With the popularity of motherboard integrated RAID
controllers, this is extremely difficult to accomplish. Generally, it is possible to move the RAID array members and
controllers as a unit, and software RAID in Linux and Windows Server Products can also work around this limitation, but
software RAID has other limitations (mostly performance related).
• * RAID level 1 does partly circumvent these problems, since each of the disks in the array theoretically contains
exactly the same data as a single disk would if RAID were not used.

Reliability of RAID configurations

Failure rate
The mean time to failure (MTTF) or the mean time between failures (MTBF) of a given RAID may be lower or higher than
those of its constituent hard drives, depending on what type of RAID is employed...
Mean time to data loss (MTTDL)
In this context, the average time before a loss of data in a given array.
Mean time to recovery (MTTR)
In arrays that include redundancy for reliability, this is the time following a failure to restore an array to its normal failure-
tolerant mode of operation. This includes time to replace a failed disk mechanism as well as time to re-build the array (i.e.
to replicate data for redundancy).
Unrecoverable bit error rate (UBE)
This is the rate at which a disk drive will be unable to recover data after application of cyclic redundancy check (CRC)
codes and multiple retries. This failure will present as a sector read failure. Some RAID implementations protect against
this failure mode by remapping the bad sector, using the redundant data to retrieve a good copy of the data, and
rewriting that good data to the newly mapped replacement sector. The UBE rate is typically specified at 1 bit in 10^15 for
enterprise class disk drives (SCSI, FC, SAS), and 1 bit in 10^14 for desktop class disk drives (IDE, ATA, SATA). Increasing
disk capacities and large RAID 5 redundancy groups have led to an increasing inability to successfully rebuild a RAID
group after a disk failure because an unrecoverable sector is found on the remaining disks. Double protection schemes
such as RAID 6 are attempting to address this issue, but suffer from a very high write penalty.
Atomic Write Failure
Also known by various terms such as torn writes, torn pages, incomplete writes, interrupted writes, non-transactional, etc.
This is a little understood and rarely mentioned failure mode for redundant storage systems that do not utilize
transactional features. Database researcher Jim Gray wrote "Update in Place is a Poison Apple" during the early days of
relational database commercialization. However, this warning largely went unheeded and fell by the wayside upon the
advent of RAID, which many software engineers mistook as solving all data storage integrity and reliability problems.
Many software programs update a storage object "in-place"; that is, they write a new version of the object on to the same
disk addresses as the old version of the object. While the software may also log some delta information elsewhere, it
expects the storage to present "atomic write semantics," meaning that the write of the data either occurred in its entirety
or did not occur at all.
However, very few storage systems provide support for atomic writes, and even fewer specify their rate of failure in
providing this semantic. Note that during the act of writing an object, a RAID storage device will usually be writing all
redundant copies of the object in parallel, although overlapped or staggered writes are more common when a single RAID
processor is responsible for multiple disks. Hence an error that occurs during the process of writing may leave the
redundant copies in different states, and furthermore may leave the copies in neither the old nor the new state. The little
known failure mode is that delta logging relies on the original data being either in the old or the new state so as to enable
backing out the logical change, yet few storage systems provide an atomic write semantic on a RAID disk.
Since transactional support is not universally present in hardware RAID, many operating systems include transactional
support to protect against data loss during an interrupted write. Novell Netware, starting with version 3.x, included a
transaction tracking system. Microsoft introduced transaction tracking via the journalling feature in NTFS.
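
To make the unrecoverable bit error (UBE) rates quoted earlier in this section concrete, the sketch below estimates the chance of hitting at least one unrecoverable sector while reading every surviving disk in full during a rebuild. The drive count, disk size, and the assumption of independent bit errors are illustrative assumptions, not figures from this article.

# Illustrative sketch (assumed values): rough probability of at least one
# unrecoverable read error while rebuilding a RAID 5 redundancy group.
def rebuild_failure_probability(surviving_disks, disk_size_tb, ube):
    """Probability of at least one unrecoverable bit error while reading
    every surviving disk in full, assuming independent bit errors."""
    bits_read = surviving_disks * disk_size_tb * 1e12 * 8  # TB -> bits
    return 1 - (1 - ube) ** bits_read

# Example: six surviving 1 TB disks.
print(rebuild_failure_probability(6, 1.0, 1e-14))  # desktop-class UBE, roughly 0.38
print(rebuild_failure_probability(6, 1.0, 1e-15))  # enterprise-class UBE, roughly 0.05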

Standard RAID levels

Main article: RAID

The standard RAID levels are a basic set of RAID configurations and employ striping, mirroring, or parity. The standard RAID
levels can be nested for other benefits (see Nested RAID levels).

Contents

• 1 Error-correction codes
• 2 RAID 0
o 2.1 RAID 0 failure rate
o 2.2 RAID 0 performance
• 3 Concatenation (JBOD)
• 4 RAID 1
o 4.1 RAID 1 failure rate
o 4.2 RAID 1 performance
• 5 RAID 2
• 6 RAID 3
• 7 RAID 4
• 8 RAID 5
o 8.1 RAID 5 parity handling
o 8.2 RAID 5 disk failure rate
o 8.3 RAID 5 performance
o 8.4 RAID 5 usable size
• 9 RAID 6
o 9.1 RAID 6 performance
o 9.2 RAID 6 implementation
• 10 RAID 5E, RAID 5EE and RAID 6E

• 11 See also

Error-correction codes

For RAID 2 through 5 an error-correcting code is used to provide redundancy of the data.

For RAID 2, a Hamming code is used. For this level, extra disks are needed to store the error-correcting bits ("check disks"
according to Patterson, et al.).

The remaining levels use standard parity bits by using the XOR logical function. For example, given the following three bytes:

• A1 = 00000111
• A2 = 00000101
• A3 = 00000000

Taking the XOR of all of these yields:

Ap = A1 XOR A2 XOR A3 = 00000010

In terms of parity, the parity byte creates even parity. This means that the sum of 1s for each bit position yields an even number.
In this example, the 2nd bit position from the right has a single 1 while the 1st and 3rd positions have two 1s; once the parity byte
is included, each of these three bit positions contains an even number (two) of 1s.

The advantage of parity becomes apparent when one disk is lost (typically due to hardware failure). For example, say
the disk containing A2 is lost, leaving A1, A3, and Ap to reconstruct A2. This can be done by using the XOR operation again:

A2 = A1 XOR A3 XOR Ap = 00000101

This value clearly matches the above definition of A2. This process can then be repeated for the remainder of the data.

All of the above examples use only three data bytes and one parity byte. (This group is called a "stripe" in the remainder of this
article and is given the same color in the diagrams.) When used in conjunction with RAID, these operations happen on blocks (the
fundamental unit of storage in computer storage) instead of individual bytes.
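
A minimal sketch of the XOR parity scheme described above, using the same three example bytes; the variable names are illustrative only.

# XOR parity and reconstruction with the example bytes from the text.
a1 = 0b00000111
a2 = 0b00000101
a3 = 0b00000000

parity = a1 ^ a2 ^ a3                 # Ap
print(format(parity, '08b'))          # -> 00000010

# If the disk holding A2 is lost, XOR the survivors with the parity block:
recovered_a2 = a1 ^ a3 ^ parity
print(format(recovered_a2, '08b'))    # -> 00000101, matching the original A2
assert recovered_a2 == a2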

RAID 0
Diagram of a RAID 0 setup.

A RAID 0 (also known as a stripe set or striped volume) splits data evenly across two or more disks (striped) with no parity
information for redundancy. It is important to note that RAID 0 was not one of the original RAID levels and provides zero
redundancy. RAID 0 is normally used to increase performance, although it can also be used as a way to create a small number of
large virtual disks out of a large number of small physical ones.

A RAID 0 can be created with disks of differing sizes, but the storage space added to the array by each disk is limited to the size of
the smallest disk. For example, if a 120 GB disk is striped together with a 100 GB disk, the size of the array will be

Array size = 2 × 100 GB = 200 GB

In the diagram to the right, the odd blocks are written to disk 0 while the even blocks are written to disk 1, such that A1, A2, A3,
A4, ... would be the order of the blocks if read sequentially from the beginning.

RAID 0 failure rate

Although RAID 0 was not specified in the original RAID paper, an idealized implementation of RAID 0 would split I/O operations into
equal-sized blocks and spread them evenly across two disks. RAID 0 implementations with more than two disks are also possible,
however the group reliability decreases with member size.

Reliability of a given RAID 0 set is roughly equal to the average reliability of each disk divided by the number of disks in the set:

MTTF(group) ≈ MTTF(disk) / n

That is, reliability (as measured by mean time to failure (MTTF) or mean time between failures (MTBF)) is roughly inversely
proportional to the number of members, so a set of two disks is roughly half as reliable as a single disk. Put another way, the
probability of a failure is roughly proportional to the number of members: if a single disk has a 5% probability of dying within three
years, a two-disk array raises that probability to 1 − (1 − 0.05)^2 = 0.0975 = 9.75%. The reason for this is that the file system is
distributed across all disks. When a drive fails, the file system cannot cope with such a large loss of data and coherency since the
data is "striped" across all drives (the data cannot be recovered with the disk missing). Data can be recovered using special tools
(see data recovery); however, this data will be incomplete and most likely corrupt, and recovery of drive data is very costly and not
guaranteed.
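
A hedged sketch of the reliability estimate above: assuming each member disk fails independently with probability p over some period, the array survives only if every member survives, so the failure probability grows with the number of disks. The function name and values are illustrative.

def raid0_failure_probability(p_single, n_disks):
    # Array fails if any member fails; members assumed independent.
    return 1 - (1 - p_single) ** n_disks

print(raid0_failure_probability(0.05, 1))   # 0.05, a single disk
print(raid0_failure_probability(0.05, 2))   # 0.0975, the 9.75% quoted above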

RAID 0 performance

While the block size can technically be as small as a byte it is almost always a multiple of the hard disk sector size of 512 bytes.
This lets each drive seek independently when randomly reading or writing data on the disk. How much the drives act independently
depends on the access pattern from the filesystem level. For reads and writes that are larger than the stripe size, such as copying
files or video playback, the disks will be seeking to the same position on each disk, so the seek time of the array will be the same as
that of a single non-RAID drive. For reads and writes that are smaller than the stripe size, such as database access, the drives will be
able to seek independently. If the sectors accessed are spread evenly between the two drives then the apparent seek time of the
array will be half that of a single non-RAID drive (assuming identical disks in the array). The transfer speed of the array will be the
transfer speed of all the disks added together, limited only by the speed of the RAID controller. Note that these performance
scenarios are in the best case with optimal access patterns.
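
The round-robin layout implied by the diagram above can be sketched as a simple address mapping; the function name is an assumption and real implementations map whole stripe units rather than single blocks.

def raid0_locate(logical_block, n_disks):
    """Map a logical block number to (member disk, block within that disk)
    for an idealized RAID 0; logical block 0 corresponds to A1 above."""
    disk = logical_block % n_disks
    offset = logical_block // n_disks
    return disk, offset

# First six logical blocks on a two-disk stripe set alternate between disks:
for lb in range(6):
    print(lb, raid0_locate(lb, 2))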

RAID 0 is useful for setups such as large read-only NFS servers where mounting many disks is time-consuming or impossible and
redundancy is irrelevant. Another use is where the number of disks is limited by the operating system. In Microsoft Windows, the
number of drive letters for hard disk drives may be limited to 24, so RAID 0 is a popular way to use more disks. It is possible in
Windows 2000 Professional and newer to mount partitions under directories, much as in Unix, eliminating the need for a
partition to be assigned a drive letter. RAID 0 is also a popular choice for gaming systems where performance is desired, data
integrity is not very important, but cost is a consideration to most users. However, since data is shared between drives without
redundancy, hard drives cannot be swapped out as all disks are dependent upon each other.

NOTE: Some sites have stated that for home PCs, the speed advantages are debatable. [1][2]

Concatenation (JBOD)

Diagram of a JBOD setup with 3 unequally-sized disks

Although a concatenation of disks (also called JBOD, or "Just a Bunch Of Disks") is not one of the numbered RAID levels, it is a
popular method for combining multiple physical disk drives into a single virtual one. As the name implies, disks are merely
concatenated together, end to beginning, so they appear to be a single large disk.

Concatenation may be thought of as the reverse of partitioning. Whereas partitioning takes one physical drive and creates two or
more logical drives, JBOD uses two or more physical drives to create one logical drive.

In that it consists of an Array of Independent Disks (no redundancy), it can be thought of as a distant relation to RAID. JBOD is
sometimes used to turn several odd-sized drives into one larger useful drive, which cannot be done with RAID 0. For example,
JBOD could use a 3 GB, 15 GB, 5.5 GB, and 12 GB drive to combine into a logical drive at 35.5 GB, which is often more useful than
the individual drives separately.

In the diagram to the right, data is concatenated from the end of disk 0 (block A63) to the beginning of disk 1 (block A64); end of
disk 1 (block A91) to the beginning of disk 2 (block A92). If RAID 0 were used, then disk 0 and disk 2 would be truncated to 28
blocks (disk 1 contains 28 blocks) for a total size of 84 blocks.
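
Concatenation can be sketched as a simple address walk over the member sizes; the sizes below (in blocks) are illustrative and only loosely based on the diagram described above.

def jbod_locate(logical_block, disk_sizes):
    """Return (disk, block within that disk) for a concatenated volume."""
    for disk, size in enumerate(disk_sizes):
        if logical_block < size:
            return disk, logical_block
        logical_block -= size
    raise ValueError("logical block beyond end of concatenated volume")

# Three unequal members; 64, 28 and 36 blocks are assumed example sizes.
sizes = [64, 28, 36]
print(jbod_locate(63, sizes))   # (0, 63): last block of disk 0
print(jbod_locate(64, sizes))   # (1, 0):  first block of disk 1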

JBOD is similar to the widely used Logical Volume Manager (LVM) and Logical Storage Manager (LSM) in UNIX and UNIX-based
operating systems (OS). JBOD is useful for OSs which do not support LVM/LSM (like MS-Windows, although Windows Server 2003,
Windows XP Pro, and Windows 2000 support software JBOD, known as spanning dynamic disks). The difference between JBOD and
LVM/LSM is that the address remapping between the logical address of the concatenated device and the physical address of the
disk is done by the RAID hardware instead of the OS kernel, as it is with LVM/LSM.

One advantage JBOD has over RAID 0 is in the case of drive failure. Whereas in RAID 0, failure of a single drive will usually result
in the loss of all data in the array, in a JBOD array only the data on the affected drive is lost, and the data on surviving drives will
remain readable. However, JBOD does not carry the performance benefits which are associated with RAID 0. This does not address
file system coherency, as the loss of a large portion of the file system will likely render it inoperable, but raw data access to the
surviving disks will yield readable data because the data was not striped across the failed disk.

Note: Some RAID cards (e.g. 3ware) use JBOD to refer to configuring drives without RAID features, including concatenation. Each
drive shows up separately in the OS.
Note: Many Linux distributions refer to JBOD as "linear mode" or "append mode." The Mac OS X 10.4 implementation, called a
"Concatenated Disk Set", does not leave the user with any usable data on the remaining drives if one drive fails in a
"Concatenated Disk Set," although the disks do have the write performance described above.

RAID 1

A RAID 1 creates an exact copy (or mirror) of a set of data on two or more disks. This is useful when read performance or
reliability are more important than data capacity. Such an array can only be as big as the smallest member disk. A classic RAID 1
mirrored pair contains two disks (see diagram), which increases reliability exponentially over a single disk. Since each member
contains a complete copy of the data, and can be addressed independently, ordinary wear-and-tear reliability is raised by the power
of the number of self-contained copies.

RAID 1 failure rate

For example, consider a RAID 1 with two identical models of a disk drive with a weekly probability of failure of 1:500. Assuming
defective drives are replaced weekly, the installation would carry a 1:250,000 probability of failure for a given week. That is, the
likelihood that the RAID array is down due to mechanical failure during any given week is the product of the likelihoods of failure of
both drives. In other words, the probability of failure of each drive is 1 in 500, and if the failures are statistically independent, then
the probability of both drives failing is 1/500 × 1/500 = 1/250,000.
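
A one-line numeric check of this arithmetic (purely illustrative; the mirror needs both members to fail):

p_single = 1 / 500            # assumed weekly failure probability per drive
p_both = p_single * p_single  # both members failing in the same week
print(p_both)                 # 4e-06, i.e. 1 in 250,000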

This is purely theoretical, however; in practice the chance of failure is higher because drives are often manufactured at the same time
and subjected to the same stresses. If a failure is caused by an environmental problem, it is quite likely that the other drive will fail
shortly after the first.

RAID 1 performance

Additionally, since all the data exists in two or more copies, each with its own hardware, the read performance can go up roughly
as a linear multiple of the number of copies. That is, a RAID 1 array of two drives can be reading in two different places at the
same time, though not all implementations of RAID 1 do this[3]. To maximize performance benefits of RAID 1, independent disk
controllers are recommended, one for each disk. Some refer to this practice as splitting or duplexing. When reading, both disks
can be accessed independently and requested sectors can be split evenly between the disks. For the usual mirror of two disks this
would double the transfer rate. The apparent access time of the array would be half that of a single non-RAID drive. Unlike RAID 0
this would be for all access patterns as all the data is present on all the disks. Read performance can be further improved by
adding drives to the mirror. Three disks would give three times the throughput and one third the apparent seek time. The only
limit is how many disks can be connected to the controller and its maximum transfer speed. Many older IDE RAID 1 cards read
from one disk in the pair, so their read performance is that of a single disk. Some older RAID 1 implementations would also read
both disks simultaneously and compare the data to catch errors. The error detection and correction on modern disks makes this
less useful in environments requiring normal commercial availability. When writing, the array performs like a single disk as all
mirrors must be written with the data. Note that these performance scenarios are in the best case with optimal access patterns.

RAID 1 has many administrative advantages. For instance, in some 365/24 environments, it is possible to "Split the Mirror":
declare one disk as inactive, do a backup of that disk, and then "rebuild" the mirror. This requires that the application support
recovery from the image of data on the disk at the point of the mirror split. This procedure is less critical in the presence of the
"snapshot" feature of some filesystems, in which some space is reserved for changes, presenting a static point-in-time view of the
filesystem. Alternatively, a set of disks can be kept in much the same way as traditional backup tapes are.

RAID 2

A RAID 2 stripes data at the bit (rather than block) level, and uses a Hamming code for error correction. The disks are
synchronized by the controller to spin in perfect tandem. This is the only original level of RAID that is not currently used. Extremely
high data transfer rates are possible.

The use of the Hamming(7,4) code also permits using 7 disks in RAID 2, with 4 being used for data storage and 3 being used for
error correction.
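
As an illustration of how 4 data bits yield 3 check bits in a Hamming(7,4) code, here is a minimal encoder sketch. The bit ordering follows one common textbook convention and is an assumption here; actual RAID 2 hardware may have arranged bits differently.

def hamming74_encode(d1, d2, d3, d4):
    """Encode 4 data bits into a 7-bit Hamming codeword (one bit per disk)."""
    p1 = d1 ^ d2 ^ d4          # parity over codeword positions 1, 3, 5, 7
    p2 = d1 ^ d3 ^ d4          # parity over codeword positions 2, 3, 6, 7
    p3 = d2 ^ d3 ^ d4          # parity over codeword positions 4, 5, 6, 7
    # Codeword layout, positions 1..7: p1 p2 d1 p3 d2 d3 d4
    return [p1, p2, d1, p3, d2, d3, d4]

print(hamming74_encode(1, 0, 1, 1))    # -> [0, 1, 1, 0, 0, 1, 1]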

RAID 3

Diagram of a RAID 3 setup of 6-byte blocks and two parity bytes, shown are two blocks of data (orange and green)

A RAID 3 uses byte-level striping with a dedicated parity disk. RAID 3 is very rare in practice. One of the side-effects of RAID 3 is
that it generally cannot service multiple requests simultaneously. This comes about because any single block of data will, by
definition, be spread across all members of the set and will reside in the same location. So, any I/O operation requires activity on
every disk.

In our example above, a request for block "A" consisting of bytes A1-A6 would require all three data disks to seek to the beginning
(A1) and reply with their contents. A simultaneous request for block B would have to wait.

RAID 4

Diagram of a RAID 4 setup with a dedicated parity disk 3 with each color representing the stripe of blocks in the respective parity
block

A RAID 4 uses block-level striping with a dedicated parity disk. RAID 4 looks similar to RAID 5 except that it does not use
distributed parity, and similar to RAID 3 except that it stripes at the block, rather than the byte level. This allows each member of
the set to act independently when only a single block is requested. If the disk controller allows it, a RAID 4 set can service multiple
read requests simultaneously.

In our example, a request for block "A1" would be serviced by disk 1. A simultaneous request for block B1 would have to wait, but
a request for B2 could be serviced concurrently.

RAID 5

Diagram of a RAID 5 setup with distributed parity with each color representing the group of blocks in the respective parity block (a
stripe)

A RAID 5 uses block-level striping with parity data distributed across all member disks. RAID 5 has achieved popularity due to its
low cost of redundancy. Generally, RAID 5 is implemented with hardware support for parity calculations. A minimum of 3 disks is
generally required for a complete RAID 5 configuration. (A two-disk RAID 5 set is possible, but many implementations do not allow
for it; in some implementations a degraded set can be made, i.e. a 3-disk set of which only 2 disks are online.)

In the example above, a read request for block "A1" would be serviced by disk 1. A simultaneous read request for block B1 would
have to wait, but a read request for B2 could be serviced concurrently.
RAID 5 parity handling

Every time a block is written to a disk in a RAID 5, a parity block is generated within the same stripe. A block is often composed of
many consecutive sectors on a disk. A series of blocks (a block from each of the disks in an array) is collectively called a "stripe". If
another block, or some portion of a block, is written on that same stripe, the parity block (or some portion of the parity block) is
recalculated and rewritten. For small writes, this requires reading the old data and the old parity, then writing the new data and the new parity.
The disk used for the parity block is staggered from one stripe to the next, hence the term "distributed parity blocks". RAID 5
writes are expensive in terms of disk operations and traffic between the disks and the controller.
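
The small-write parity update described above can be sketched as follows; the single-byte "blocks", variable names and function name are illustrative only.

def raid5_small_write_parity(old_data, old_parity, new_data):
    # XOR out the old data's contribution, XOR in the new data's contribution;
    # no other block in the stripe needs to be read.
    return old_parity ^ old_data ^ new_data

# Example with single-byte "blocks":
old_data, other_data = 0b10101010, 0b01100011
old_parity = old_data ^ other_data
new_data = 0b11110000
new_parity = raid5_small_write_parity(old_data, old_parity, new_data)
assert new_parity == new_data ^ other_data   # same as recomputing the whole stripe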

The parity blocks are not read on data reads, since this would be unnecessary overhead and would diminish performance. The
parity blocks are read, however, when a read of a data sector results in a cyclic redundancy check (CRC) error. In this case, the
sector in the same relative position within each of the remaining data blocks in the stripe and within the parity block in the stripe
are used to reconstruct the errant sector. The CRC error is thus hidden from the main computer. Likewise, should a disk fail in the
array, the parity blocks from the surviving disks are combined mathematically with the data blocks from the surviving disks to
reconstruct the data on the failed drive "on the fly".

This is sometimes called Interim Data Recovery Mode. The computer knows that a disk drive has failed, but this is only so that the
operating system can notify the administrator that a drive needs replacement; applications running on the computer are unaware
of the failure. Reading and writing to the drive array continues seamlessly, though with some performance degradation. The
difference between RAID 4 and RAID 5 is that, in interim data recovery mode, RAID 5 might be slightly faster than RAID 4: for the
stripes whose parity block was on the failed disk, no reconstruction has to be performed, whereas with RAID 4, if one of the data
disks fails, the reconstruction has to be performed on every access.

In RAID 5, where there is a single parity block per stripe, the failure of a second drive results in total data loss. The Master Boot
Record (MBR) table, however, is written separately to each of the physical drives.

RAID 5 disk failure rate

The maximum number of drives in a RAID 5 redundancy group is theoretically unlimited, but it is common practice to limit the
number of drives. The tradeoffs of larger redundancy groups are greater probability of a simultaneous double disk failure, the
increased time to rebuild a redundancy group, and the greater probability of encountering an unrecoverable sector during RAID
reconstruction. As the number of disks in a RAID 5 group increases, the MTBF (failure rate) can become lower than that of a single
disk. This happens when the likelihood of a second disk failing out of (N-1) dependent disks, within the time it takes to detect,
replace and recreate a first failed disk, becomes larger than the likelihood of a single disk failing. RAID 6 is an alternative that
provides dual parity protection thus enabling larger numbers of disks per RAID group.

Some RAID vendors will avoid placing disks from the same manufacturing lot in a redundancy group to minimize the odds of
simultaneous early life and end of life failures as evidenced by the bathtub curve.

RAID 5 performance

RAID 5 implementations suffer from poor performance when faced with a workload which includes many writes which are smaller
than the capacity of a single stripe; this is because parity must be updated on each write, requiring read-modify-write sequences
for both the data block and the parity block. More complex implementations often include non-volatile write back cache to reduce
the performance impact of incremental parity updates.

The read performance of RAID 5 is almost as good as that of RAID 0 for the same number of disks. If the parity blocks are ignored, the
on-disk layout looks exactly like that of RAID 0. The reason RAID 5 is slightly slower is that the disks must skip over the parity blocks.

In the event of a system failure while there are active writes, the parity of a stripe may become inconsistent with the data. If this is
not detected and repaired before a disk or block fails, data loss may ensue as incorrect parity will be used to reconstruct the
missing block in that stripe. This potential vulnerability is sometimes known as the "write hole." Battery-backed cache and other
techniques are commonly used to reduce the window of vulnerability of this occurring.

RAID 5 usable size

The user capacity of a RAID 5 array is Smin × (N − 1), where N is the total number of drives in the array, Si is the capacity of the
ith drive, and Smin is the capacity of the smallest drive in the array.
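
A quick sketch of that capacity rule; the function name and drive sizes are illustrative.

def raid5_usable_capacity(drive_sizes_gb):
    """Usable capacity of a RAID 5 set: (N - 1) times the smallest member."""
    n = len(drive_sizes_gb)
    return (n - 1) * min(drive_sizes_gb)

print(raid5_usable_capacity([500, 500, 500]))   # 1000 GB from three 500 GB disks
print(raid5_usable_capacity([500, 500, 750]))   # still 1000 GB; the 750 GB disk is truncated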

RAID 6

Diagram of a RAID 6 setup which is just like RAID 5 but with two parity blocks instead of one
A RAID 6 extends RAID 5 by adding an additional parity block, thus it uses block-level striping with two parity blocks distributed
across all member disks. It was not one of the original RAID levels.

RAID 5 can be seen as a special case of a Reed-Solomon code[4]. RAID 5, being a degenerate case, requires only addition in the
Galois field. Since we are operating on bits, the field used is the binary Galois field GF(2). In cyclic representations of binary
Galois fields, addition is computed by a simple XOR.

After understanding RAID 5 as a special case of a Reed-Solomon code, it is easy to see that it is possible to extend the approach to
produce more redundancy simply by producing another syndrome, typically a polynomial in GF(2^8) (the exponent 8 meaning we are
operating on bytes). By adding additional syndromes it is possible to achieve any number of redundant disks, and recover from the
failure of that many drives anywhere in the array, but RAID 6 refers to the specific case of two syndromes.
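
The two-syndrome idea can be sketched as below. The generator g = 2 and the reduction polynomial 0x11d are one common RAID 6 convention (used, for example, by the Linux kernel implementation) and are assumptions here; reconstruction from the syndromes is omitted.

def gf_mul2(x):
    """Multiply a GF(2^8) element by g = 2, modulo x^8 + x^4 + x^3 + x^2 + 1."""
    x <<= 1
    if x & 0x100:
        x ^= 0x11d
    return x & 0xFF

def raid6_syndromes(data_bytes):
    """Return (P, Q) for one byte column across data disks D0..Dn-1,
    where P is plain XOR parity and Q = sum over i of g**i * D[i]."""
    p = 0
    q = 0
    for d in reversed(data_bytes):   # Horner's rule for the Q polynomial
        p ^= d
        q = gf_mul2(q) ^ d
    return p, q

print(raid6_syndromes([0x07, 0x05, 0x00]))   # -> (0x02, 0x0d)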

Like RAID 5 the parity is distributed in stripes, with the parity blocks in a different place in each stripe.

RAID 6 performance

RAID 6 is inefficient when used with a small number of drives, but as arrays become bigger and have more drives, the loss in
storage capacity becomes less important and the probability of two disks failing at once becomes greater. RAID 6 provides protection
against double disk failures and failures while a single disk is rebuilding. In the case where there is only one array it may make
more sense than having a hot spare disk.

The user capacity of a RAID 6 array is Smin × (N − 2), where N is the total number of drives in the array, Si is the capacity of the
ith drive, and Smin is the capacity of the smallest drive in the array.

RAID 6 does not have a performance penalty for read operations, but it does have a performance penalty on write operations due
to the overhead associated with the additional parity calculations. This penalty can be minimized by coalescing writes into fewer
stripes, which can be achieved by a Write Anywhere File Layout.

RAID 6 implementation

According to SNIA (Storage Networking Industry Association), the definition of RAID 6 is: "Any form of RAID that can continue to
execute read and write requests to all of a RAID array's virtual disks in the presence of any two concurrent disk failures. Several
methods, including dual check data computations (parity and Reed Solomon), orthogonal dual parity check data and diagonal parity
have been used to implement RAID Level 6."

RAID 5E, RAID 5EE and RAID 6E

RAID 5E, RAID 5EE and RAID 6E generally refer to variants of RAID 5 or RAID 6 with online (hot) spare drives, where the spare
drives are an active part of the block rotation scheme. This allows the I/O to be spread across all drives, including the spare, thus
reducing the I/O bandwidth per drive, allowing for higher performance. It does, however, mean that a spare drive cannot be shared
among multiple arrays, which is occasionally desirable.

In RAID 5E, RAID 5EE and RAID 6E, there is no dedicated "spare drive", just like there is no dedicated "parity drive" in RAID 5 or
RAID 6. Instead, the spare blocks are distributed across all the drives, so that in a 10-disk RAID 5E with one spare, each and every
disk is 80% data, 10% parity, and 10% spare. The spare blocks in RAID 5E and RAID 6E are at the end of the array, while in RAID
5EE the spare blocks are integrated into the array. RAID 5EE level can sustain a single drive failure. RAID 5EE requires at least four
disks and can expand up to 16 disks.

Nested RAID levels

Main article: RAID

To gain performance and/or additional redundancy the Standard RAID levels can be combined to create hybrid or Nested RAID
levels.

Contents

• 1 Nesting
• 2 RAID 0+1
• 3 RAID 10
• 4 RAID 0+3 and 3+0
o 4.1 RAID 0+3
o 4.2 RAID 30
• 5 RAID 100 (RAID 10+0)
• 6 RAID 50 (RAID 5+0)
• 7 RAID 60 (RAID 6+0)

• 8 See also

Nesting

When nesting RAID levels, a RAID type that provides redundancy is typically combined with RAID 0 to boost performance. With
these configurations it is preferable to have RAID 0 on top and the redundant array at the bottom, because fewer disks then need
to be regenerated when a disk fails. (Thus, RAID 10 is preferable to RAID 0+1 but the administrative advantages of "splitting the
mirror" of RAID 1 would be lost).

RAID 0+1

Block diagram of a RAID 0+1 setup.

A RAID 0+1 (also called RAID 01, not to be confused with RAID 1), is a RAID used for both replicating and sharing data among
disks. The difference between RAID 0+1 and RAID 1+0 is the location of each RAID system — RAID 0+1 is a mirror of stripes.
Consider an example of RAID 0+1: six 120 GB drives need to be set up on a RAID 0+1. Below is an example where two 360 GB
level 0 arrays are mirrored, creating 360 GB of total storage space:

RAID 1
.--------------------------.
| |
RAID 0 RAID 0
.-----------------. .-----------------.
| | | | | |
120 GB 120 GB 120 GB 120 GB 120 GB 120 GB
A1 A2 A3 A1 A2 A3
A4 A5 A6 A4 A5 A6
A7 A8 A9 A7 A8 A9
A10 A11 A12 A10 A11 A12
Note: A1, A2, et cetera each represent one data block; each column represents one disk.

The maximum storage space here is 360 GB, spread across two arrays. The advantage is that when a hard drive fails in one of the
level 0 arrays, the missing data can be transferred from the other array. However, adding an extra hard drive to one stripe requires
adding an additional hard drive to the other stripe to balance out storage between the arrays.

It is not as robust as RAID 10 and cannot tolerate two simultaneous disk failures, unless the second failed disk is from the same
stripe as the first. That is, once a single disk fails, each of the disks in the other stripe is a single point of failure. Also, once
the single failed mechanism is replaced, in order to rebuild its data all the disks in the array must participate in the rebuild.

The exception to this is if all the disks are hooked up to the same RAID controller, in which case the controller can do the same error
recovery as RAID 10 as it can still access the functional disks in each RAID 0 set. If you compare the diagrams between RAID 0+1
and RAID 10 and ignore the lines above the disks you will see that all that's different is that the disks are swapped around. If the
controller has a direct link to each disk it can do the same. In this one case there is no difference between RAID 0+1 and RAID 10.

With increasingly larger capacity disk drives (driven by serial ATA drives), the risk of drive failure is increasing. Additionally, bit
error correction technologies have not kept up with rapidly rising drive capacities, resulting in higher risks of encountering media
errors. In the case where a failed drive is not replaced in a RAID 0+1 configuration, a single uncorrectable media error occurring on
the mirrored hard drive would result in data loss.

Given these increasing risks with RAID 0+1, many business and mission critical enterprise environments are beginning to evaluate
more fault tolerant RAID setups that add underlying disk parity. Among the most promising are hybrid approaches such as RAID
0+1+5 (mirroring above single parity) or RAID 0+1+6 (mirroring above dual parity).

RAID 10

Diagram of a RAID 10 setup.

A RAID 10, sometimes called RAID 1+0 or RAID 1&0, is similar to a RAID 0+1 with the exception that the RAID levels used are
reversed — RAID 10 is a stripe of mirrors. Below is an example where three collections of 120 GB level 1 arrays are striped
together to make 360 GB of total storage space:

RAID 0
.-----------------------------------.
| | |
RAID 1 RAID 1 RAID 1
.--------. .--------. .--------.
| | | | | |
120 GB 120 GB 120 GB 120 GB 120 GB 120 GB
A1 A1 A2 A2 A3 A3
A4 A4 A5 A5 A6 A6
A7 A7 A8 A8 A9 A9
A10 A10 A11 A11 A12 A12
Note: A1, A2, et cetera each represent one data block; each column represents one disk.

All but one drive from each RAID 1 set could fail without damaging the data. However, if the failed drive is not replaced, the single
working hard drive in the set then becomes a single point of failure for the entire array. If that single hard drive then fails, all data
stored in the entire array is lost. As is the case with RAID 0+1, if a failed drive is not replaced in a RAID 10 configuration then a
single uncorrectable media error occurring on the mirrored hard drive would result in data loss. Some RAID 10 vendors address
this problem by supporting a "hot spare" drive, which automatically replaces and rebuilds a failed drive in the array.

Given these increasing risks with RAID 10, many business and mission critical enterprise environments are beginning to evaluate
more fault tolerant RAID setups that add underlying disk parity. Among the most promising are hybrid approaches such as RAID
0+1+5 (mirroring above single parity) or RAID 0+1+6 (mirroring above dual parity).

RAID 10 is often the primary choice for high-load databases, because the lack of parity to calculate gives it faster write speeds.

RAID 10 Capacity: (Size of Smallest Drive) * (Number of Drives) / 2

The Linux kernel RAID10 implementation (from version 2.6.9 onwards) is not nested. The mirroring and striping are done in one
process. Only certain layouts are standard RAID 10, with the rest being proprietary. See the Linux MD RAID 10 section in the Non-
standard RAID article for details.
RAID 0+3 and 3+0

RAID 0+3

Diagram of a 0+3 array

RAID level 0+3 or RAID level 03 is a dedicated parity array across striped disks. Each block of data at the RAID 3 level is
broken up amongst RAID 0 arrays where the smaller pieces are striped across disks.

RAID 30

RAID level 30 is also known as striping of dedicated parity arrays. It is a combination of RAID level 3 and RAID level 0. RAID 30
provides high data transfer rates, combined with high data reliability. RAID 30 is best implemented on two RAID 3 disk arrays with
data striped across both disk arrays. RAID 30 breaks up data into smaller blocks, and then stripes the blocks of data to each RAID
3 raid set. RAID 3 breaks up data into smaller blocks, calculates parity by performing an Exclusive OR on the blocks, and then
writes the blocks to all but one drive in the array. The parity bit created using the Exclusive OR is then written to the last drive in
each RAID 3 array. The size of each block is determined by the stripe size parameter, which is set when the RAID is created.

Advantages

One drive from each of the underlying RAID 3 sets can fail. Until the failed drives are replaced the other drives in the sets that
suffered such a failure are a single point of failure for the entire RAID 30 array. In other words, if one of those drives fails, all data
stored in the entire array is lost. The time spent in recovery (detecting and responding to a drive failure, and the rebuild process to
the newly inserted drive) represents a period of vulnerability to the RAID set.

Offers highest level of redundancy and performance

Disadvantages

Very costly to implement

/------/------/------/------> RAID CONTROLLER <------\-------\------\------\


| | | | | | | |
disk1 disk2 disk3 disk4 disk5 disk6 disk7 disk 8
| | | | | | | |
A1 A2 A3 P A4 A5 A6 P1
| | | | | | | |
A7 A8 A9 P A10 A11 A12 P1
| | | | | | | |
A13 A14 A15 P A16 A17 A18 P1
-------RAID 3--------- ----------RAID 3---------
----------------------- RAID 0 ----------------------------

RAID 100 (RAID 10+0)

A RAID 100, sometimes also called RAID 10+0, is a stripe of RAID 10s. RAID 100 is an example of plaid RAID, a RAID in which
striped RAIDs are themselves striped together. Below is an example in which two sets of four 120 GB RAID 1 arrays are striped and
re-striped to make 480 GB of total storage space:

RAID 0
.-------------------------------------.
| |
RAID 0 RAID 0
.-----------------. .-----------------.
| | | |
RAID 1 RAID 1 RAID 1 RAID 1
.--------. .--------. .--------. .--------.
| | | | | | | |
120 GB 120 GB 120 GB 120 GB 120 GB 120 GB 120 GB 120 GB
A1 A1 A2 A2 A3 A3 A4 A4
A5 A5 A6 A6 A7 A7 A8 A8
B1 B1 B2 B2 B3 B3 B4 B4
B5 B5 B6 B6 B7 B7 B8 B8
Note: A1, B1, et cetera each represent one data sector; each column represents one disk.

All but one drive from each RAID 1 set could fail without loss of data. However, the remaining disk from the RAID 1 becomes a
single point of failure for the already degraded array. Often the top level stripe is done in software. Some vendors call the top level
stripe a MetaLun (Logical Unit Number (LUN)), or a Soft Stripe.

The major benefits of RAID 100 (and plaid RAID in general) over single-level RAID are better random read performance and the
mitigation of hotspot risk on the array. For these reasons, RAID 100 is often the best choice for very large databases, where the
underlying array software limits the amount of physical disks allowed in each standard array. Implementing nested RAID levels
allows virtually limitless spindle counts in a single logical volume.

RAID 50 (RAID 5+0)

A RAID 50 combines the straight block-level striping of RAID 0 with the distributed parity of RAID 5. This is a RAID 0 array striped
across RAID 5 elements.

Below is an example where three collections of 240 GB RAID 5s are striped together to make 720 GB of total storage space:

RAID 0
.-----------------------------------------------------.
| | |
RAID 5 RAID 5 RAID 5
.-----------------. .-----------------. .-----------------.
| | | | | | | | |
120 GB 120 GB 120 GB 120 GB 120 GB 120 GB 120 GB 120 GB 120 GB
A1 A2 Ap A3 A4 Ap A5 A6 Ap
B1 Bp B2 B3 Bp B4 B5 Bp B6
Cp C1 C2 Cp C3 C4 Cp C5 C6
D1 D2 Dp D3 D4 Dp D5 D6 Dp
Note: A1, B1, et cetera each represent one data block; each column represents one disk; Ap, Bp,
et cetera each represent parity information for each distinct RAID 5 and may represent different
values across the RAID 5 (that is, Ap for A1 and A2 can differ from Ap for A3 and A4).

One drive from each of the RAID 5 sets could fail without loss of data. However, if the failed drive is not replaced, the remaining
drives in that set then become a single point of failure for the entire array. If one of those drives fails, all data stored in the entire
array is lost. The time spent in recovery (detecting and responding to a drive failure, and the rebuild process to the newly inserted
drive) represents a period of vulnerability to the RAID set.

In the example below, datasets may be striped across both RAID sets. A dataset with 5 blocks would have 3 blocks written to the
first RAID set, and the next 2 blocks written to RAID set 2.

RAID Set 1 RAID Set 2


A1 A2 A3 Ap A4 A5 A6 Ap
B1 B2 Bp B3 B4 B5 Bp B6
C1 Cp C2 C3 C4 Cp C5 C6
Dp D1 D2 D3 Dp D4 D5 D6

The configuration of the RAID sets will impact the overall fault tolerance. A construction of three seven-drive RAID 5 sets has
higher capacity and storage efficiency, but can tolerate at most three drive failures (no more than one per set). Because the reliability of the
system depends on quick replacement of the bad drive so the array can rebuild, it is common to construct three six-drive RAID 5
sets each with a hot spare that can immediately start rebuilding the array on failure. This does not address the issue that the array
is put under maximum strain reading every bit to rebuild the array precisely at the time when it is most vulnerable. A construction
of seven three-drive RAID 5 sets can handle as many as seven drive failures but has lower capacity and storage efficiency.
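
The capacity and fault-tolerance trade-off described above can be sketched numerically; the drive counts and sizes are illustrative, and "tolerated failures" here means at most one failure per RAID 5 set.

def raid50_summary(sets, drives_per_set, drive_size_gb):
    """Usable capacity and maximum tolerable failures for a RAID 50 layout,
    assuming identical drives and no more than one failure in any one set."""
    usable = sets * (drives_per_set - 1) * drive_size_gb
    max_tolerated = sets
    return usable, max_tolerated

print(raid50_summary(3, 7, 1000))   # (18000, 3): three 7-drive sets, 21 drives
print(raid50_summary(7, 3, 1000))   # (14000, 7): seven 3-drive sets, 21 drives
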
RAID 50 improves upon the performance of RAID 5 particularly during writes[citation needed], and provides better fault tolerance than a
single RAID level does. This level is recommended for applications that require high fault tolerance, capacity and random
positioning performance.

As the number of drives in a RAID set increases and the capacity of the drives increases, the fault-recovery time increases
correspondingly, since rebuilding the RAID set takes longer.

RAID 60 (RAID 6+0)

A RAID 60 combines the straight block-level striping of RAID 0 with the distributed double parity of RAID 6. That is, a RAID 0 array
striped across RAID 6 elements. It requires at least 8 disks.

Below is an example where two collections of 240 GB RAID 6s are striped together to make 480 GB of total storage space:

RAID 0
.------------------------------------.
| |
RAID 6 RAID 6
.--------------------------. .--------------------------.
| | | | | | | |
120 GB 120 GB 120 GB 120 GB 120 GB 120 GB 120 GB 120 GB
A1 A2 Aq Ap A3 A4 Aq Ap
B1 Bq Bp B2 B3 Bq Bp B4
Cq Cp C1 C2 Cq Cp C3 C4
Dp D1 D2 Dq Dp D3 D4 Dq

As it is based on RAID 6, two disks from each of the RAID 6 sets could fail without loss of data.Also failures while a single disk is
rebuilding in one RAID 6 set will not lead to data loss.RAID 60 has improved fault tolerance and it is quite impossible to lose data
as more than half of the disks must fail in the above example in order to lose data.

Striping helps to increase capacity and performance without adding disks to each RAID 6 set (which would decrease data
availability and could impact performance). RAID 60 improves upon the performance of RAID 6. It is slightly worse than RAID 50 in
terms of writes, due to the added overhead of more parity calculations, but may be slightly faster in random reads because data is
spread over at least one more disk per RAID 6 set. When data security is a concern, this performance penalty is negligible.
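
A quick Python sanity check of the capacity and fault-tolerance figures for the example above (a sketch under the stated
2 x 4 x 120 GB configuration; the function name is invented for illustration):

# Usable capacity of a RAID 60 array: each RAID 6 set gives up two disks to parity.
def raid60_usable_gb(sets, disks_per_set, disk_gb):
    return sets * (disks_per_set - 2) * disk_gb

print(raid60_usable_gb(sets=2, disks_per_set=4, disk_gb=120))   # 480

# Fault tolerance: data is lost only when one RAID 6 set loses a third disk,
# so the minimum number of failures that can cause data loss is 3,
# while in the best case two failures per set (4 of the 8 disks) are survivable.
min_failures_for_loss = 2 + 1
max_survivable_failures = 2 * 2
print(min_failures_for_loss, max_survivable_failures)           # 3 4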

Non-standard RAID levels

Although all implementations of RAID differ from the idealized specification to some extent, some companies have developed non-
standard RAID implementations that differ substantially from the rest of the crowd. Most of these are proprietary. Below is a
detailed description of the most common custom, specialized arrays that have been claimed by various organizations.


Double parity
Diagram of a RAID DP (Double Parity) setup.

One common addition to the existing RAID levels is double parity, sometimes implemented and known as diagonal parity[1]. As
in RAID 6, two sets of parity check information are created. Unlike RAID 6, however, the second set is not another set of points in
the overdefined polynomial which characterizes the data. Rather, double parity calculates the extra parity against a different group
of blocks. For example, in the diagrams above, both RAID 5 and RAID 6 calculate parity against all of the A-lettered blocks to
produce one or more parity blocks. Since it is fairly easy to calculate parity against multiple groups of blocks, one can compute
parity both against all of the A-lettered blocks and against a permuted group of blocks.

This is more easily illustrated using RAID 4, Twin Syndrome RAID 4 (RAID 6 with a RAID 4 layout which is not actually
implemented), and double parity RAID 4.

Traditional RAID 4      Twin Syndrome RAID 4      Double parity RAID 4
A1 A2 A3 Ap             A1 A2 A3 Ap Aq            A1 A2 A3 Ap 1n
B1 B2 B3 Bp             B1 B2 B3 Bp Bq            B1 B2 B3 Bp 2n
C1 C2 C3 Cp             C1 C2 C3 Cp Cq            C1 C2 C3 Cp 3n
D1 D2 D3 Dp             D1 D2 D3 Dp Dq            D1 D2 D3 Dp 4n
Note: A1, B1, et cetera each represent one data block; each column represents one disk.

The n blocks are the double parity blocks. The block 2n would be calculated as A2 xor B3 xor Cp, while 3n would be calculated as
A3 xor Bp xor C1, and 1n would be calculated as A1 xor B2 xor C3. Because the double parity blocks are correctly distributed, it is
possible to reconstruct two lost data disks through iterative recovery. For example, if disks 1 and 2 both fail, B2 can still be
recovered without using any of the lost disk-1 or disk-2 blocks: first compute A2 = B3 xor Cp xor 2n, then A1 = A2 xor A3 xor Ap,
and finally B2 = A1 xor C3 xor 1n.

Running in degraded mode with a double parity system is not advised.
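
A minimal Python sketch of that recovery chain, using small integers to stand in for data blocks and bitwise xor for the parity
arithmetic (identifiers such as n1 and n2 are simply renamings of the 1n and 2n blocks above; everything else follows the diagram):

# Data blocks are modelled as plain integers; xor over integers stands in
# for the bytewise xor a real controller would perform.
A1, A2, A3 = 0x11, 0x22, 0x33
B1, B2, B3 = 0x44, 0x55, 0x66
C1, C2, C3 = 0x77, 0x88, 0x99

# Row parity (the p disk), as in ordinary RAID 4.
Ap = A1 ^ A2 ^ A3
Bp = B1 ^ B2 ^ B3
Cp = C1 ^ C2 ^ C3

# Diagonal (double) parity (the n disk), per the formulas above.
n1 = A1 ^ B2 ^ C3
n2 = A2 ^ B3 ^ Cp
n3 = A3 ^ Bp ^ C1

# Suppose disks 1 and 2 fail, losing A1, B1, C1 and A2, B2, C2.
# Iterative recovery using only surviving blocks:
A2_rec = B3 ^ Cp ^ n2       # from 2n = A2 xor B3 xor Cp
A1_rec = A2_rec ^ A3 ^ Ap   # from the row parity of the A stripe
B2_rec = A1_rec ^ C3 ^ n1   # from 1n = A1 xor B2 xor C3

assert (A1_rec, A2_rec, B2_rec) == (A1, A2, B2)

Running the sketch, the assertion holds: the lost blocks are rebuilt purely from surviving disks, exactly as in the chain described
above.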

DVRAID

DVRAID is a proprietary RAID level created by ATTO Technology for the digital video and audio (DVA) markets. Much of that market
uses unprotected storage (RAID 0) because of speed issues when editing high-definition video. While RAID 0 provides great
performance, it does not provide protection from a disk failure: if a video editor loses a single drive under RAID 0, all of the edits
and possibly even the source material are gone forever. ATTO created a modified parity-protected RAID using scatter/gather and
caching technologies, which allows those who are used to RAID 0 for editing to enjoy the same performance while adding parity
protection to that storage. DVRAID provides the performance to support multiple streams of high-definition video plus alpha-
channel titles, as well as featuring latency support for audio editing applications.

RAID 1.5

RAID 1.5 is a proprietary RAID level from HighPoint and is sometimes incorrectly called RAID 15. From the limited information
available, it appears to be simply a correct implementation of RAID 1: when reading, the data is read from both disks
simultaneously, and most of the work is done in hardware instead of the driver.

RAID 7

RAID 7 is a trademark of Storage Computer Corporation. It adds caching to RAID 3 or RAID 4 to improve performance.[citation needed]

RAID S or Parity RAID

RAID S is EMC Corporation's proprietary striped parity RAID system used in their Symmetrix storage systems. Each volume exists
on a single physical disk, and multiple volumes are arbitrarily combined for parity purposes. EMC originally referred to this
capability as RAID S, and then renamed it Parity RAID for the Symmetrix DMX platform. EMC now offers standard striped RAID 5
on the Symmetrix DMX as well.
Traditional RAID 5      EMC RAID S
A1 A2 A3 Ap             A1 B1 C1 1p
B1 B2 Bp B3             A2 B2 C2 2p
C1 Cp C2 C3             A3 B3 C3 3p
Dp D1 D2 D3             A4 B4 C4 4p
Note: A1, B1, et cetera each represent one data block; each column represents one disk.
A, B, et cetera are entire volumes.

Matrix RAID

Diagram of a Matrix RAID setup.

Matrix RAID is a feature that first appeared in the Intel ICH6R RAID BIOS. It is not a new RAID level. Matrix RAID utilizes two
physical disks: part of each disk is assigned to a level 0 array, and the other part to a level 1 array. Currently, most (if not all)
other inexpensive RAID BIOS products allow a disk to participate in only a single array. The product targets home users, providing
a safe area (the level 1 section) for documents and other items that one wishes to store redundantly, and a faster area (the level 0
section) for the operating system, applications, etc.

Linux MD RAID 10

The Linux kernel software RAID driver (called md, for "multiple devices") can be used to build a classic RAID 1+0 array, but it also
offers RAID 10 as a single level [1] (from version 2.6.9 onwards) with some interesting extensions.

It supports a near layout, where each chunk is repeated n times in a k-way stripe array. For example, an n2 layout on 3 drives and
on 4 drives would look like:

3 drives        4 drives
A1 A1 A2        A1 A1 A2 A2
A2 A3 A3        A3 A3 A4 A4
A4 A4 A5        A5 A5 A6 A6
A5 A6 A6        A7 A7 A8 A8
.. .. ..        .. .. .. ..

This is equivalent to the standard RAID 10 arrangement when n divides k evenly and 2 <= n < k (the four drive example).

The driver also supports a far layout, where all the drives are divided into f sections. All the chunks are repeated in each section
but offset by one device. For example, an f2 layout on 3 drives would look like:

A1 A2 A3
A4 A5 A6
A7 A8 A9
.. .. ..
A3 A1 A2
A6 A4 A5
A9 A7 A8
.. .. ..

The near and far options can both be used at the same time. The chunks in each section are offset by n devices. For example, an
n2 f2 layout on 4 drives:

A1 A1 A2 A2
A3 A3 A4 A4
A5 A5 A6 A6
A7 A7 A8 A8
.. .. .. ..
A2 A2 A1 A1
A4 A4 A3 A3
A6 A6 A5 A5
A8 A8 A7 A7
.. .. .. ..

As of Linux 2.6.18, the driver also supports an offset layout, where each stripe is repeated o times. For example, an o2 layout on 3
drives:

A1 A2 A3
A3 A1 A2
A4 A5 A6
A6 A4 A5
A7 A8 A9
A9 A7 A8
.. .. ..

Note: k is the number of drives, n#, f# and o# are parameters in the mdadm --layout option.
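
The placement rules behind these tables are simple enough to reproduce. The Python sketch below is illustrative only (it is not the
md driver's code, and the function names are invented); it regenerates the near, far and offset tables shown above, numbering the
chunks 1, 2, 3, ... in place of A1, A2, A3, ...:

# Generate chunk-to-drive placements for the Linux md RAID 10 layouts described above.
def near_layout(chunks, drives, n=2):
    # Near: each chunk is written n times to consecutive devices, filling rows left to right.
    seq = [c for c in range(1, chunks + 1) for _ in range(n)]
    return [seq[i:i + drives] for i in range(0, len(seq), drives)]

def far_layout(rows, drives, f=2):
    # Far: f sections; every section repeats the plain stripe, rotated by one more device each time.
    base = [[r * drives + d + 1 for d in range(drives)] for r in range(rows)]
    return [row[-s:] + row[:-s] for s in range(f) for row in base]

def offset_layout(rows, drives, o=2):
    # Offset: each stripe is immediately repeated o times, each copy shifted by one device.
    return [stripe[-s:] + stripe[:-s]
            for r in range(rows)
            for stripe in [[r * drives + d + 1 for d in range(drives)]]
            for s in range(o)]

for row in near_layout(6, 3):        # reproduces the n2 table for 3 drives
    print(" ".join(f"A{c}" for c in row))
for row in far_layout(3, 3):         # reproduces the f2 table for 3 drives
    print(" ".join(f"A{c}" for c in row))
for row in offset_layout(3, 3):      # reproduces the o2 table for 3 drives
    print(" ".join(f"A{c}" for c in row))

This only models the block placement; a real array of this kind is created with mdadm using --level=10 and the --layout option
(n2, f2 or o2), as noted above.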

Linux can also create other standard RAID configurations using the md driver (levels 0, 1, 4, 5 and 6) as well as non-RAID
arrangements such as multipath and LVM2. The md driver should not be confused with the dm (device-mapper) driver, which is
used for IDE/ATA chipset-based software RAID (i.e., fakeraid).

IBM ServeRAID 1E

Diagram of a RAID 1E setup.

The IBM ServeRAID adapter series supports 2-way mirroring on an arbitrary number of drives.

This configuration is tolerant of non-adjacent drives failing. Other storage systems including Sun's StorEdge T3 support this mode
as well.

RAID-Z

Sun's ZFS implements an integrated redundancy scheme similar to RAID 5 which it calls RAID-Z. RAID-Z avoids the RAID 5 "write
hole" [2] by its copy-on-write policy: rather than overwriting old data with new data, it writes new data to a new location and then
atomically overwrites the pointer to the old data. It avoids the need for read-modify-write operations for small writes by only ever
performing full-stripe writes; small blocks are mirrored instead of parity protected, which is possible because the file system is
aware of the underlying storage structure and can allocate extra space if necessary. There is also RAID-Z2 which uses two forms of
parity to achieve similar results as RAID 6: the ability to lose up to two drives without losing data [3].
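
As a toy illustration of the copy-on-write ordering described above (this is not ZFS code; the class and names are invented for the
example), new data is written to fresh space first and the live pointer is switched only afterwards, so a crash mid-write leaves the
old, consistent data visible:

# Toy illustration of copy-on-write: old data is never overwritten in place.
class CopyOnWriteStore:
    def __init__(self):
        self.blocks = {}        # block address -> data
        self.pointer = None     # "root" pointer to the currently live block
        self.next_addr = 0

    def write(self, data):
        # 1. Allocate a new location and write the new data there.
        addr = self.next_addr
        self.next_addr += 1
        self.blocks[addr] = data
        # 2. Only after the data is safely written, atomically switch the pointer.
        self.pointer = addr

    def read(self):
        return None if self.pointer is None else self.blocks[self.pointer]

store = CopyOnWriteStore()
store.write("old stripe")
store.write("new stripe")       # the old stripe still exists until the pointer moves
print(store.read())             # "new stripe"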
