
CentOS 7 RAID 5: detailed explanation and configuration

1. What is RAID:
The full name of a disk array is "Redundant Arrays of Inexpensive Disks" (RAID), which means a fault-tolerant array of inexpensive disks.
RAID combines several smaller disks into one larger disk device through a technology (software or hardware); this larger device not only stores data but also protects it.
Depending on the RAID level chosen, the combined disk array provides different capabilities.

1. RAID-0 (striping mode, stripe): best performance


This mode works best when it is built from disks of the same model and capacity.
In this RAID mode, each disk is divided into equal-sized blocks (called chunks, generally configurable between 4K and 1M). When a file is written to the RAID, it is cut into chunk-sized pieces, which are then written to each disk in turn.
Because each disk stores the data in an interleaved way, when your data is written to the RAID, it is spread across the disks in equal amounts.
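For example (using the 256K chunk size configured later in this article): writing a 1 MB file to a four-disk RAID-0 with 256K chunks cuts the file into four 256K pieces, and each disk receives exactly one piece, so all four disks can work on the same file in parallel.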

2. RAID-1 (mirroring mode, mirror)


This mode also requires disks of the same capacity, preferably identical disks!
If disks of different capacities form a RAID-1, the total capacity is determined by the smallest disk! The purpose of this mode is to store exactly the same data on two disks.
For example, if I have a 100MB file and only two disks forming a RAID-1, both disks will write that 100MB to their storage space at the same time. The overall usable capacity is therefore roughly halved. Since the contents of the two drives are exactly the same, as if reflected in a mirror, this is also called mirror mode.
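For example, two 1 GB partitions combined as RAID-1 give only 1 GB of usable space, because every block is written to both members.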
3. RAID 1+0, RAID 0+1
RAID-0 has good performance but no data safety, while RAID-1 keeps data safe but performs worse, so can the two be combined?
RAID 1+0 means:
(1) Two disks form a RAID 1, and there are two such sets;
(2) The two RAID 1 sets are then combined into a RAID 0. This is RAID 1+0.
RAID 0+1 means:
(1) Two disks form a RAID 0, and there are two such sets;
(2) The two RAID 0 sets are then combined into a RAID 1. This is RAID 0+1.
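Incidentally, mdadm (introduced below) can build this nested level directly as RAID 10. A minimal sketch, assuming four spare 1 GB partitions /dev/sdc1 to /dev/sdc4 exist (hypothetical device names, not part of the setup used later):

[root@raid5 /]# mdadm --create /dev/md1 --auto=yes --level=10 --raid-devices=4 /dev/sdc1 /dev/sdc2 /dev/sdc3 /dev/sdc4 # build a RAID 10 from four members

As with RAID 1, the usable capacity is half of the raw capacity.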

4. RAID 5: a balance between performance and data backup (the focus of this article)

RAID-5 requires at least three disks to form this type of array.
Writing to this kind of array is somewhat similar to RAID-0, but in each write cycle (striping) a piece of parity data (parity) is also written to one of the disks. This parity records recovery information for the data on the other disks and is used for rescue when a disk fails.
How RAID 5 works:

In each write cycle, part of the parity code (parity) is recorded, and each time it is recorded on a different disk. Therefore, when any single disk fails, its data can be reconstructed using the parity on the other disks!
Note, however, that because of the parity data, the total capacity of a RAID 5 is one disk less than the total number of disks.
An array of 3 disks therefore provides only (3 - 1) = 2 disks' worth of capacity.
If two or more disks are damaged, the whole RAID 5 set is destroyed, because by default RAID 5 can only tolerate the failure of one disk. RAID 6 can tolerate two damaged disks.
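As a quick check against the array built later in this article: with 4 member partitions of 1 GB each, the usable RAID 5 capacity is (4 - 1) × 1 GB = 3 GB, which matches the "Array Size: 3.00 GiB" reported by mdadm --detail below; the spare disk contributes no capacity.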

Spare Disk (hot-spare function):

To allow the array to rebuild immediately when a disk in it fails, you need the help of a spare disk. A spare disk is one or more disks that are not included in the original disk array; the array does not normally use them. When any disk in the array fails, the spare disk is automatically pulled into the array, the broken disk is moved out, and the data is rebuilt immediately.
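With mdadm, a spare can also be attached to an array that is already running. A minimal sketch, assuming an array /dev/md0 already exists (the device built later in this article) and a free partition /dev/sdb6 (a hypothetical name):

[root@raid5 /]# mdadm --manage /dev/md0 --add /dev/sdb6 # on a healthy array with all members active, the new device joins as a spare

The rescue section below uses this same --add option after replacing a failed disk.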
The advantages of a disk array:
Data safety and reliability: this does not refer to network security, but to whether the data can be rescued or remain usable when hardware (that is, a disk) is damaged;
Read/write performance: RAID 0, for example, improves read and write performance, so the I/O side of your system gets better;
Capacity: several disks can be combined, so a single file system can have a very large capacity.

2. Software and Hardware RAID:

Why are disk arrays divided into hardware and software?
So-called hardware RAID achieves the array through a dedicated disk array card. The card has a special chip to handle the RAID work, so performance is better. For many tasks (such as calculating the RAID 5 parity), the array card does not consume the host system's I/O bus, so in theory performance is higher. In addition, today's mid- to high-end array cards support hot swapping, i.e. replacing a damaged disk without shutting down, which is very useful for system recovery and data availability.

A software disk array instead emulates the array in software, so it consumes system resources such as CPU time and I/O bus bandwidth. However, today's personal computers are fast enough that this old speed limitation no longer really matters.
The software RAID tool provided by CentOS is mdadm. It uses partitions or whole disks as the units of the array; in other words, you do not need more than two disks, as long as you have more than two partitions you can build your disk array.
In addition, mdadm supports the RAID 0 / RAID 1 / RAID 5 / spare disk setups we just mentioned! Its management interface can also do something close to hot swapping: a partition can be replaced online (while the file system remains in use)! It is very convenient to use.
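Before configuring anything, it is worth confirming that mdadm is installed and that kernel md support is available; a quick check on a stock CentOS 7 system (install the tool with yum install mdadm if the first command is missing):

[root@raid5 /]# mdadm --version # confirm the mdadm tool is present
[root@raid5 /]# cat /proc/mdstat # shows the loaded RAID personalities and any existing arrays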
3. Configuring the software disk array:
After all that rambling, let's actually configure the software disk array.
The rough plan:

- Use 4 partitions to build a RAID 5;
- Each partition is about 1 GB in size; it is best if all the partitions are the same size;
- Use 1 partition as the spare disk;
- Set the chunk size to 256K;
- The spare disk should be as large as the partitions used for the RAID;
- Mount this RAID 5 device on the /srv/raid directory.
Start configuration:
1. Partition

[root@raid5 /]# gdisk /dev/sdb # create the partitions with gdisk; fdisk can also be used


Command (? for help): n # add a new partition

Partition number (1-128, default 1): 1 # partition number 1


First sector (34-41943006, default = 2048) or {+-}size{KMGTP}:
Last sector (2048-41943006, default = 41943006) or {+-}size{KMGTP}: +1G # size is 1G
Current type is 'Linux filesystem'
Hex code or GUID (L to show codes, Enter = 8300): # partition type GUID (press Enter for the default)
Changed type of partition to 'Linux filesystem'

# Repeat the commands above to create the remaining four partitions (five in total)

Command (? for help): P # print the partition table to check the new partitions


....................... // Partially omitted

Number Start (sector) End (sector) Size Code Name


1 2048 2099199 1024.0 MiB 8300 Linux filesystem
2 2099200 4196351 1024.0 MiB 8300 Linux filesystem
3 4196352 6293503 1024.0 MiB 8300 Linux filesystem
4 6293504 8390655 1024.0 MiB 8300 Linux filesystem
5 8390656 10487807 1024.0 MiB 8300 Linux filesystem

# save and exit (w)

[root@raid5 /]# lsblk # list the block devices


NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
sda 8:0 0 100G 0 disk
├─sda1 8:1 0 1G 0 part /boot
└─sda2 8:2 0 99G 0 part
├─cl-root 253:0 0 50G 0 lvm /
├─cl-swap 253:1 0 2G 0 lvm [SWAP]
└─cl-home 253:2 0 47G 0 lvm /home

sdb 8:16 0 20G 0 disk # sdb now shows the partitions we just created


├─sdb1 8:17 0 1G 0 part
├─sdb2 8:18 0 1G 0 part
├─sdb3 8:19 0 1G 0 part
└─sdb4 8:20 0 1G 0 part

└─sdb5 8:21 0 1G 0 part # the fifth partition is the spare disk


sr0 11:0 1 1024M 0 rom
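If lsblk does not show the new partitions right away, the kernel may still be using the old partition table; asking it to re-read the table usually helps (partprobe is provided by the parted package on CentOS 7):

[root@raid5 /]# partprobe /dev/sdb # re-read sdb's partition table without rebooting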

2. Create

[root@raid5 /]# mdadm --create /dev/md0 --auto=yes --level=5 --chunk=256K --raid-


devices=4 --spare-devices=1 /dev/sdb1 /dev/sdb2 /dev/sdb3 /dev/sdb4 /dev/sdb5
mdadm: Defaulting to version 1.2 metadata
mdadm: array /dev/md0 started.
--create: create a new RAID
--auto=yes: automatically create the software RAID device that follows, i.e. md[0-9]
--chunk=256K: the chunk size of this device, which can also be regarded as the stripe size; commonly 64K or 512K
--raid-devices=4: the number of disks or partitions used as active members of the array
--spare-devices=1: the number of disks or partitions used as spare devices
--level=5: the RAID level of this array; only 0, 1 and 5 are recommended
--detail: show detailed information about the RAID device given after it
[root@raid5 /]# mdadm --detail /dev/md0

/dev/md0: # RAID device file name

Version: 1.2
Creation Time: Thu Nov 7 20:26:03 2019 # Creation time
Raid Level: raid5 # RAID level
Array Size: 3142656 (3.00 GiB 3.22 GB) # Available capacity of the entire RAID set
Used Dev Size: 1047552 (1023.00 MiB 1072.69 MB) # The capacity of each disk
Raid Devices: 4 # Number of disks forming RAID
Total Devices: 5 # Total disks including spare
Persistence: Superblock is persistent
Update Time: Thu Nov 7 20:26:08 2019
State: clean # Current usage status of this disk array
Active Devices: 4 # Number of activated devices
Working Devices: 5 # The number of devices currently used in this array
Failed Devices: 0 # Number of damaged devices
Spare Devices: 1 # Number of reserved disks

Layout: left-symmetric
Chunk Size: 256K # the chunk size of this array

Name: raid5:0 (local to host raid5)


UUID: facfa60d:c92b4ced:3f519b65:d135fd98
Events: 18

Number Major Minor RaidDevice State


0 8 17 0 active sync /dev/sdb1
1 8 18 1 active sync /dev/sdb2
2 8 19 2 active sync /dev/sdb3
5 8 20 3 active sync /dev/sdb4

4 8 21 - spare /dev/sdb5 # sdb5 is a spare device waiting in reserve


# The last five lines show the current status of the five devices. RaidDevice is the device's position within the RAID
[root@raid5 /]# cat /proc/mdstat
Personalities: [raid6] [raid5] [raid4]
md0: active raid5 sdb4[5] sdb5[4](S) sdb3[2] sdb2[1] sdb1[0] # first line
3142656 blocks super 1.2 level 5, 256k chunk, algorithm 2 [4/4] [UUUU] # second line

unused devices: <none>

The first line: md0 is a RAID 5 that uses the four devices sdb1, sdb2, sdb3 and sdb4. The number in square brackets [] after each device is that disk's order within the RAID (the RaidDevice value); the (S) after sdb5 indicates that sdb5 is the spare.
The second line: this array has 3142656 blocks (each block is 1K), so the total capacity is about 3 GB. It uses RAID level 5, writes to the disks in 256K chunks, and uses array algorithm 2. [m/n] means the array needs m devices and n of them are currently working normally, so this md0 needs 4 devices and all 4 are working. The [UUUU] that follows shows the state of the m required devices: U means the device is working normally, _ means it is faulty.
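Right after creation, mdadm builds the parity in the background, and /proc/mdstat shows a progress bar while that initial sync runs. A convenient way to follow it until the array reaches the clean state:

[root@raid5 /]# watch -n 1 cat /proc/mdstat # refresh the status every second; press Ctrl+C to quit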

3. Format and mount for use

[root@raid5 /]# mkfs.xfs -f -d su=256k,sw=3 -r extsize=768k /dev/md0 # note that the device being formatted is md0, not the individual partitions
meta-data=/dev/md0 isize=512 agcount=8, agsize=98176 blks
= sectsz=512 attr=2, projid32bit=1
= crc=1 finobt=0, sparse=0
data = bsize=4096 blocks=785408, imaxpct=25
= sunit=128 swidth=384 blks
naming =version 2 bsize=4096 ascii-ci=0 ftype=1
log =internal log bsize=4096 blocks=2560, version=2
= sectsz=512 sunit=8 blks, lazy-count=1
realtime =none extsz=1572864 blocks=0, rtextents=0
[root@raid5 /]# mkdir /srv/raid
[root@raid5 /]# mount /dev/md0 /srv/raid/
[root@raid5 /]# df -TH /srv/raid/ # we can see the mount succeeded
Filesystem Type Size Used Avail Use% Mounted on
/dev/md0 xfs 3.3G 34M 3.2G 2% /srv/raid
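A note on the mkfs.xfs options used above (the reasoning follows standard XFS stripe alignment; the numbers are derived from this particular array): su is the stripe unit and should match the chunk size, 256k; sw is the stripe width in data disks, which for a four-disk RAID 5 is 4 - 1 = 3; and the realtime extent size is su × sw = 256k × 3 = 768k. If you build the array with a different chunk size or number of disks, recalculate these values accordingly.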

4. Simulating RAID errors and rescuing the array


As the saying goes, "storms gather without warning, and fortune or misfortune can strike overnight". No one knows when a device in your disk array will fail, so it is necessary to understand how to rescue a software disk array! Below we simulate a RAID failure and rescue the array.

[root@raid5 /]# cp -a /var/log/ /srv/raid/ # Copy some data to the mount point first
[root@raid5 /]# df -TH /srv/raid/; du -sm /srv/raid/* # See that there is already data in it
Filesystem Type Size Used Avail Use% Mounted on
/dev/md0 xfs 3.3G 39M 3.2G 2% /srv/raid
5 /srv/raid/log
[root@raid5 /]# mdadm --manage /dev/md0 --fail /dev/sdb3
mdadm: set /dev/sdb3 faulty in /dev/md0 # shows that it has become a faulty device
........................ // Partially omitted
Update Time: Thu Nov 7 20:55:31 2019
State: clean
Active Devices: 4
Working Devices: 4
Failed Devices: 1 # one disk has failed
Spare Devices: 0 # the spare count is now 0 because the spare has already taken over the failed disk's job; this output was captured slightly late, otherwise it would still show 1
............................ // Partially omitted
Number Major Minor RaidDevice State
0 8 17 0 active sync /dev/sdb1
1 8 18 1 active sync /dev/sdb2
4 8 21 2 active sync /dev/sdb5 # here we can see that sdb5 has taken over the failed disk's work
5 8 20 3 active sync /dev/sdb4

2 8 19 - faulty /dev/sdb3 # sdb3 has failed


Then you can unplug the bad disk and replace it with a new one

[root@raid5 /]# mdadm --manage /dev/md0 --remove /dev/sdb3 # Simulate unplugging the
old disk
mdadm: hot removed /dev/sdb3 from /dev/md0
[root@raid5 /]# mdadm --manage /dev/md0 --add /dev/sdb3 # insert a new disk
mdadm: added /dev/sdb3

[root@raid5 /]# mdadm --detail /dev/md0 # View


........................... // Partially omitted
Number Major Minor RaidDevice State
0 8 17 0 active sync /dev/sdb1
1 8 18 1 active sync /dev/sdb2
4 8 21 2 active sync /dev/sdb5
5 8 20 3 active sync /dev/sdb4

6 8 19 - spare /dev/sdb3 # sdb3 is now waiting in reserve as a spare disk
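As a side note, the two --manage steps above can be combined: mdadm accepts several operations in one command, so a disk can be marked faulty and hot-removed in a single line (a sketch using the same device names as above):

[root@raid5 /]# mdadm --manage /dev/md0 --fail /dev/sdb3 --remove /dev/sdb3 # mark faulty and hot-remove in one go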
5. Configure the RAID to start automatically at boot and mount automatically

[root@raid5 /]# mdadm --detail /dev/md0 | grep -i uuid


UUID: facfa60d:c92b4ced:3f519b65:d135fd98
[root@raid5 /]# vim /etc/mdadm.conf
ARRAY /dev/md0 UUID=facfa60d:c92b4ced:3f519b65:d135fd98
# the ARRAY line records the RAID device and its UUID
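Instead of typing the ARRAY line by hand, the entry can also be generated automatically; check the appended line before rebooting:

[root@raid5 /]# mdadm --detail --scan >> /etc/mdadm.conf # append ARRAY definitions for all active arrays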
[root@raid5 /]# blkid /dev/md0
/dev/md0: UUID="bc2a589c-7df0-453c-b971-1c2c74c39075" TYPE="xfs"
[root@raid5 /]# vim /etc/fstab # Set automatic mounting at boot
............................ // Omit part of the content
/dev/md0 /srv/raid xfs defaults 0 0
# The first field can also be written as the file system UUID (from blkid) instead of /dev/md0
[root@raid5 /]# df -Th /srv/raid/ # verify the mount; you can now reboot to test
Filesystem Type Size Used Avail Use% Mounted on
/dev/md0 xfs 3.0G 37M 3.0G 2% /srv/raid

https://blog.csdn.net/weixin_45116475/article/details/102971849
