1. What is RAID:
The full name of a disk array is "Redundant Arrays of Inexpensive Disks" (RAID): a fault-tolerant array built from inexpensive disks.
Through software or hardware, RAID combines several smaller disks into one larger disk device; this larger device does not just store data, it can also protect it.
Depending on the RAID level chosen, the assembled array provides different capabilities.
RAID-5 requires at least three disks to form this type of disk array.
Writes to this kind of array are striped much like RAID-0, but each write cycle also stores one parity block (Parity) on one of the disks; the parity records redundancy information for the other disks and is used to rebuild the data when a disk is damaged.
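The parity mentioned here is, at heart, an XOR across the data blocks of a stripe: XOR-ing the surviving blocks with the parity block reproduces the lost one. A minimal sketch (the byte values are made-up examples):

```shell
# Two data blocks that share one stripe (example byte values)
d1=$((0x5A)); d2=$((0x3C))
parity=$((d1 ^ d2))      # the parity block written in the same stripe

# If the disk holding d2 fails, XOR the survivors to rebuild it:
rebuilt=$((d1 ^ parity))
echo "$rebuilt"          # same value as d2
```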
How RAID-5 works:
On each write cycle a portion of the parity is recorded, and the parity for each stripe lands on a different disk. As a result, when any single disk fails, its contents can be reconstructed from the data and parity on the remaining disks.
Note, however, that because of the parity, the usable capacity of RAID 5 is one disk less than the total number of disks: three disks yield only (3-1) = 2 disks of usable capacity.
If two or more disks fail at the same time, the entire RAID 5 set is destroyed, because RAID 5 by default tolerates the failure of only one disk.
RAID 6 stores two parity blocks per stripe, so it can survive the failure of two disks.
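The capacity rules above reduce to simple arithmetic (the disk count and size below are example values):

```shell
disks=3; per_disk_gb=1
raid5_usable_gb=$(( (disks - 1) * per_disk_gb ))  # one disk's worth holds parity
echo "$raid5_usable_gb"   # 2, matching the (3-1) = 2 example above
# For RAID 6 the rule is (disks - 2), since two disks' worth holds parity
```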
A software disk array emulates the array's work in software, so it consumes system resources such as CPU cycles and I/O bus bandwidth. Today's personal computers are fast enough, however, that this is no longer the bottleneck it once was.
The software disk array provided by CentOS is mdadm. It treats either a partition or a whole disk as an array member; in other words, you do not need two or more physical disks, as two or more partitions are enough to build your own disk array.
In addition, mdadm supports RAID0/RAID1/RAID5, spare disks, and the other features just mentioned. Its management interface also offers something close to hot swapping: a member partition can be replaced online, while the file system remains in normal use. It is very convenient to work with.
Third, the configuration of the software disk array:
That is enough background; let's configure the software disk array.
Approximate steps:
# Create four partitions following the commands above
# Save and exit
2. Create the RAID:
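The original omits the create command itself; judging from the detail output below (4 RAID devices, 1 spare, 256K chunk, RAID-5), it was presumably something along these lines. The device names are assumptions, and the sketch only builds and prints the command string so nothing destructive runs here:

```shell
# Hypothetical create step (on a real system, run the printed command as root)
create_cmd="mdadm --create /dev/md0 --auto=yes --level=5 --chunk=256K \
--raid-devices=4 --spare-devices=1 /dev/sdb1 /dev/sdb2 /dev/sdb3 /dev/sdb4 /dev/sdb5"
echo "$create_cmd"
# Then inspect the result with: mdadm --detail /dev/md0
```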
Version: 1.2
Creation Time: Thu Nov 7 20:26:03 2019 # Creation time
Raid Level: raid5 # RAID level
Array Size: 3142656 (3.00 GiB 3.22 GB) # Available capacity of the entire RAID set
Used Dev Size: 1047552 (1023.00 MiB 1072.69 MB) # The capacity of each disk
Raid Devices: 4 # Number of disks forming RAID
Total Devices: 5 # Total disks including spare
Persistence: Superblock is persistent
Update Time: Thu Nov 7 20:26:08 2019
State: clean # Current usage status of this disk array
Active Devices: 4 # Number of activated devices
Working Devices: 5 # The number of devices currently used in this array
Failed Devices: 0 # Number of damaged devices
Spare Devices: 1 # Number of reserved disks
Layout: left-symmetric
Chunk Size: 256K # Stripe chunk size
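The sizes in this output are internally consistent: for RAID-5, Array Size is (Raid Devices - 1) × Used Dev Size, because one device's worth of space is consumed by parity:

```shell
raid_devices=4
used_dev_size_k=1047552   # "Used Dev Size" above, in 1K blocks
array_size_k=$(( (raid_devices - 1) * used_dev_size_k ))
echo "$array_size_k"      # 3142656, matching "Array Size" above
```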
The first line shows that md0 is a RAID-5 using four disk devices: sdb1, sdb2, sdb3, and sdb4. The number in square brackets [] after each device is that disk's position in the RAID (RaidDevice); the [S] after sdb5 means sdb5 is a spare.
The second line shows that the array holds 3142656 blocks (each block is 1K), so the total capacity is about 3 GB; it runs at RAID level 5; the chunk written to each disk is 256K; and it uses disk-array algorithm 2. [m/n] means the array requires m devices and n of them are currently operating normally, so this md0 requires 4 devices and all 4 are running. The [UUUU] that follows shows the state of the m required devices: U means the device is operating normally, _ means it is faulty.
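The two lines being explained come from cat /proc/mdstat, which the original does not reproduce. Based on the description (device order and exact formatting are assumptions), the excerpt would look roughly like:

```shell
# Illustrative /proc/mdstat excerpt matching the description above
mdstat='md0 : active raid5 sdb4[3] sdb5[4](S) sdb3[2] sdb2[1] sdb1[0]
      3142656 blocks super 1.2 level 5, 256k chunk, algorithm 2 [4/4] [UUUU]'
printf '%s\n' "$mdstat"
```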
[root@raid5 /]# mkfs.xfs -f -d su=256k,sw=3 -r extsize=768k /dev/md0 # Note: what we format is md0, not the member partitions
meta-data=/dev/md0 isize=512 agcount=8, agsize=98176 blks
= sectsz=512 attr=2, projid32bit=1
= crc=1 finobt=0, sparse=0
data = bsize=4096 blocks=785408, imaxpct=25
= sunit=128 swidth=384 blks
naming =version 2 bsize=4096 ascii-ci=0 ftype=1
log =internal log bsize=4096 blocks=2560, version=2
= sectsz=512 sunit=8 blks, lazy-count=1
realtime =none extsz=1572864 blocks=0, rtextents=0
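The su/sw choices above follow the array geometry: su matches the 256K chunk, and sw=3 is the number of data disks (4 RAID devices minus 1 parity). The swidth that mkfs.xfs reports is then sunit × sw, and the -r extsize of 768k is su × sw:

```shell
sunit=128                 # as reported by mkfs.xfs above
sw=3                      # data disks: 4 RAID devices minus 1 parity
swidth=$((sunit * sw))
echo "$swidth"            # 384, matching "swidth=384 blks" above

su_kb=256                 # su=256k, the RAID chunk size
extsize_kb=$((su_kb * sw))
echo "$extsize_kb"        # 768, matching "-r extsize=768k"
```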
[root@raid5 /]# mkdir /srv/raid
[root@raid5 /]# mount /dev/md0 /srv/raid/
[root@raid5 /]# df -TH /srv/raid/ # Seeing that we have mounted successfully
Filesystem Type Size Used Avail Use% Mounted on
/dev/md0 xfs 3.3G 34M 3.2G 2% /srv/raid
[root@raid5 /]# cp -a /var/log/ /srv/raid/ # Copy some data to the mount point first
[root@raid5 /]# df -TH /srv/raid/; du -sm /srv/raid/* # See that there is already data in it
Filesystem Type Size Used Avail Use% Mounted on
/dev/md0 xfs 3.3G 39M 3.2G 2% /srv/raid
5 /srv/raid/log
[root@raid5 /]# mdadm --manage /dev/md0 --fail /dev/sdb3
mdadm: set /dev/sdb3 faulty in /dev/md0 # shows that it has become a faulty device
........................ // Partially omitted
Update Time: Thu Nov 7 20:55:31 2019
State: clean
Active Devices: 4
Working Devices: 4
Failed Devices: 1 # A disk error occurred
Spare Devices: 0 # The spare count is now 0, meaning the spare has already taken over the failed disk's job (this output was captured a bit late; caught sooner, it would still show 1)
............................ // Omit part of the content
The device list at the end of the output now shows:
Number Major Minor RaidDevice State
0 8 17 0 active sync /dev/sdb1
1 8 18 1 active sync /dev/sdb2
4 8 21 2 active sync /dev/sdb5 # Here you can see that sdb5 has replaced the work
5 8 20 3 active sync /dev/sdb4
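Right after a disk is failed, the array rebuilds onto the spare in the background, and /proc/mdstat shows a recovery progress line during that window. An illustrative excerpt (the percentage, speed, and device numbering are made-up values):

```shell
recovery='md0 : active raid5 sdb4[5] sdb5[4] sdb3[2](F) sdb2[1] sdb1[0]
      3142656 blocks super 1.2 level 5, 256k chunk, algorithm 2 [4/3] [UU_U]
      [=======>.............]  recovery = 38.2% (401920/1047552) finish=0.5min speed=20096K/sec'
printf '%s\n' "$recovery"
```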
[root@raid5 /]# mdadm --manage /dev/md0 --remove /dev/sdb3 # Simulate unplugging the
old disk
mdadm: hot removed /dev/sdb3 from /dev/md0
[root@raid5 /]# mdadm --manage /dev/md0 --add /dev/sdb3 # insert a new disk
mdadm: added /dev/sdb3
6 8 19 - spare /dev/sdb3 # We can see that sdb3 is now standing by as a spare disk
5. Configure the RAID to assemble automatically at boot and mount automatically
https://blog.csdn.net/weixin_45116475/article/details/102971849
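The linked post covers step 5 in detail; in outline, the array is recorded in /etc/mdadm.conf so it is assembled at boot, and an /etc/fstab entry mounts it. The fragment below is a sketch that only prints the steps; the UUID placeholder must be replaced with the value your own system reports:

```shell
# Sketch only; run the printed steps as root on a real system.
assemble='mdadm --detail --scan >> /etc/mdadm.conf'   # record the array for boot-time assembly
fstab_line='UUID=<your-md0-uuid>  /srv/raid  xfs  defaults  0 0'  # take the UUID from blkid /dev/md0
printf '%s\n%s\n' "$assemble" "$fstab_line"
```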