
SUSE Linux Enterprise Server: software RAID1 plus GRUB drive replacement procedure

Approved by: Enea Buza
Horizon
Date: July 22, 2013
1. Check /proc/mdstat for errors:

rvb@boss2:~> cat /proc/mdstat

Personalities: [raid1] [raid0] [raid5] [raid4] [linear]

md1 : active raid1 sda2[2](F) sdb2[1]

479989952 blocks [2/1] [_U]

md0 : active raid1 sda1[2](F) sdb1[1]

8393856 blocks [2/1] [_U]

unused devices: <none>

The (F) flags imply /dev/sda has failed (confirmed by looking in the messages file).
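The `(F)` flag on a member and the `[_U]` pattern are what to look for. As a sketch, failed members can be picked out of the mdstat format above with awk; the here-string stands in for /proc/mdstat, which you would read directly on a live system:

```shell
# List failed RAID members by scanning mdstat output for the (F) flag.
# mdstat_sample stands in for /proc/mdstat here; on a live system run the
# same awk program against /proc/mdstat itself.
mdstat_sample='Personalities : [raid1]
md1 : active raid1 sda2[2](F) sdb2[1]
md0 : active raid1 sda1[2](F) sdb1[1]'

failed=$(printf '%s\n' "$mdstat_sample" | awk '/^md/ {
    for (i = 5; i <= NF; i++)     # member devices start at field 5
        if ($i ~ /\(F\)/)         # (F) marks a faulty member
            print $1, $i
}')
printf '%s\n' "$failed"
```

Here that prints one line per faulty member (array name, then member), confirming that both failed partitions sit on /dev/sda.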

2. Fail and remove /dev/sda2:

# mdadm --manage /dev/md1 --fail /dev/sda2

mdadm: set /dev/sda2 faulty in /dev/md1

# mdadm --manage /dev/md1 --remove /dev/sda2

mdadm: hot removed /dev/sda2

3. Repeat for /dev/sda1:

# mdadm --manage /dev/md0 --fail /dev/sda1

mdadm: set /dev/sda1 faulty in /dev/md0

# mdadm --manage /dev/md0 --remove /dev/sda1

mdadm: hot removed /dev/sda1

/proc/mdstat should now look like:

# cat /proc/mdstat

Personalities : [raid1]

md1 : active raid1 sdb2[1]

479989952 blocks [2/1] [_U]

md0 : active raid1 sdb1[1]

8393856 blocks [2/1] [_U]

unused devices: <none>
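The fail-and-remove sequence in steps 2 and 3 follows the same pattern for each array/partition pair, so it can be scripted. A hedged sketch, using a hypothetical helper that only prints the mdadm commands so they can be reviewed before running:

```shell
# Hypothetical helper: emit (rather than run) the mdadm fail/remove command
# pair for one failed member, so the sequence can be reviewed first.
emit_mdadm_removal() {
    md=$1; part=$2
    echo "mdadm --manage /dev/$md --fail /dev/$part"
    echo "mdadm --manage /dev/$md --remove /dev/$part"
}

# The array/partition pairs from this procedure:
emit_mdadm_removal md1 sda2
emit_mdadm_removal md0 sda1
```

Piping the output through `sh` (or pasting it) would perform the actual removal.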

4. Shut down the machine.
5. Replace /dev/sda and bring the machine back up.
6. Copy partitioning structure from /dev/sdb to /dev/sda:

# sfdisk -d /dev/sdb | sfdisk /dev/sda
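It is worth confirming the copy took: the sfdisk dumps of the two disks should now differ only in device names and disk identifiers. A sketch of that check, using illustrative sample dumps in place of real `sfdisk -d` output:

```shell
# Illustrative sample dumps standing in for `sfdisk -d /dev/sdb` and
# `sfdisk -d /dev/sda` (the partition sizes here are made up).
dump_sdb='/dev/sdb1 : start=2048, size=1027968, Id=fd
/dev/sdb2 : start=1030016, size=959979904, Id=fd'
dump_sda='/dev/sda1 : start=2048, size=1027968, Id=fd
/dev/sda2 : start=1030016, size=959979904, Id=fd'

# Normalise the device names, then compare: identical output means the
# partition layouts match.
norm_sdb=$(printf '%s\n' "$dump_sdb" | sed 's,/dev/sd[ab],/dev/sdX,')
norm_sda=$(printf '%s\n' "$dump_sda" | sed 's,/dev/sd[ab],/dev/sdX,')
[ "$norm_sdb" = "$norm_sda" ] && echo "partition tables match"
```

On the live system the same comparison can be done with `diff` on the two `sfdisk -d` outputs.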

7. Add the new partitions on /dev/sda to the RAID mirrors:

# mdadm --manage /dev/md0 --add /dev/sda1

# mdadm --manage /dev/md1 --add /dev/sda2

The partitions then started to mirror: /dev/md0 mirrored OK but /dev/md1 did not, so we embark
on the partition resizing…

8. Check with fdisk -l how much you need to reduce the size of the partitions by:

# fdisk -l /dev/sd?
9. We check that there is free space in the PV with pvdisplay:

# pvdisplay

--- Physical volume ---

PV Name /dev/md1

VG Name rootvg

PV Size

Allocatable yes

PE Size (KByte)

Total PE

Free PE

Allocated PE

PV UUID

If we want that in gigabytes, look at the free size reported by vgdisplay:

# vgdisplay rootvg | grep Size

  VG Size
  PE Size
  Alloc PE / Size
  Free PE / Size
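If only the PE counts are to hand, free space is simply Free PE times PE size. Since the values are elided in the output above, here is the arithmetic with illustrative numbers (a 4 MiB PE size and 5120 free extents):

```shell
# Free space in a VG = Free PE x PE size.
# Illustrative values; substitute the ones pvdisplay reports on your system.
pe_size_kib=4096      # "PE Size (KByte)" from pvdisplay
free_pe=5120          # "Free PE" from pvdisplay

free_mib=$(( free_pe * pe_size_kib / 1024 ))
free_gib=$(( free_mib / 1024 ))
echo "free: ${free_gib} GiB"
```

With these sample values that works out to 20 GiB of free space.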

10. There is free space, so we can start by resizing the LVM PV (/dev/md1 in this case):

# pvresize --setphysicalvolumesize ?G /dev/md1

Physical volume "/dev/md1" changed

1 physical volume(s) resized / 0 physical volume(s) not resized


11. We can check it worked with pvdisplay:

# pvdisplay /dev/md1 | grep Size

PV Size GB / not usable MB

12. The next step is to reduce the size of the underlying RAID1 mirror. First, check the current size:

# mdadm -D /dev/md1 | grep Size

Array Size :

Used Dev Size :

Despite the fact that we are shrinking the device, we still use the --grow switch:

# mdadm --grow /dev/md1 --size=

Check the change has worked:

# mdadm -D /dev/md1 | grep Size

Array Size :

Used Dev Size :
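Note that mdadm's --size argument is in kibibytes (the amount of space used from each component device), so the value has to be converted from the target size. For example, to shrink to a hypothetical target of 450 GiB:

```shell
# mdadm --grow --size takes kibibytes per component device.
# 450 GiB is an illustrative target, not a value from this procedure.
target_gib=450
size_kib=$(( target_gib * 1024 * 1024 ))
echo "mdadm --grow /dev/md1 --size=$size_kib"
```

Keep the target comfortably larger than the PV size set in step 10, or the shrink will eat into allocated extents.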

13. Now we can repartition /dev/sda2:

# fdisk /dev/sda

Syncing disks.

14. Reboot the machine.


15. Unfortunately, this does not make everything magically work and more work is required. The
root device is now the raw partition /dev/sdb2 rather than /dev/md1. Only /dev/md0 is still a RAID device:

# cat /proc/mdstat

Personalities : [raid1]

md0 : active raid1 sda1[0]

unused devices: <none>
16. We can re-mirror /boot (/dev/md0):

# mdadm --manage /dev/md0 --add /dev/sdb1

mdadm: added /dev/sdb1

After a few seconds, the small /boot partition is mirrored:

# cat /proc/mdstat

Personalities : [raid1]

md0 : active raid1 sdb1[1] sda1[0]

513984 blocks [2/2] [UU]

unused devices: <none>

17. Create a new RAID1 array with just /dev/sda2 in it:

# mdadm --create /dev/md1 --level=1 --raid-devices=2 /dev/sda2 missing

mdadm: array /dev/md1 started.


18. Make /dev/md1 an LVM PV and add it to the same volume group as /dev/sdb2 (rootvg):

# pvcreate /dev/md1

Physical volume "/dev/md1" successfully created

# vgextend rootvg /dev/md1

Volume group "rootvg" successfully extended

19. Move all LVM physical extents off /dev/sdb2:

# pvmove -v /dev/sdb2

<-- snip -->

20. Check that all PEs have been moved to /dev/md1:

# pvdisplay

  --- Physical volume ---
  PV Name               /dev/sdb2
  VG Name               rootvg
  PV Size
  Allocatable           yes
  PE Size (KByte)
  Total PE
  Free PE
  Allocated PE
  PV UUID

  --- Physical volume ---
  PV Name               /dev/md1
  VG Name               rootvg
  PV Size
  Allocatable
  PE Size (KByte)
  Total PE
  Free PE
  Allocated PE
  PV UUID

21. They have, so now we can remove and delete the PV /dev/sdb2:

# vgreduce rootvg /dev/sdb2

Removed "/dev/sdb2" from volume group "rootvg"

# pvremove /dev/sdb2

Labels on physical volume "/dev/sdb2" successfully wiped

22. Now we are free to add /dev/sdb2 to /dev/md1:

# mdadm --manage /dev/md1 --add /dev/sdb2

mdadm: added /dev/sdb2

# cat /proc/mdstat

Personalities : [raid1]

md0 : active raid1 sdb1[1] sda1[0]

[2/2] [UU]
md1 : active raid1 sdb2[2] sda2[0]

[>....................] recovery = 0.5% (1336064/241183744) finish=125.5min speed=31842K/sec

unused devices: <none>

23. When recovery has completed, install GRUB on both drives:

# grub

grub> root (hd0,0)

grub> setup (hd0)

grub> root (hd1,0)

grub> setup (hd1)
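grub resolves (hd0) and (hd1) through the BIOS drive order, recorded in /boot/grub/device.map. For this two-disk layout the mapping would typically look like the following (illustrative; check your own device.map before trusting the hd numbers):

```
(hd0)   /dev/sda
(hd1)   /dev/sdb
```

Installing GRUB into both MBRs means the machine can still boot from the surviving disk, whichever one fails next.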
