
On my 240 GB SSD I originally had two partitions: one containing the logical volume with Linux
Mint, and the other an NTFS partition shared with Windows. I have now removed the NTFS
partition and want to extend my logical volume group to use the freed disk space.

How do I extend the volume group, the logical volume containing /home, and the ext4 filesystem
on /home? Is it possible to do this online?

/dev/sdb (240 GB)
  linuxvg (160 GB): should use 100% of the disk space
    swap
    root
    home (ext4, 128 GB): should be extended to use the remaining space

output of sudo vgdisplay:

--- Volume group ---
VG Name linuxvg
System ID
Format lvm2
Metadata Areas 1
Metadata Sequence No 4
VG Access read/write
VG Status resizable
MAX LV 0
Cur LV 3
Open LV 3
Max PV 0
Cur PV 1
Act PV 1
VG Size 160,00 GiB
PE Size 4,00 MiB
Total PE 40959
Alloc PE / Size 40959 / 160,00 GiB
Free PE / Size 0/0
VG UUID ...

--- Logical volume ---
LV Path /dev/linuxvg/swap
LV Name swap
VG Name linuxvg
LV UUID ...
LV Write Access read/write
LV Creation host, time mint, 2013-08-06 22:48:32 +0200
LV Status available
LV Size 8,00 GiB
Current LE 2048
Segments 1
Allocation inherit
Block device 252:0

--- Logical volume ---
LV Path /dev/linuxvg/root
LV Name root
VG Name linuxvg
LV UUID ...
LV Write Access read/write
LV Creation host, time mint, 2013-08-06 22:48:43 +0200
LV Status available
LV Size 24,00 GiB
Current LE 6144
Segments 1
Allocation inherit
Block device 252:1

--- Logical volume ---
LV Path /dev/linuxvg/home
LV Name home
VG Name linuxvg
LV UUID ...
LV Write Access read/write
LV Creation host, time mint, 2013-08-06 22:48:57 +0200
LV Status available
LV Size 128,00 GiB
Current LE 32767
Segments 1
Allocation inherit
Block device 252:2

--- Physical volumes ---
PV Name /dev/sdb1
PV UUID ...
PV Status allocatable
Total PE / Free PE 40959 / 0

output of sudo fdisk -l:

Disk /dev/sdb: 240.1 GB, 240057409536 bytes
255 heads, 63 sectors/track, 29185 cylinders, total 468862128 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x00000000
Device Boot Start End Blocks Id System
/dev/sdb1 1 468862127 234431063+ ee GPT

Disk /dev/mapper/linuxvg-swap: 8589 MB, 8589934592 bytes
255 heads, 63 sectors/track, 1044 cylinders, total 16777216 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x00000000

Disk /dev/mapper/linuxvg-root: 25.8 GB, 25769803776 bytes
255 heads, 63 sectors/track, 3133 cylinders, total 50331648 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x00000000

Disk /dev/mapper/linuxvg-home: 137.4 GB, 137434759168 bytes
255 heads, 63 sectors/track, 16708 cylinders, total 268427264 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x00000000

Well, the easy way would've been to just pvcreate the NTFS partition and use vgextend, instead of
removing the partition entirely. If you grow the existing PV partition instead, you will probably have
to reboot, as Linux refuses to re-read the partition table while the disk is in use. Working around
this online is awkward.

The question was solved after reading this blog post. I will write the solution in short form:
1) boot from a live CD
2) use gdisk if you use GPT; otherwise you can go with good old fdisk
3) note your partition settings, in my case gdisk -l /dev/sdb
4) delete your partition with gdisk
5) create a new partition with the exact same alignment as the previous one (in my example
starting at block 2048)
6) write your new partition table
7) run partprobe -s to refresh the partition table without a reboot
8) resize your PV with pvresize /dev/sdb1 or wherever your PV is (use pvs to find out if you
don't know)
9) now resize your logical volume with lvextend -l +100%FREE /dev/file/of/your/lv, in my case
sudo lvextend -l +100%FREE /dev/linuxvg/home
10) check the filesystem consistency first with sudo e2fsck -f /dev/linuxvg/home
11) then resize the filesystem with sudo resize2fs /dev/linuxvg/home
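
Put together, and assuming the same layout as in the question (GPT on /dev/sdb, the PV on
/dev/sdb1, the home LV in linuxvg), the whole procedure from the live CD looks roughly like this;
treat it as a sketch, not a paste-ready script:

sudo gdisk -l /dev/sdb            # note the start sector of sdb1 (2048 here)
sudo gdisk /dev/sdb               # d: delete sdb1; n: recreate it at the same start, maximal end; w: write
sudo partprobe -s                 # re-read the partition table without rebooting
sudo pvresize /dev/sdb1           # grow the PV into the enlarged partition
sudo lvextend -l +100%FREE /dev/linuxvg/home
sudo e2fsck -f /dev/linuxvg/home  # consistency check before the offline resize
sudo resize2fs /dev/linuxvg/home  # grow the ext4 filesystem to fill the LV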

You can do this entire process while running on the filesystem you want to resize (yes, it's safe and
fully supported). There is no need for rescue CDs or alternate operating systems.
1) Resize the partition (again, you can do this with the system running). GParted is easy to use
and supports resizing.
You can also use a lower-level tool such as fdisk, but then you'll have to delete the partition and
recreate it. Just make sure the new partition starts at the exact same location.
2) Reboot. Since the partition table was modified on the running system, it won't take effect until
a reboot.
3) Run pvresize /dev/sdXY to have LVM pick up the new space.
4) Resize the logical volume with lvextend. If you want to use the whole thing:
lvextend -r -l +100%FREE /dev/VGNAME/LVNAME. The -r flag resizes the filesystem as well.
Though I always recommend against using the entire volume group: you never know what you'll
need in the future. You can always expand later; you can't shrink.
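
Applied to the names from the question (a sketch; substitute your own partition and LV paths):

sudo pvresize /dev/sdb1                          # after the reboot, let LVM see the bigger partition
sudo lvextend -r -l +100%FREE /dev/linuxvg/home  # -r grows the ext4 filesystem in the same step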

Let's get the facts:


OP has two partitions, sdb1 and sdb2; sdb2 is a physical volume for LVM.
sdb1 is NTFS right now; we need to give that space to the home logical volume inside the linuxvg VG.

LVM steps using the "pragmatic way":


create a physical volume on sdb1: pvcreate /dev/sdb1
add sdb1 to linuxvg: vgextend linuxvg /dev/sdb1
extend logical volume home with all free space: lvextend -l +100%FREE /dev/linuxvg/home
extend the ext4 filesystem: resize2fs /dev/linuxvg/home

LVM allows a great level of indirection. A logical volume lives inside a volume group, which can
span several disks:
home --> linuxvg --> (sdb1, sdb2, sdc1)
http://tldp.org/HOWTO/LVM-HOWTO/createvgs.html
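
Whichever route you take, you can verify each layer afterwards; a quick check, assuming the
names used above:

sudo pvs          # physical volumes and their free space
sudo vgs linuxvg  # volume group size and VFree
sudo lvs linuxvg  # logical volume sizes
df -h /home       # filesystem size as seen by the OS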

If you're using XFS, use the command xfs_growfs -d /mountpoint rather than resize2fs. You can
do that while the filesystem is mounted and active, for example if you've grown the root partition,
and you don't need to reboot afterwards.
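
For example, with an XFS /home on the same LV names as in the question (hypothetical; the
question's /home is actually ext4):

sudo lvextend -l +100%FREE /dev/linuxvg/home
sudo xfs_growfs -d /home    # grow the data section while the filesystem stays mounted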

*** This is a step-by-step procedure to discover the LUNs in Linux which are allocated from the
storage end, and then create a volume group with those LUNs using Linux LVM2 ***

Discovering the LUNs using HDLM in Linux:


* * Once you get confirmation from storage that LUNs are allocated to the server, follow this
procedure. Storage will provide you with the LUN IDs allocated to the server.
1. echo "- - -" > /sys/class/scsi_host/hostN/scan, once for each of host0 through host15 (a
wildcard does not work inside a shell redirect; see the loop sketch below).
2. dlmcfgmgr -r - re-configure / rescan the HDLM.
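
A minimal way to script step 1 across all SCSI hosts (a sketch, assuming bash and root):

for host in /sys/class/scsi_host/host*; do
    echo "- - -" > "$host/scan"    # rescan all channels/targets/LUNs on this HBA
done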

* * After executing the 2nd step, you will receive a message from HDLM stating that there is a
change in configuration, which indicates that the LUNs have been identified on the server.

# /opt/DynamicLinkManager/bin/dlnkmgr view -path - will show the newly discovered LUNs.

Compare them with the LUN IDs the storage team provided earlier.

* * Then take the device files of those new LUNs, e.g. /dev/sddlmaX.
* * Here the LUNs I've got are as follows. Five LUNs in total, each 100 GB.
/dev/sddlmag
/dev/sddlmah
/dev/sddlmai
/dev/sddlmaj
/dev/sddlmak

Creating a volume group using Linux LVM2:


* * First create a PV (Physical Volume) on each device as follows.
# lvm pvcreate /dev/sddlmag
Physical volume "/dev/sddlmag" successfully created

# lvm pvcreate /dev/sddlmah
Physical volume "/dev/sddlmah" successfully created

# lvm pvcreate /dev/sddlmai
Physical volume "/dev/sddlmai" successfully created

# lvm pvcreate /dev/sddlmaj
Physical volume "/dev/sddlmaj" successfully created

# lvm pvcreate /dev/sddlmak
Physical volume "/dev/sddlmak" successfully created
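
The five calls can also be scripted in one line (a sketch, assuming bash and the device names
above):

for d in /dev/sddlma{g..k}; do lvm pvcreate "$d"; done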

# lvm vgcreate data2vg /dev/sddlmag - creates the VG named "data2vg" with disk /dev/sddlmag
in it; then add the remaining four LUNs to VG "data2vg" with the following commands.
# vgextend data2vg /dev/sddlmah
# vgextend data2vg /dev/sddlmai
# vgextend data2vg /dev/sddlmaj
# vgextend data2vg /dev/sddlmak
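
Equivalently, the VG could have been created with all five PVs in a single call:

# vgcreate data2vg /dev/sddlmag /dev/sddlmah /dev/sddlmai /dev/sddlmaj /dev/sddlmak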
# pvs - shows physical volume info.
PV VG Fmt Attr PSize PFree
/dev/sddlmaa datavg lvm2 a- 100.00G 0
/dev/sddlmab datavg lvm2 a- 100.00G 0
/dev/sddlmac datavg lvm2 a- 100.00G 0
/dev/sddlmad datavg lvm2 a- 100.00G 0
/dev/sddlmae datavg lvm2 a- 100.00G 0
/dev/sddlmaf prodvg lvm2 a- 50.00G 0
/dev/sddlmag data2vg lvm2 a- 100.00G 100.00G
/dev/sddlmah data2vg lvm2 a- 100.00G 100.00G
/dev/sddlmai data2vg lvm2 a- 100.00G 100.00G
/dev/sddlmaj data2vg lvm2 a- 100.00G 100.00G
/dev/sddlmak data2vg lvm2 a- 100.00G 100.00G

* * "data2vg" has 5 PVs, each 100 GB, as said earlier. #lvs does not list it yet, because we have
not created a logical volume in this VG. Hence it looks as follows.
# lvs
LV VG Attr LSize Origin Snap% Move Log Copy% Convert
datalv datavg -wi-ao 499.98G
prodlv prodvg -wi-ao 50.00G

* * #vgs already shows it, as the volume group has been created; it looks fine in the output below.
# vgs
VG #PV #LV #SN Attr VSize VFree
data2vg 5 0 0 wz–n- 499.98G 499.98G
datavg 5 1 0 wz–n- 499.98G 0
prodvg 1 1 0 wz–n- 50.00G 0

* * To create the LV, we need to know the number of PEs (Physical Extents) in the volume group
we created. We can get that number with #vgdisplay data2vg.
# vgdisplay data2vg
--- Volume group ---
VG Name data2vg
System ID
Format lvm2
Metadata Areas 5
Metadata Sequence No 5
VG Access read/write
VG Status resizable
MAX LV 0
Cur LV 0
Open LV 0
Max PV 0
Cur PV 5
Act PV 5
VG Size 499.98 GB
PE Size 4.00 MB
Total PE 127995
Alloc PE / Size 0/0
Free PE / Size 127995 / 499.98 GB
VG UUID 9tBXcp-rLL0-ZWLD-8eMi-70N0-RTuU-Z75XgD

* * Now create the LV by executing the following command:

# lvm lvcreate -l 127995 -n data2lv data2vg
Logical volume "data2lv" created
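
If you don't want to look up the PE count first, the same allocation can be expressed as an extent
percentage (a sketch; same VG and LV names as above):

# lvm lvcreate -l 100%FREE -n data2lv data2vg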

* * Create the file system on the newly created LV.


# mkfs -t ext3 /dev/data2vg/data2lv
mke2fs 1.39 (29-May-2006)
Filesystem label=
OS type: Linux
Block size=4096 (log=2)
Fragment size=4096 (log=2)
65536000 inodes, 131066880 blocks
6553344 blocks (5.00%) reserved for the super user
First data block=0
Maximum filesystem blocks=4294967296
4000 block groups
32768 blocks per group, 32768 fragments per group
16384 inodes per group
Superblock backups stored on blocks:
32768, 98304, 163840, 229376, 294912, 819200, 884736, 1605632, 2654208,
4096000, 7962624, 11239424, 20480000, 23887872, 71663616, 78675968,
102400000
Writing inode tables: done
Creating journal (32768 blocks): done
Writing superblocks and filesystem accounting information: done
This filesystem will be automatically checked every 38 mounts or
180 days, whichever comes first. Use tune2fs -c or -i to override.

* * Now create the mount point directory, give it the appropriate permissions, and mount the file
system that was created.
# mkdir /oradata
# chown oracle:dba /oradata
# chmod 777 /oradata
# mount /dev/data2vg/data2lv /oradata
# df -Th /oradata
/dev/mapper/data2vg-data2lv ext3 493G 198M 467G 1% /oradata

* * Make the following entry in /etc/fstab, so it is mounted at boot time.

/dev/data2vg/data2lv /oradata ext3 defaults 1 2
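
To confirm the fstab entry works without rebooting, unmount and let mount -a pick it up (a quick
sanity check):

# umount /oradata
# mount -a          # mounts everything listed in /etc/fstab that isn't mounted yet
# df -Th /oradata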
1) Check attached LUN or SAN disk in Linux
To check the LUNs attached from a storage device in Linux, we can use the /proc/scsi/scsi file.
Its content gives you some information, but you cannot always distinguish a physically attached
drive from a LUN. Display the content as below:
# cat /proc/scsi/scsi
Attached devices:
Host: scsi0 Channel: 00 Id: 00 Lun: 00
Vendor: VMware, Model: VMware Virtual S Rev: 1.0
Type: Direct-Access ANSI SCSI revision: 02
Host: scsi0 Channel: 00 Id: 01 Lun: 00
Vendor: VMware, Model: VMware Virtual S Rev: 1.0
Type: Direct-Access ANSI SCSI revision: 02
Host: scsi0 Channel: 00 Id: 02 Lun: 00
Vendor: VMware, Model: VMware Virtual S Rev: 1.0
Type: Direct-Access ANSI SCSI revision: 02
Host: scsi2 Channel: 00 Id: 00 Lun: 00
Vendor: NECVMWar Model: VMware IDE CDR10 Rev: 1.00
Type: CD-ROM ANSI SCSI revision: 05
Host: scsi3 Channel: 00 Id: 00 Lun: 00
Vendor: LIO-ORG Model: block Rev: 4.0
Type: Direct-Access ANSI SCSI revision: 05
Host: scsi3 Channel: 00 Id: 00 Lun: 01
Vendor: LIO-ORG Model: block2 Rev: 4.0
Type: Direct-Access ANSI SCSI revision: 05
Host: scsi3 Channel: 00 Id: 00 Lun: 02
Vendor: LIO-ORG Model: rhelblock Rev: 4.0
Type: Direct-Access ANSI SCSI revision: 05
Normally, LUNs show up as entries like Host: scsi3 Channel: 00 Id: 00 Lun: 00.

Below is another example of the same file, from a system using a different storage vendor.
# cat /proc/scsi/scsi
Attached devices:
Host: scsi0 Channel: 00 Id: 00 Lun: 00
Vendor: HP 36.4G Model: MAN3367MC Rev: HP05
Type: Direct-Access ANSI SCSI revision: 02
Host: scsi2 Channel: 00 Id: 00 Lun: 00
Vendor: COMPAQ Model: HSV110 (C)COMPAQ Rev: 2003
Type: Unknown ANSI SCSI revision: 02
Host: scsi2 Channel: 00 Id: 00 Lun: 02
Vendor: COMPAQ Model: HSV110 (C)COMPAQ Rev: 2003
Type: Direct-Access ANSI SCSI revision: 02
Host: scsi2 Channel: 00 Id: 00 Lun: 03
Vendor: COMPAQ Model: HSV110 (C)COMPAQ Rev: 2003
Type: Direct-Access ANSI SCSI revision: 02
Host: scsi2 Channel: 00 Id: 01 Lun: 00
Vendor: COMPAQ Model: HSV110 (C)COMPAQ Rev: 2003
Type: Unknown ANSI SCSI revision: 02
Host: scsi2 Channel: 00 Id: 01 Lun: 02
Vendor: COMPAQ Model: HSV110 (C)COMPAQ Rev: 2003
Type: Direct-Access ANSI SCSI revision: 02
Host: scsi2 Channel: 00 Id: 01 Lun: 03
Vendor: COMPAQ Model: HSV110 (C)COMPAQ Rev: 2003

You can use the iscsiadm command (only applicable when the storage is exposed as an iSCSI
target) to get information about attached LUNs.
# iscsiadm -m session -P 3
iSCSI Transport Class version 2.0-870
version 6.2.0.873-35
Target: iqn.2017-06.com.linoxide:target1 (non-flash)
Current Portal: 172.16.20.139:3260,1
Persistent Portal: 172.16.20.139:3260,1
**********
Interface:
**********
............
............
************************
Attached SCSI devices:
************************
Host Number: 3 State: running
scsi3 Channel 00 Id 0 Lun: 0
Attached scsi disk sdd State: running
scsi3 Channel 00 Id 0 Lun: 1
Attached scsi disk sde State: running
scsi3 Channel 00 Id 0 Lun: 2
Attached scsi disk sdf State: running
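
For a terser view, the bare session list is often enough (a sketch; the exact line reflects the portal
and target IQN shown above):

# iscsiadm -m session
tcp: [1] 172.16.20.139:3260,1 iqn.2017-06.com.linoxide:target1 (non-flash)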

You can also check the path below for LUN information.


# ls /dev/disk/by-path/
ip-172.16.20.139:3260-iscsi-iqn.2017-06.com.linoxide:target1-lun-0
ip-172.16.20.139:3260-iscsi-iqn.2017-06.com.linoxide:target1-lun-1
ip-172.16.20.139:3260-iscsi-iqn.2017-06.com.linoxide:target1-lun-2
pci-0000:00:07.1-ata-2.0
pci-0000:00:10.0-scsi-0:0:0:0
pci-0000:00:10.0-scsi-0:0:0:0-part1
pci-0000:00:10.0-scsi-0:0:0:0-part2
pci-0000:00:10.0-scsi-0:0:1:0
pci-0000:00:10.0-scsi-0:0:2:0
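
Each by-path symlink resolves to a kernel device node; for example, following the lun-0 link maps
it back to the sd device seen in the iscsiadm output above (the mapping is system-specific):

# readlink -f /dev/disk/by-path/ip-172.16.20.139:3260-iscsi-iqn.2017-06.com.linoxide:target1-lun-0
/dev/sdd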
Also, try using the dmesg command:
# dmesg | grep -i "attached "
Attached scsi disk sda at scsi0, channel 0, id 0, lun 0
Attached scsi disk sdb at scsi2, channel 0, id 0, lun 2
Attached scsi disk sdc at scsi2, channel 0, id 0, lun 3
Attached scsi disk sdd at scsi2, channel 0, id 1, lun 2

2) Using the multipath command

Red Hat's default multipathing service is the multipathd daemon. The command below is from a
server that has multipathing enabled via multipathd; from its output you can check the LUN
information the OS has identified.

# multipath -v4 -ll


Jun 21 04:58:40 | loading /lib64/multipath/libcheckdirectio.so checker
Jun 21 04:58:40 | loading /lib64/multipath/libprioconst.so prioritizer
Jun 21 04:58:40 | Discover device
/sys/devices/pci0000:00/0000:00:07.1/ata2/host2/target2:0:0/2:0:0:0/block/sr0
Jun 21 04:58:40 | sr0: device node name blacklisted
Jun 21 04:58:40 | Discover device
/sys/devices/pci0000:00/0000:00:10.0/host0/target0:0:0/0:0:0:0/block/sda
................................
................................
===== paths list =====
uuid hcil dev dev_t pri dm_st chk_st vend/prod/rev dev_st
0:0:0:0 sda 8:0 -1 undef ready VMware, ,VMware Virtual S running
0:0:1:0 sdb 8:16 -1 undef ready VMware, ,VMware Virtual S running
0:0:2:0 sdc 8:32 -1 undef ready VMware, ,VMware Virtual S running
3:0:0:0 sdd 8:48 -1 undef ready LIO-ORG ,block running
3:0:0:1 sde 8:64 -1 undef ready LIO-ORG ,block2 running
3:0:0:2 sdf 8:80 -1 undef ready LIO-ORG ,rhelblock running
Jun 21 04:58:40 | directio checker refcount 6
Jun 21 04:58:40 | directio checker refcount 5
Jun 21 04:58:40 | directio checker refcount 4
Jun 21 04:58:40 | directio checker refcount 3
Jun 21 04:58:40 | directio checker refcount 2
Jun 21 04:58:40 | directio checker refcount 1
Jun 21 04:58:40 | unloading const prioritizer
Jun 21 04:58:40 | unloading directio checker
