
SUCCESS WITH LOGICAL VOLUME MANAGER

♦ Cover Page

CES2-DISTANCELVM
August 2000

Date 13-Nov-00 CES2-DISTANCELVM


Cover.doc HSD Field Development
♦ Copyrights

Notice

The information contained in this document is subject to change without notice.

HEWLETT-PACKARD PROVIDES THIS MATERIAL "AS IS" AND MAKES NO WARRANTY OF ANY KIND, EXPRESSED OR IMPLIED, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE. HEWLETT-PACKARD SHALL NOT BE LIABLE FOR ERRORS CONTAINED HEREIN OR FOR INCIDENTAL OR CONSEQUENTIAL DAMAGES (INCLUDING LOST PROFITS) IN CONNECTION WITH THE FURNISHING, PERFORMANCE OR USE OF THIS MATERIAL, WHETHER BASED ON WARRANTY, CONTRACT, OR OTHER LEGAL THEORY.

Some states do not allow the exclusion of implied warranties or the limitation or exclusion of
liability for incidental or consequential damages, so the above limitations and exclusion may
not apply to you. This warranty gives you specific legal rights, and you may also have other
rights which vary from state to state.

Hewlett-Packard assumes no responsibility for the use or reliability of its software on equipment that is not furnished by Hewlett-Packard.

This document contains proprietary information which is protected by copyright. All rights
reserved. No part of this document may be photocopied, reproduced or translated to another
program language without the prior written consent of Hewlett-Packard Company.

Copyright 2000 by HEWLETT-PACKARD COMPANY

Printing History
First Edition . . . . . . . . . . . . . April 2000

♦ Table of Contents

Table of Contents


Module 1: Introduction
Module 2: Concepts
Module 3: Rules
Module 4: Structures
Module 5: Lab
Module 6: Creating a Logical Volume
Module 7: Extending a Logical Volume
Module 8: Reducing a Logical Volume
Module 9: Removing a Logical Volume
Module 10: Creating a New VG with LVs
Module 11: Extending a VG
Module 12: Extending an LV to Another PV
Module 13: Reducing a VG
Module 14: Removing a VG
Module 15: Lab
Module 16: vgimport/vgexport
Module 17: Pvlinks
Module 18: pvmove
Module 19: Change Commands
Module 20: Lab
Module 21: Root VG Structures
Module 22: Recovery
Module 23: Lab


Module 1

Introduction


Introduction

• Why do I need to know LVM?

• What's in it for me?

Why LVM?

LVM is a fairly complex subject. Your job scope will determine how involved you become with LVM. However, regardless of your involvement, it is important for any engineer working in an HP-UX environment to understand at least the basics of LVM.

First and foremost is being able to troubleshoot and repair INTELLIGENTLY in the environment you are working in. This keeps downtime, disruption to data access, and the potential for data loss to the customer to a minimum.

Additionally, HP would like to continually improve the field's strength and knowledge so that customers perceive it as "BEST IN CLASS". The computer industry has long since moved past strictly hardware or strictly software engineers. Every engineer needs at least a basic understanding of both, with particular strength in one area or the other; it is best to be strong in both.

As HP's workforce becomes more involved in High Availability environments, LVM today is a major part of the HA products. It is not always so simple to just "replace a disk mechanism". Certain procedures MUST be understood and adhered to in order to achieve the goal of uptime and data availability without corruption.


Course Logistics

How do I take this course?

This course is delivered as a combination of distance lecture and remote labs. The course is broken into four (4) days, and each day is intended to take no more than one-half (1/2) day. Each day's session stands on its own; however, each day's topics build upon the previous day's.

The intent is to supply each participant with their own machine. The instructor will provide the appropriate information as to machine name, IP address, passwords, etc.

The labs can be performed on a local machine if desired. Be aware that the labs were written for a specific configuration, which will be explained at the beginning of each lab day.

Mentoring will be provided by your instructor, who will discuss the details.

The lecture portion is delivered over the HP Intranet using Web technology. Your instructor will explain the logistics on the first day.


Agenda

Day 1:
  Introductions
  Logistics
  LVM Basics
  Obtaining/Displaying
  Labs

Day 2:
  Creating
  Modifying
  Labs

Day 3:
  Importing/exporting
  Pvmove
  Pvlinks
  Changing
  Labs

Day 4:
  Root Volume Group
  Recovery
  Labs
  Critique

Agenda

The course is broken down into 4 days as listed on the slide. If time permits, there will be a
brief review each morning before the first topic.


Course Objectives
The overall objectives of this course are to be able to:

• Describe HP's Logical Volume Manager basic concepts and rules
• Create and remove Logical Volume Manager volume groups and logical volumes
• Use basic recovery techniques of Logical Volume Manager

The following is a more detailed list of the Student Performance Objectives (SPOs) of this
course. The student is expected to be able to:
• Describe the concepts of Logical Volume Manager
• Describe a physical volume, volume group, and logical volume
• Describe physical and logical extents and contiguous/non-contiguous extent allocation
• Identify the rules associated with LVM
• Differentiate between a boot and data physical volume
• Describe some of the LVM metadata areas
• Display LVM information using different commands
• Backup and restore LVM metadata
• Activate and deactivate volume groups
• Create, modify, and remove volume groups and logical volumes
• Import and export volume groups
• Move a logical or physical volume from one physical volume to another within a volume
group
• Describe LVM's "pvlinks" feature
• Describe and use the “change” commands
• Boot in maintenance mode
• Boot without quorum
• Describe the LVM boot process
• Use basic LVM recovery techniques


Module 2

Concepts


What is Logical Volume Manager?

[Slide: LVM is a disc MANAGEMENT subsystem, NOT a filesystem. Within the HP-UX kernel, LVM sits between the device file layer used by applications and the disc driver.]

LOGICAL VOLUME MANAGER BASICS

The Logical Volume Manager (LVM) is a disk management subsystem that lets you allocate
disk space according to the specific or projected sizes of your file systems or raw data. LVM
file systems can exceed the size of a physical disk. This feature is known as disk spanning,
because a single file system can span disks.

Using LVM, you can combine one or more disks (physical volumes) into a volume group,
which can then be subdivided into one or more logical volumes.

LVM is not a file system. It is a manager that points to the start and end of logical volume
data space for each physical disk that the logical volume happens to span.


"Soft" Partitions

LVOL1
Disk Sectioning Scheme
Boot 6

LVOL2
0 15
7

1 14

2
10 12

3 13
11
4
8
9

LVOL3
5

Logical volumes can range in size from 1 MB to 128 GB.

NOTE The size depends upon the HP-UX release. Refer to the appropriate documentation for each release to determine the value.

Maximum sizes: file system 128 GB; root 2 GB; raw data, dump, swap 2 GB.

NOTE Please refer to the /usr/share/doc/10.20RelNotes file.

• Logical volumes can be expanded or reduced in size as needs change, if NOT configured
as contiguous.

CAUTION Reducing a logical volume's size will result in data loss. Always back up logical volume data before reducing logical volume size.


Elements of LVM

[Slide: two disks, /dev/rdsk/c2t4d0 and /dev/rdsk/c2t5d0, are the physical volumes; together they form volume group VG01, which is divided into logical volumes lvol1, lvol2, and lvol3.]

• An LVM system consists of groupings of disks initialized for LVM and organized into
volume groups. A volume group might consist of one or many LVM disks; your entire
system might consist of one or several volume groups.

• While volume groups represent groupings of one or more LVM disks, logical volumes
represent subdivisions of a volume group’s total disk space into virtual disks.

• Logical volumes can encompass space on more than one LVM disk, or represent only a portion of a single LVM disk.

So then:
• Physical Volumes are combined to form
• Volume Groups, which are divided into
• Logical Volumes
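This flow can be sketched as a command sequence. The device names, volume group name, and sizes below are hypothetical examples; on a real HP-UX system these commands would be run as root, so this sketch only prints each step as a dry run rather than executing it.

```shell
# Hypothetical dry run of building an LVM stack on HP-UX.
# Each step is echoed rather than executed; device names are examples only.
run() { echo "$@"; }

run pvcreate /dev/rdsk/c2t4d0              # initialize the disk as a physical volume
run mkdir /dev/vg01
run mknod /dev/vg01/group c 64 0x010000    # VG group file: major 64, minor 0xXX0000
run vgcreate /dev/vg01 /dev/dsk/c2t4d0     # combine the PV(s) into a volume group
run lvcreate -L 100 -n lvol1 /dev/vg01     # divide the VG into a 100 MB logical volume
```

Later modules walk through each of these commands in detail.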


LVM OVERVIEW

The Logical Volume Manager (LVM) is not a filesystem but rather a disk management
subsystem that offers access to filesystems as well as features such as disk mirroring, disk
spanning, and dynamic partitioning. Since LVM is the preferred method of managing disks
on HP-UX 10.0, a discussion of it is relevant to our study of filesystems.

While it is assumed that you already have working knowledge of LVM, we will define the
basic terms used within LVM so that there is common understanding.

Physical Volume: The physical device where data is stored (e.g. a disk).

Volume Group: A set of physical and logical volumes and the data mapping that exists among them. A volume group can be thought of as a medium between the storage capabilities offered by physical volumes and the abstraction of the logical volumes. It is a storage repository in the sense that it groups together all of the physical extents of the physical volumes that are part of the volume group itself. Membership in a specific volume group allows for mapping among the logical and physical volumes of which the group is comprised. Logical volumes cannot map to physical volumes that are members of different volume groups, and each physical volume can belong to exactly one volume group.

Logical Volume: A virtual mapping of data to physical volumes or to areas within one or more physical volumes. A logical volume is a conceptual construct and not, in itself, a physical extent. Consequently, a logical volume can be conceptually viewed as a storage device without physical boundaries. It can be larger than any one physical device and can consist of several physical devices or portions of physical devices.

Mirrored Data: Refers to copies of data stored in physical extents (blocks) that map to unique logical extents. Data can be singly mirrored (one additional copy) or doubly mirrored (two additional copies). If the data is singly mirrored, two physical extents are allocated for each logical extent.


LVM Disks

[Slide: structures on the two kinds of LVM disk.]

Boot Disk: LIF Header, PVRA, BDRA, LIF Volume, VGRA, User Data Area, Bad Block Pool
Data Disk: PVRA, VGRA, User Data Area, Bad Block Pool

LVM Overview

There are two ways physical volumes can be created in LVM. One is a bootable disk and the
other is a data disk. The structures on these physical volumes will be discussed in the
Architecture section.


LVM Boot Disk

[Slide: the commands that create and/or modify each structure on an LVM boot disk.]

LIF Header, PVRA, BDRA, LIF Volume: pvcreate -B and mkboot
VGRA: vgcreate
User Data Area: lvcreate/lvextend
Bad Block Pool*: pvcreate -B

* Not used by root, swap, or dump areas

LVM Overview

The slide shows what commands will create and/or modify the different structures on an
LVM bootable disk. Notice that pvcreate -B creates the bootable partitions.


LVM Data Disk

[Slide: the commands that create and/or modify each structure on an LVM data disk.]

PVRA: pvcreate
VGRA: vgcreate/vgextend
User Data Area: lvcreate/lvextend
Bad Block Pool: pvcreate

LVM Overview

A data disk is created by using the pvcreate command without the -B option. All other
commands are the same except the mkboot command MUST NOT be used on a data disk.


LVM Spans Disks

[Slide: one volume group containing physical volumes PV1, PV2, and PV3; logical volumes lvol1 and lvol2 each have extents on all three disks.]

• File systems can be larger than a physical disk’s size.

• Logical volumes can grow and shrink allowing more efficient space usage.

• Large-scale applications, such as databases and CAD/CAE systems, whose data requirements often exceed the capacity of a single disk, can be expanded across disks.


[Slide: logical extents 1 through 7, each 4 Mb, map one-to-one onto physical extents 1 through 7.]

LVM’s Logical to Physical Extent Mapping

Extent size must be a power of 2 (range: 1 - 256 MB). Set by vgcreate -s

Logical Extent size = Physical Extent size

The Physical Extent size is the same for all Physical Volumes in the Volume Group.

The LVM subsystem maps logical extents to physical extents via a translation table that
resides on the LVM disk. When the volume group is activated, the table resides in real
memory. LVM translates incoming read and write requests to the correct address of the
physical extent, then sends the request to the corresponding physical block.

Thus, the extent serves as a translation mechanism between the incoming request and
underlying device drivers.

By default, LVM selects available physical extents from LVM disks in the order the disks
were added to the volume group.
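The extent arithmetic can be worked through directly. This is a sketch, not HP-UX code; it assumes the default 4 MB extent size shown on the slide, and the requested size and offset values are made up for illustration. A request that is not an exact multiple of the extent size is rounded up to whole extents, and an offset into the logical volume resolves to a logical extent index by integer division.

```shell
# Sketch of LVM extent arithmetic, assuming a 4 MB extent size.
pe_size_mb=4

# A request such as lvcreate -L 50 is rounded up to whole extents:
req_mb=50
extents=$(( (req_mb + pe_size_mb - 1) / pe_size_mb ))   # ceiling division
actual_mb=$(( extents * pe_size_mb ))
echo "$req_mb MB requested -> $extents extents = $actual_mb MB allocated"

# An offset into the logical volume resolves to a logical extent index:
offset_mb=9
le_index=$(( offset_mb / pe_size_mb + 1 ))              # extents numbered from 1
echo "offset $offset_mb MB falls in logical extent $le_index"
```

The translation table then maps that logical extent index to a physical extent on some disk in the volume group.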


Non-Contiguous Allocation

[Slide: the logical extent table for lvol1 maps LEs 1 through 7 onto non-adjacent PEs among the ten PEs of a physical volume. Note: shaded PE's in the PV represent lvol1.]

• Non-contiguous disk space allocation means that the logical extents of a logical volume
need not be mapped to adjacent physical extents

• This is the default allocation policy.

• Gaps are allowed between allocated Physical Extents (PE’s).

• PE’s can be taken from other disks in the Volume Group.


Non-Contiguous Allocation Policy

[Slide: a logical volume's PEs are spread across /dev/rdsk/c0t0d0, /dev/rdsk/c0t5d0, and /dev/rdsk/c0t4d0. Note: a logical volume can be spread over several PV's in a VG.]

• PE’s may be on any disk in the Volume Group.

• PE’s may be allocated by chance or selected through use of the lvextend command.

• To limit the allocation to specific physical volumes, specify the physical volume names as
pv_path arguments or specify the physical volume group names as pvg_name
arguments.

EXAMPLE:

To extend a logical volume onto a particular physical volume:

# lvextend /dev/vg01/lvol1 /dev/dsk/c0t5d0


Non-Contiguous Allocation Policy

[Slide: with default allocation, most of the PEs are taken from /dev/rdsk/c0t0d0 while /dev/rdsk/c0t5d0 and /dev/rdsk/c0t4d0 go under-used.]

• LVM allocates the first free PE’s from the Volume Group.
– SAM and lvcreate use this default behavior.
– Some disks may be under-used or even not used at all.

• Use lvextend to distribute the Logical Volume more evenly.


Contiguous Allocation

[Slide: logical extents 1 through 7 map to physical extents 1 through 7 in ascending order with no gaps.]

• Created using lvcreate -C y or lvchange -C y

• There cannot be any gaps between its allocated PE’s.

• The PE’s must be allocated in ascending, numerical order.

• A contiguous Logical Volume cannot span Physical volumes.

• Contiguous disk space allocation means that the logical extents (LE) of a logical volume
must be mapped to adjacent physical extents (PE) on an LVM disk.

• Disk space used by the root logical volume must be contiguous. That is, physical extents
mapping to the logical volumes of the root file system, primary swap, and dump must
each be contiguous. Contiguous extents also adhere to the following requirements:

– Physical extents must be allocated in ascending order.
– No gap may exist between physical extents.
– When mirrored, all physical extents of a mirrored copy must reside on the same disk.

• Contiguous allocation is less flexible than non-contiguous allocation and therefore uses
disk space less economically. Non-contiguous mapping can result in a logical volume’s
physical extents being dispersed onto more than one LVM disk, since logical volumes can
span multiple disks.
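The ordering requirements above can be captured in a small check. This is a sketch, not an LVM tool, and the PE numbers are illustrative: given the list of PE numbers backing a logical volume, verify they ascend by one with no gaps.

```shell
# Sketch: verify a PE list satisfies the contiguous-allocation rules
# (ascending order, no gaps). PE numbers here are illustrative only.
is_contiguous() {
    prev=""
    for pe in $1; do
        if [ -n "$prev" ] && [ "$pe" -ne $((prev + 1)) ]; then
            return 1    # out of order or a gap: not contiguous
        fi
        prev=$pe
    done
    return 0
}

is_contiguous "1 2 3 4 5 6 7" && echo "contiguous"
is_contiguous "1 2 4 5"       || echo "gap: not contiguous"
```

A check like this is what makes a contiguous logical volume unable to span physical volumes: a second disk's extents cannot continue the first disk's sequence.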


Module 3

Rules


Logical Volume Manager Rules

1) A disk drive must be dedicated exclusively to LVM

[Slide: a conventionally sectioned disk (e.g. section 7, section 11) versus an LVM disk, which uses only section 0, the entire disk.]

LOGICAL VOLUME MANAGER RULES

• A disk drive must be dedicated exclusively to LVM. Using any section other than section
0 is not supported.


Logical Volume Manager Rules

2) A disk drive can be a member of only one volume group

[Slide: a disk cannot be a member of Volume Group X and Volume Group Y at the same time.]

LOGICAL VOLUME MANAGER RULES (Continued)

• A disk drive can be a member of only one volume group.

• The maximum number of logical volumes in a volume group is 255. (Range is 1 - 255; default is 255; set using vgcreate -l.)

• The maximum number of physical extents allowed per physical volume is 65,535. (Range is 1 - 65,535; default is 1016; set using vgcreate -e.)

• The maximum number of physical volumes per volume group is 255. (Range is 1 - 255; default is 16; set using vgcreate -p.)

The maximum number of volume groups is set in the kernel by the maxvgs parameter, which defaults to 10. To change this, a new kernel must be generated with the desired value; it can be increased to as high as 256 if necessary.
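These limits interact with the extent size: the defaults cap how much of each physical volume LVM can address. A quick check (a sketch, assuming the default of 1016 extents per PV quoted above and the 4 MB extent size used in the earlier examples):

```shell
# With vgcreate defaults of 1016 extents per PV and a 4 MB extent size,
# LVM can address at most 1016 * 4 MB of each physical volume.
max_pe=1016
pe_size_mb=4
per_pv_mb=$((max_pe * pe_size_mb))
echo "addressable per PV: $per_pv_mb MB"   # larger disks need -e or -s raised
```

Roughly 4 GB per disk under the defaults; for larger disks, raise the per-PV extent count (vgcreate -e) or the extent size (vgcreate -s) when the volume group is created.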


Logical Volume Manager Rules

3) Volume group must have "quorum"

[Slide: a volume group containing PV1 and PV2. Should PV1 fail, quorum isn't present.]

LOGICAL VOLUME MANAGER RULES (Continued)

• More than half of the configured LVM disks in a volume group must be present to change or activate that volume group. Quorum is checked both during configuration changes (for example, when creating a logical volume) and at state changes (for example, if a disk fails). If quorum is not maintained, LVM will not acknowledge the change. To override the quorum check, use vgchange with the -q n option. If quorum is not met on the root volume group, the system will not boot.

Use ISL> hpux -lq to boot the system when quorum isn't present.

• Any time a change is made to the root volume group (i.e., root, swap, dump, or file system), the Boot Data Reserved Area (BDRA) must be updated using the lvlnboot command. Failure to do this may result in an unbootable system.
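The quorum test itself is simple arithmetic: strictly more than half of the configured disks must be present. A sketch (the disk counts are examples):

```shell
# Quorum rule: strictly more than half of the VG's configured disks
# must be present for activation or configuration changes.
has_quorum() {
    present=$1; configured=$2
    [ $((2 * present)) -gt "$configured" ]
}

has_quorum 2 3 && echo "2 of 3 disks: quorum held"
has_quorum 1 2 || echo "1 of 2 disks: no quorum (vgchange -q n to override)"
```

Note the two-disk case on the slide: losing either disk leaves exactly half, which is not "more than half", so quorum fails.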


Module 4

Structures


Metadata

[Slide: LVM metadata on a disk (LIF Header, PVRA, BDRA, LIF Volume, VGRA, User Data Area, Bad Block Pool) alongside file system metadata (primary super block, redundant super blocks, cylinder groups 1 through n with cylinder group information, the inode table, and data blocks).]

METADATA

User Data is file data that is manipulated in some form (e.g. editing a file, executing a command, etc.).

Metadata is information (data) that the ordinary user does not see or access, at least not in the fashion of user data, but that provides information to the system for storing and retrieving user data.

HP-UX file system metadata always includes superblocks, inodes, and data blocks. Other structures depend upon the file system type.

Like a file system, LVM has metadata structures which reside on a physical volume but are NOT visible unless special commands are used to display them. These commands are typically only executable by root (superuser).

Without metadata, the system would not be able to locate or store any files on a physical volume, regardless of whether it is LVM, whole disk, disk partition, etc. Additionally, file system metadata is placed on top of the LVM metadata.


LVM Physical Disk Layout

[Slide: the PVRA consists of the LVM Record and the Bad Block Directory; the VGRA consists of the Volume Group Descriptor Area, the Volume Group Status Area, and the Mirror Consistency Records. Duplicates of the LVM Record, VGRA information, and Bad Block Directory sit past the user data area, near the bad block pool.]

LVM METADATA

All physical volumes in an LVM configuration will have a PVRA (Physical Volume
Reserved Area) and a VGRA (Volume Group Reserved Area).

The PVRA contains an LVM Record section, which holds the physical volume metadata for this physical volume. The Bad Block Directory section records any bad blocks, or sectors, found on this physical volume.

The VGRA contains 3 sections: the Volume Group Descriptor Area, which contains volume group information; the Volume Group Status Area, which contains the status of the volume group; and the MCR, which contains information about mirrored extents.

More information on these is in the next few slides.

The PVRA and VGRA areas have duplicate records of their sections.


LVM PVRA Disk Structures

[Slide: fields of the LVM Record within the PVRA: lvm_id, pv_id, vg_id, last_psn, pv_num, vgra_len/psn, vgda_len/psn, vgsa_len, mcr_len/psn, usr_data info, alt_pool_info, max_defects, reserved, and BDRA info.]

LVM PVRA METADATA

The LVM Record portion of the PVRA contains specific information about this physical
volume. Many of these fields are displayed by the pvdisplay command.

Each physical volume contains a pv_id field. We will call this the PVID. The PVID is made
up of two words. The first word is the CPU SWID (software ID) of the machine that
performed the pvcreate command. The second word is a timestamp of the exact second
the pvcreate command was performed.

The vg_id field, henceforth called the VGID, is like the PVID: it is made up of two words, the SWID of the CPU that performed the vgcreate command and a timestamp. It is possible that one machine did the pvcreate and another did the vgcreate. This is OK.

The same VGID appears on all physical volumes in a volume group; however, each physical volume should have a unique PVID.

The last_psn (last physical sector number) field can be used to calculate the entire size of this physical volume: multiply this number by 1024 to obtain the size in bytes.
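For example (a sketch; the last_psn value here is invented), since each sector is 1024 bytes, the value converts to a capacity as follows:

```shell
# Hypothetical last_psn value; sectors are 1024 bytes, so the physical
# volume size is last_psn * 1024 bytes (i.e. last_psn KB).
last_psn=2097152
size_kb=$last_psn                 # 1 sector = 1 KB
size_mb=$((size_kb / 1024))
echo "PV size: $size_mb MB"
```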

The pv_num, or physical volume number, field indicates the order in which this physical
volume was added to the volume group. A 0 indicates this was the first physical volume
added to the volume group.

Additionally, the PVRA contains pointers to the other LVM metadata areas.

There are several other fields which are not covered in this course.

LVM PVRA Disk Structures

[Slide: each Bad Block Directory entry pairs a defect reason and defect PSN with a status and alternate PSN; for example, defective sector 423 relocated to sector 1724 in the bad block pool.]

BAD BLOCK RELOCATION

Every request that returns to LVM is checked for media failures. If a failure is encountered, LVM will first send down a request asking the driver to attempt relocation. If this fails, it will relocate the buffer itself from an internal pool of extra blocks. These blocks comprise the last area of the physical volume, falling behind the last extent. An area on the disk is reserved as a directory which contains the mapping of bad blocks as well as state information for each. This area is called the bad block directory.

Each entry in the bad block directory (bbdir) is of type lv_bblk. Each of these fields has four bits of flags and 28 bits containing the block address. The defect_reason field records the reason for the defect and the original block that was in error. Valid reasons are:

Name            Value   Meaning
DEFECT_MFR      0x01    Found by manufacturer
DEFECT_DIAG     0x0A    Found by diagnostics
DEFECT_SYS      0x0B    Found by the LVM driver
DEFECT_MFRTST   0x0C    Found by manufacturer testing

Currently the HP-UX kernel only uses DEFECT_SYS.
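Given the 4-bit flag / 28-bit address split described above, an entry decodes with shifts and masks. This is a sketch of the bit layout, not kernel code, and the entry value is invented:

```shell
# Decode a hypothetical bad-block-directory word: the high 4 bits are
# flags, the low 28 bits are the block address.
entry=$(( (0x0B << 28) | 423 ))      # DEFECT_SYS, defective block 423
flags=$(( (entry >> 28) & 0xF ))
addr=$((  entry & 0x0FFFFFFF ))
printf "reason=0x%X block=%d\n" "$flags" "$addr"
```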

The alternate status records the relocated block number and the current status of that block. Valid status values for the high-order 4 bits are:


Name          Value   Meaning
REL_DONE      0x0     Software relocation completed
REL_PENDING   0x1     Software relocation in progress
REL_DEVICE    0x2     HW relocation requested
REL_DESIRED   0x8     Relocation desired

Bad Block Detection and Recovery

When LVM encounters a bad block, it isolates the defective block to a region on disk of
DEV_BSIZE. Then an entry in the bad block directory is grabbed to begin the relocation
procedure.


LVM Physical Disk Layout

[Slide: within the boot area, the BDRA holds a magic number and timestamp, the PVs in the root VG, the root VG number, the root LV numbers, the root PV list, the root, swap, and dump logical volumes, state information, and the PVol list, followed by a duplicate BDRA and PVol list.]

LVM METADATA - BDRA

All LVM boot disks contain an area called the “Boot Data Reserved Area (BDRA)”.

The Boot Data Reserved Area (BDRA):

• Contains the locations and sizes of the disks in the root volume group
• Knows the locations of the boot, root, primary swap, and dump logical volumes
• Supplies the kernel the information to configure root and primary swap at boot-up

It is maintained using lvlnboot and lvrmboot.

Bad Block relocation is not supported on the boot, root, and primary swap logical volumes.

To boot the system, the kernel activates the volume group to which the system’s root logical
volume belongs. The location of the root logical volume is stored in the boot data reserved
area (BDRA). The boot data reserved area contains the locations and sizes of LVM disks in
the root volume group and other vital information to configure the root, primary swap, and
dump logical volumes, and to mount the root file system.

The BDRA contains the following records about the system’s root logical volumes:
timestamp (indicating when the BDRA was last written), checksum for validating data, root
volume group ID, the number of LVM disks in the root volume group, a list of the hardware
addresses of the LVM disks in the root volume group, indices into that list for finding boot,
root, swap, and dump, and information needed to select the correct logical volumes for root,
primary swap, and dumps.


LVM Boot Disk LIF Volume

[Slide: the LIF volume on an LVM boot disk contains ISL, HPUX, AUTO, and LABEL; the LIF volume header points to these files.]

LVM METADATA – LIF AREAS

The boot LIF area is in two parts: the LIF header and the LIF volume.

The LIF header contains pointers to the files in the LIF volume.

The LIF volume contains:

• ISL - Initial System Loader
• HPUX - kernel loader
• AUTO - contains the autoboot string
• LABEL - used by HPUX to locate the root logical volume during a normal boot. Updated by the lvlnboot and lvrmboot commands.

These areas are created only by the mkboot command.

CAUTION dd and lifcp should not be used to create a boot area. LVM information
(PVRA, BDRA) will be overwritten!


LVM VGDA Disk Structures

[Slide diagram: disk layout — LIF Header, PVRA, BDRA, LIF Volume, VGRA, User Data Area, (Bad Block Pool) — with the VGRA expanded to show the Volume Group Descriptor Area (VG header, LV entry [1] … LV entry [maxlvs], PV entry [1] … PV entry [maxpvs], VG trailer), the Volume Group Status Area, the Mirror Consistency Record, and duplicate info.]

LVM METADATA – VGRA’S VGDA

The Volume Group Descriptor Area (VGDA) contains information regarding the individual
logical volumes. Each logical volume is assigned an entry number upon creation. The first
LV ENTRY will map to a minor number of 0xXX0001, where the XX is the assigned
volume group number. Notice the last byte is 01 for LV ENTRY number one (1).

The physical volumes on which the logical volumes reside are also listed in the VGDA.

All of this information is on all of the physical volumes in the volume group.

The Volume Group ID (VGID) is also part of the VGDA.


LVM VGSA Disk Structures

[Slide diagram: same disk layout, with the VGRA expanded to show the Volume Group Status Area: vgsa_magic, timestamp, Max PEs, Max PVs, PV Missing array data pointers, and PV Stale array data pointers.]

LVM METADATA – VGRA’S VGSA

The Volume Group Status Area (VGSA) contains the current status of physical volumes and
extents for the volume group. Additionally, it records the maximum number of physical volumes
and of physical extents per physical volume allowed in this volume group.


Volume Group Identifier (VGID)

[Slide diagram: the VGID is built from the machine Software ID (swid) and a timestamp, and is stored in the PVRA and VGRA of each disk in the volume group (e.g. vg01).]

The Volume Group Identifier (VGID) consists of two (2) words. The first word is the
machine software ID (SWID) stored in Stable Storage. This string should be unique among
ALL HP computers since it is keyed from the model number and serial number of a machine.

The second word is a timestamp taken when the vgcreate command was run. The
VGID is stored in the PVRA and VGRA LVM metadata areas, and is also recorded in the
/etc/lvmtab file for each individual volume group on the system.

If the VGID in the /etc/lvmtab file does NOT match the VGID for a volume group, that
volume group will NOT activate.


LVM "/etc/lvmtab"

  Version
  Zero (0x0000)
  Volgrp Count
  Reserved
  Repeated for each Volgrp:
    Volgrp Pathname
    VG ID
    VG State
    Incarnation #
    Pvol Count
    Reserved
    Repeated for each Pvol:
      Pvol Pathname
      Reserved

LVMTAB FILE

LVM maintains a file that records information about the volume groups defined on the
system. This file is called /etc/lvmtab. The data from this file is loaded into memory and
written back out when changes are made to the LVM configuration.

Note that the kernel itself knows nothing of the lvmtab file or the in-core copy of its data.
It is the lvm commands that access this data. Each command reads the lvmtab into memory
to carry out the desired task and then writes back any changes.

The slide shows the layout of the lvmtab file on disk. The file begins with some header
information.

vers_no
zero
vg_cnt
reserved

The vers_no for lvmtab will always be LVMTAB_VERSNO (#1000). Prior to 10.0, this
field was an integer which caused the version number to be stored in the second half of the
word. To distinguish a 10.0 lvmtab from pre-10.0, the vers_no is now a short and the
second half of the word will always be zero. If we try to read a pre-10.0 version of lvmtab
it will fail because the vers_no will come back as zero.
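The short-versus-int distinction above can be demonstrated with Python's struct module (an illustrative sketch of the byte layout, not HP-UX code):

```python
import struct

# A 10.0-style lvmtab header word: vers_no as a big-endian 16-bit short
# (LVMTAB_VERSNO = 1000) followed by a zero short.
new_word = struct.pack(">hh", 1000, 0)

# A pre-10.0 header stored the version as a full 32-bit int, which puts
# the value in the second half of the word.
old_word = struct.pack(">i", 1000)

# Reading the first short of each word distinguishes the two formats:
print(struct.unpack(">h", new_word[:2])[0])  # 1000 (valid 10.0 lvmtab)
print(struct.unpack(">h", old_word[:2])[0])  # 0    (pre-10.0: read fails)
```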


The vg_cnt field is the count of the number of volume groups defined in /etc/lvmtab.

Volume Group Information

The header is followed by a volume group entry for each volume group defined. This
includes:

vg_path
vg_id
vg_state
vg_incno
pv_cnt
reserved

The volume group records begin with the pathname for the volume group in vg_path. This
is the device file name for the volume group, such as /dev/vg00. The state of the volume
group is recorded in vg_state with possible values of

0x0 STANDARD
0x1 SHARED
0x2 EXCLUSIVE

The count for the number of physical volumes in the volume group is recorded into pv_cnt.

Physical Volume Information

Each volume group entry will also have a repeated physical volume entry for each pvol
belonging to the volume group

pv_path
reserved

The path name for the physical device file is recorded here in pv_path.

The contents of the lvmtab file are normally observed using the strings command; however,
xd can be used to get a hexadecimal dump. This can be used to find the VGID. xd -c will
give an ASCII dump.


ioscan

# ioscan -fnCdisk

/dev/dsk/cXtYdZ
  X = Card Instance
  Y = Target Address
  Z = LUN Number

IOSCAN

ioscan scans a system’s I/O structure and displays the paths and devices seen. There are
several options to ioscan.

I/O cards and devices are placed into classes, including mass storage, MUX (tty),
ext_dev, lan, processor, and memory, among others. Each device is assigned an
Instance number beginning with 0. The Instance number is used in the device’s special file(s)
to access the device.

For mass storage devices, each device in the class is assigned a unique Instance number;
however, when choosing the appropriate instance number for the device’s special file(s), the
CARD Instance the device is attached to is used. An easy way to obtain a device’s proper
special file Instance number is

# ioscan -fnCdisk

The options for ioscan in this case are:

f - full listing
n - show device files
Cdisk - display only the disk class


Displaying LVM Information

The contents of '/etc/lvmtab'


# strings /etc/lvmtab

Displays volume group to physical volume relationships

strings - find the printable strings in an object or other binary file

To see the contents of /etc/lvmtab type:

# strings /etc/lvmtab
(Displays volume group/physical volume relationships)

vg00
/dev/dsk/c0t6d0
/dev/dsk/c0t5d0
vg01
/dev/dsk/c0t4d0
/dev/dsk/c0t3d0
/dev/dsk/c0t2d0
/dev/dsk/c0t1d0

At the heart of the LVM configuration is the /etc/lvmtab file, which is read by all LVM
commands. /etc/lvmtab is not readable or editable on-screen.

The /etc/lvmtab file is run-time generated; that is, it is generated the first time you create
an LVM entity using SAM or commands such as vgcreate, and updated every time you
change the LVM configuration. Every configuration update or query reads the
/etc/lvmtab file.


LVM disk file names are recorded in /etc/lvmtab. If the /etc/lvmtab file is destroyed,
all configuration operations involving LVM data structures become impossible.

You can recover the /etc/lvmtab file using vgscan.

WARNING Use vgscan with caution. It can cause problems when a volume group is
missing a physical volume, when physical volume links become reversed, etc.

To list any swap areas contained in logical volumes, type:

# swapinfo

             Kb     Kb     Kb  PCT  START/      Kb
TYPE      AVAIL   USED   FREE USED   LIMIT RESERVE PRI  NAME
dev       48560   4120  44440   8%       0       -   1  /dev/dsk/c0t5d0
dev       10240      0  10240   0%       0       -   1  /dev/vg00/lvol1

You must use the swapinfo command to see the swap area since bdf -b doesn’t show it.


Displaying LVM Information

For Physical Volumes


# pvdisplay [-v] /dev/dsk/cCtTdD

Displays information about the physical volume(s) specified

FOR PHYSICAL VOLUMES

# pvdisplay [-v] /dev/dsk/cxtydz

(Displays LVM physical volume information for the specified disk(s))


Example:

# pvdisplay /dev/dsk/c0t6d0 /dev/dsk/c0t5d0

--- Physical volumes ---


PV Name /dev/dsk/c0t6d0
VG Name /dev/vg01
PV Status available
Allocatable yes
VGDA 2
Cur LV 1
PE Size (Mbytes) 8
Total PE 254
Free PE 0
Allocated PE 254
Stale PE 0
IO Timeout (Seconds) default
PV Name /dev/dsk/c0t5d0
VG Name /dev/vg01
PV Status available
Allocatable yes
VGDA 2
Cur LV 1
PE Size (Mbytes) 8
Total PE 254
Free PE 0
Allocated PE 254
Stale PE 0
IO Timeout (Seconds) default
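As a sanity check on the figures above, the key pvdisplay fields can be parsed and cross-checked in a short Python sketch (the input is a simplified excerpt of the real output, for illustration only):

```python
# Parse a simplified excerpt of the pvdisplay example above and check
# that Allocated PE + Free PE accounts for Total PE on each disk.
sample = """\
PV Name /dev/dsk/c0t6d0
Total PE 254
Free PE 0
Allocated PE 254
PV Name /dev/dsk/c0t5d0
Total PE 254
Free PE 0
Allocated PE 254
"""
records, current = [], None
for line in sample.splitlines():
    key, value = line.rsplit(None, 1)
    if key == "PV Name":
        current = {"name": value}
        records.append(current)
    else:
        current[key] = int(value)

for pv in records:
    assert pv["Allocated PE"] + pv["Free PE"] == pv["Total PE"]
print(len(records))   # 2
```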


Displaying LVM Information

For Volume Groups


# vgdisplay [-v] /dev/vg*

Displays information about all the volume groups

FOR VOLUME GROUPS


# vgdisplay /dev/vg*

This command displays a short list of information on all volume groups


# vgdisplay -v /dev/vgroot

The list contains information about the volume groups, logical volumes, and physical
volumes currently configured. Look at the following listing for a small system:
--- Volume groups ---
VG Name /dev/vgroot
VG Status available
Max LV 255 ← max. no. of LVs allowed in VG
Cur LV 4 ← current no. of LVs configured
Open LV 4
Max PV 16 ← max. no. of PVs allowed
Cur PV 2 ← current no. of PVs configured
Act PV 2 ← actual no. of PVs activated
Max PE per PV 1016
VGDA 4
PE Size (Mbytes) 4
Total PE 479 ← total no. of PEs in VG
Alloc PE 352 ← total allocated PEs
Free PE 127 ← total no. of free PEs
Total PVG 0


--- Logical volumes ---


LV Name /dev/vgroot/lvroot
LV Status available/syncd
LV Size (Mbytes) 92
Current LE 23
Allocated PE 46
Used PV 2
LV Name /dev/vgroot/usr
LV Status available/syncd
LV Size (Mbytes) 152
Current LE 38
Allocated PE 76
Used PV 1
LV Name /dev/vgroot/swap
LV Status available/syncd
LV Size (Mbytes) 152
Current LE 38
Allocated PE 38
Used PV 1
LV Name /dev/vgroot/scaff
LV Status available/syncd
LV Size (Mbytes) 400
Current LE 100
Allocated PE 100
Used PV 2

--- Physical volumes ---


PV Name /dev/dsk/c0t6d0
PV Status available
Total PE 157
Free PE 0
PV Name /dev/dsk/c0t5d0
PV Status available
Total PE 322
Free PE 127

In the example output shown above, you can see:

One volume group: /dev/vgroot


Four logical volumes: /dev/vgroot/lvroot, /dev/vgroot/usr,
/dev/vgroot/swap, /dev/vgroot/scaff
Two disks (physical volumes): /dev/dsk/c0t6d0, /dev/dsk/c0t5d0

The vgdisplay -v output also shows how large the logical volumes are and how the data
they contain is allocated to the disks in terms of extents.
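The extent arithmetic in the output above can be verified directly: with a PE size of 4 MB, each LV size equals Current LE × PE size, and the Allocated PE / Current LE ratio hints at the number of physical copies. The output does not state mirroring explicitly, so treating that ratio as a copy count is an illustrative inference:

```python
# Consistency check on the vgdisplay -v example: PE Size is 4 MB.
pe_size_mb = 4
lvs = {
    "lvroot": {"size_mb": 92,  "le": 23,  "pe": 46},
    "usr":    {"size_mb": 152, "le": 38,  "pe": 76},
    "swap":   {"size_mb": 152, "le": 38,  "pe": 38},
    "scaff":  {"size_mb": 400, "le": 100, "pe": 100},
}
for name, lv in lvs.items():
    # LV Size = Current LE * PE size for every logical volume.
    assert lv["le"] * pe_size_mb == lv["size_mb"]
    # Allocated PE / Current LE suggests how many physical copies exist.
    print(name, "copies:", lv["pe"] // lv["le"])
```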


Displaying LVM Information

For Logical Volumes

# lvdisplay [-v] /dev/vg*/lvol*

Displays information about all the logical volumes in all the volume groups

FOR LOGICAL VOLUMES

# lvdisplay [-v] /dev/vg*/lvol*

(Displays all logical volumes in all volume groups)

# lvdisplay /dev/vg01/usr

--- Logical volumes ---


LV Name /dev/vg01/usr
VG Name /dev/vg01
LV Permission read/write
LV Status available/syncd
Mirror copies 0
Consistency Recovery MWC
Schedule parallel

LV Size (Mbytes) 100


Current LE 25
Allocated PE 25
Bad block on
Allocation strict

NOTE If you use the -v option with lvdisplay to display detailed information
about one or more logical volumes, it’s helpful to pipe the command’s output
to more.


lvdisplay -v /dev/vg01/usr | more

Along with the general information about the logical volume, you can see the distribution of
its logical extents on physical volumes, how each logical extent corresponds to each physical
extent, and the status of that physical extent, current or stale, if it is a mirror copy.

For example, a verbose display might include something like:

# lvdisplay -v /dev/vg01/usr

--- Logical volumes ---


LV Name
.
.

--- Distribution of logical volume ---


PV Name LE on PV PE on PV
/dev/dsk/c0t6d0 25 25

--- Logical extents ---


LE PV1 PE1 Status 1
0000 /dev/dsk/c0t6d0 0028 current
0001 /dev/dsk/c0t6d0 0029 current
0002 /dev/dsk/c0t6d0 0030 current
0003 /dev/dsk/c0t6d0 0031 current
0004 /dev/dsk/c0t6d0 0032 current
0005 /dev/dsk/c0t6d0 0033 current
0006 /dev/dsk/c0t6d0 0034 current

In this listing, we can see that there are 25 logical extents in /dev/vg01/usr. Under the
Distribution heading, we can see that there is one disk, /dev/dsk/c0t6d0 (under PV
Name), that has the copies of the logical extents.

In the listing “--- Logical extents ---” we can see exactly how the data in the
logical volume are distributed to physical extents. Logical extent (LE) 0000 corresponds to
physical extent (PE1) 0028. If the data had been mirrored, there would be other columns
headed PE2 and PE3 to show where the other physical copies are located.
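The LE-to-PE table above can be modeled as a small data structure. This is a sketch only: extent_map is an invented helper, and the placement of mirror copies (the hypothetical /dev/dsk/cXtYdZ entries) is decided by LVM, not by this code:

```python
# Sketch of the LE-to-PE mapping shown above: each logical extent of
# /dev/vg01/usr maps to one physical extent on /dev/dsk/c0t6d0,
# starting at PE 0028. With mirroring there would be a second (PE2)
# and possibly a third (PE3) copy per logical extent.
def extent_map(pv, first_pe, count, mirrors=0):
    """Return rows of (LE, [(PV, PE, status), ...])."""
    rows = []
    for le in range(count):
        copies = [(pv, first_pe + le, "current")]
        for _ in range(mirrors):
            # Real mirror placement is chosen by LVM; this entry only
            # illustrates the extra PE2/PE3 columns.
            copies.append(("/dev/dsk/cXtYdZ", le, "current"))
        rows.append((le, copies))
    return rows

rows = extent_map("/dev/dsk/c0t6d0", 28, 25)
print(rows[0])   # (0, [('/dev/dsk/c0t6d0', 28, 'current')])
```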


lvlnboot
[Slide diagram: lvlnboot -R copies the contents of the BDRA to the LABEL file in the boot LIF volume; the -r, -s, -d, and -b options record the root (/), swap, dump, and boot (/stand, vmunix) logical volumes in the BDRA and LABEL.]

LVLNBOOT

To display kernel devices on LVM Bootable Disks

# lvlnboot -v
(Displays root, primary swap, and dump logical volumes)

Example:

# lvlnboot -v /dev/vg00

Boot Definitions for Volume Group /dev/vg00:


Physical Volumes belonging in Root Volume Group:
/dev/dsk/c0t6d0 -- Boot Disk
/dev/dsk/c0t5d0 -- Boot Disk
/dev/dsk/c0t4d0
/dev/dsk/c0t3d0 -- Boot Disk
Boot: lvol1 on: /dev/dsk/c0t6d0
/dev/dsk/c0t5d0
Root: lvol3 on: /dev/dsk/c0t6d0
/dev/dsk/c0t5d0
Swap: lvol2 on: /dev/dsk/c0t6d0
/dev/dsk/c0t5d0
Dump: lvol2 on: /dev/dsk/c0t5d0, 0

The physical volumes (LVM disks) designated “Boot Disk” are bootable, having been
initialized with mkboot and pvcreate -B.


The multiple lines under Boot, Root, and Swap reveal that those logical volumes are being
mirrored.

Notice that lvol2 is being used for both swap and dump, but that mirroring applies only to
swap.

The LABEL file and the BDRA are created and updated by lvlnboot:


• -b updates the boot (/stand) logical volume definition.
• -r updates root logical volume definition.
• -s updates swap logical volume definition.
• -d updates dump logical volume definition.

NOTE All the above options update the BDRA and then the LABEL file

• -R updates the LABEL files on all boot disks in the specified volume group with the
contents of the BDRA.

NOTE This option makes no changes to the contents of the BDRA: if it’s empty, it
will still be empty until one of the -b, -r, -s, or -d options is used.

There is also a -c option. This creates a file, /stand/rootconf, which is used for
Maintenance Mode boots (ISL> hpux -lm) on systems with separate boot (/stand)
and root (/) logical volumes. If it is missing, Maintenance Mode boots will not work. This
option is automatically run at normal boot time.


LVM Special Device Files

• Physical Volume
– /dev/[r]dsk/cXtYdZ
• Volume group
– group control file
• character mode, major 64, minor 0xYY0000
– logical volume files
• character and block mode, major 64, minor
0xYY00ZZ
Where: YY = volume group number (e.g. 00, 01, 02, …)
ZZ = logical volume entry number (e.g. 01, 02, …)

LVM SPECIAL DEVICE FILES

Physical Volumes

/dev/[r]dsk/cxtydz

Key Operation
[r] If present, indicates character (raw) access
x Card Instance number
y Target controller address
z Logical Unit Number (LUN)
NOTE The /etc/lssf command will only list Physical Volume device file
information. For all LVM device files in /dev/vg* use ll.

Example:

# lssf /dev/dsk/c0t5d0

sdisk card instance 0 SCSI target 5 LUN 0 section 0 at 8/0.5.0 /dev/dsk/c0t5d0


Volume Group

/dev/vgXX/group

Key Operation
XX Volume group number (00...255)
group Must be called group

Logical Volume

/dev/vgXX/[r]lvolY

Key Operation
XX Integer volume group number (0...255)
[r] If present, indicates character (raw) access
Y Integer logical volume number (1...255)

NOTE For logical volumes which contain raw data, a name is recommended because
this helps to quickly identify the contents. Its name can be specified with
lvcreate -n and must not exceed 13 characters.

For example:

/dev/vgdatabase/rlvaccounts

Example:

# ll /dev/vg02

crw-rw-rw- 1 root root 64 0x020000 Sep 21 10:59 group


brw-r----- 1 root root 64 0x020001 Sep 21 10:59 lvol1
crw-r----- 1 root root 64 0x020001 Sep 21 10:59 rlvol1

Key Operation
64 Major number always 64
02 HEXADECIMAL volume group number
0000 always zeroes for group
01 HEXADECIMAL logical volume number
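Going the other direction, the minor numbers in the ll output above can be decoded in a short Python sketch (split_minor is a hypothetical helper for illustration, not an HP-UX interface):

```python
# Decode the 0xYY00ZZ minor numbers from the ll output above.
# split_minor is a hypothetical helper, not an HP-UX interface.
def split_minor(minor):
    vg = (minor >> 16) & 0xFF   # hexadecimal volume group number
    lvol = minor & 0xFF         # hexadecimal logical volume number
    return vg, lvol

print(split_minor(0x020000))  # (2, 0)  -> vg02 group file
print(split_minor(0x020001))  # (2, 1)  -> vg02 lvol1/rlvol1
```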


vgcfgbackup - metadata backup


• vgcfgbackup <vgname>

[Slide diagram: vgcfgbackup vg01 reads the LVM metadata for volume group vg01 from its physical volumes (c0t5d0, c1t4d0, c1t4d1, holding lvol1–lvol4) and writes it to /etc/lvmconf/vg01.conf.]

VGCFGBACKUP

vgcfgbackup is used to backup LVM metadata structures for a volume group. It places
data into a User Data file. The default directory location is /etc/lvmconf. The default
filename is <vg_name>.conf.

vgcfgbackup is AUTOMATICALLY performed by any command that changes a volume
group’s metadata. This can be turned OFF with the -A option to any command that alters
the metadata.

A manual vgcfgbackup can still be performed at any time.

NOTE: Prior to HP-UX 10.X, a manual vgcfgbackup WAS required. No automatic


vgcfgbackup took place.


LVM Activation

[Slide diagram: two "switches" sit between the kernel and the data — the LVM switch (closed by vgchange) and the file system/swap switch (closed by mount or swapon).]

ACTIVATING VOLUME GROUPS

Accessing an LVM arrangement can be thought of as closing two switches.

First, the LVM switch must be created. Commands such as pvcreate and vgcreate
create the switch between a volume group and its physical volume(s).

Once the switch has been created it must be closed. This is done through the vgchange
command. vgcreate automatically performs a vgchange.

Second, the areas on the physical volume must be defined. This is done through the
lvcreate/lvextend commands. If a volume group needs to be extended to another
physical volume, the vgextend will perform that action (after pvcreate of course).

Now that the logical volume has been created, if a filesystem is desired, newfs will be
performed.

To close the second switch, the mount command is typically used for filesystems or
swapon for swap areas. Raw databases access the logical volume through the database
vendor’s method.


VG Activation

[Slide diagram: at boot, /sbin/bcheckrc calls /sbin/lvmrc, which reads /etc/lvmrc (AUTO_VG_ACTIVATE=1 ... RESYNC="SERIAL").]

VG ACTIVATION AT BOOTUP

A volume group can be activated manually or automatically. The manual method is by


performing the vgchange –a y at the system prompt.

The automatic method is performed at boot time. This involves the init process. When a
system boots, it goes through the poweron selftests, choice of boot path, loading of ISL, and
execution of the HPUX loader which loads a kernel. Once loaded, the kernel invokes the
init process. init reads the /etc/inittab file. One of the processes in
/etc/inittab is /sbin/bcheckrc.

The purpose of /sbin/bcheckrc is to check for file system integrity of all file systems
listed in /etc/fstab. In order to check logical volume file systems, the appropriate
volume group MUST be activated.

/sbin/bcheckrc calls /sbin/lvmrc to perform the Volume group activation and


resynchronization.

• /etc/lvmrc contains variables that control /sbin/lvmrc's behavior.


– AUTO_VG_ACTIVATE
If set to 1, the volume groups are activated automatically. If set to 0,
custom VG activation is allowed.

Normally customization is not necessary though it may be desired in specific applications


(such as MC/ServiceGuard) where a volume group must not be automatically activated.
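For such a setup, the relevant /etc/lvmrc variables might be set as below. This is an illustrative fragment only; the real /etc/lvmrc contains additional logic around these variables:

```shell
# /etc/lvmrc fragment (illustrative): disable automatic VG activation
# so cluster software such as MC/ServiceGuard can activate volume
# groups itself.
AUTO_VG_ACTIVATE=0
RESYNC="SERIAL"
```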


Unsupported Tools

• LVMcollect.1X
• capture
• sysinfo
• lvmXX

[Slide diagram: LVMcollect collects a volume group’s information and writes it to a file or printer.]

LVMCOLLECT

WARNING The UNSUPPORTED tools can cause system performance degradation. They
could, in theory, cause data corruption since the lab has not thoroughly tested
them. USE THEM WISELY and CAUTIOUSLY!!!!

The LVMcollect program is an UNSUPPORTED shell script which collects information
about a system. Its original intent was to collect all the data describing how a system’s LVM
configuration was set up. It does, however, include additional system information.

It is available via ftp through a variety of sources on the HP NET.

CAUTION There is a 9.X and 10.X version available. Make sure the appropriate version
is obtained.


The following is a list of available options:

Run only a part of LVMcollect:

  LVMcollect -p[1|2|3|4|5|6|7|8|9]
  LVMcollect -p4 <volume_group_name>

where:
  -p1 System Configuration
  -p2 Volume Groups
  -p3 Physical Volume Groups
  -p4 Logical Volumes
  -p5 Physical Volumes
  -p6 File Systems and Swap Space
  -p7 Root / Primary Swap / Dumps / Kernel Configuration
  -p8 LVM Device Files
  -p9 Other

Run LVMcollect to get a detailed listing (verbose mode):

  LVMcollect -v

• There are other programs available such as capture and sysinfo which provide
similar information.
• The lvm10 and lvm11 programs display the LVM metadata on a physical volume.

NOTE All of these programs are UNSUPPORTED by the HPUX Labs.

SUCCESS WITH LOGICAL VOLUME MANAGER
♦ Module 5: Lab

Module 5

Lab


SETUP FOR DAY 1 LABS

Day 1 labs require an HP9000 computer with HP-UX 10.20 or HP-UX 11.0 installed. A
minimum of two (2) volume groups is needed, vg00 and a second (e.g. vg01). vg00 can have
the standard logical volumes, while vg01 should have at least one (1) logical volume set up
with a filesystem (hfs or vxfs).

Record your assigned system information:

System Name: __________________________________


System I.P Address: _____________________________
Console I.P. Address: ____________________________
Root password: _________________________________
TASK 1: OBTAINING AND DISPLAYING INFORMATION

The purpose of this lab is to become familiar with displaying LVM metadata structures through
the display commands, observing the contents of the /etc/lvmtab file, and the special
files associated with LVM.

1. Using ioscan, identify physical volumes available on the system. These may or may
not be currently in use by LVM or another process.

# ioscan -fnC disk

NOTE More detailed information on ioscan can be found in the HP-UX 10.X CE
Handbook.

You should get an output similar to the one below. Keep in mind that this will vary from
machine to machine because of different machine classes (e.g. HP K class, HP V class,
HP N class, etc) and the amount of I/O devices attached to the machine.
# ioscan -fnCdisk
Class I H/W Path Driver S/W State H/W Type Description
=====================================================================
disk 0 8/4.5.0 sdisk CLAIMED DEVICE SEAGATE ST34573WC
/dev/dsk/c0t5d0 /dev/rdsk/c0t5d0
disk 1 8/4.8.0 sdisk CLAIMED DEVICE SEAGATE ST34573WC
/dev/dsk/c0t8d0 /dev/rdsk/c0t8d0
disk 2 8/16/5.2.0 sdisk CLAIMED DEVICE HP DVD-ROM 6x/32x
/dev/dsk/c1t2d0 /dev/rdsk/c1t2d0

This output was from a HP D class machine. The ioscan command is used to scan and
display the I/O on a machine. Using the options fnC disk alters the default output of
ioscan to show full output (-f), include special files (-n), and show only those
devices in the disk class (-C disk). It is important to use the n option to ensure you
get the appropriate CARD INSTANCE NUMBER for use in the special file (e.g. c0,
c1, …). The DEVICE INSTANCE NUMBER is displayed with ioscan in the
INSTANCE (I) column. This is NOT the same as the disk’s special file (e.g. c0t8d0)
CARD INSTANCE NUMBER.

From this output we can see we have two (2) physical volumes to work with. The first
device is at hardware path 8/4.5.0, is identified as a SEAGATE ST34573WC device,
and uses special files of /dev/[r]dsk/c0t5d0.

Breaking down the special file equates to:


File locations: /dev/dsk (BLOCK device) and
/dev/rdsk (Character device)
Card Instance Number: 0
Target Controller Address: 5
LUN Number: 0

The Description field can also be obtained by using the diskinfo command.

# diskinfo /dev/rdsk/c0t5d0
SCSI describe of /dev/rdsk/c0t5d0:
vendor: SEAGATE
product id: ST34573WC
type: direct access
size: 4194157 Kbytes
bytes per sector: 512

The second device is similar to the first but is at a different target address and has a
different CARD INSTANCE NUMBER. The third, the DVD-ROM, is a READ ONLY
device and cannot be used in LVM or other WRITE applications.

From your output, identify the following:

Hardware Address    Special Files    Description    Potential LVM Device? (Y/N)


2. The previous step displays what potential physical volume devices could be claimed by
LVM. However, we do not know which of these devices, if any, are currently claimed.
To display this, we will display the contents of the /etc/lvmtab file. We MUST use
the strings command since the /etc/lvmtab file is a data file, not a text file.

# strings /etc/lvmtab

This should result in an output similar to the following:

# strings /etc/lvmtab
/dev/vg00
/dev/dsk/c0t5d0

This example shows there is one (1) volume group (/dev/vg00) configured on this
system and it is claiming physical volume /dev/dsk/c0t5d0.

From your output, identify the following:

Volume Group Name Physical Volumes

NOTE strings will only display ASCII text information. There is more information
in the /etc/lvmtab file than what is displayed.

3. So far, we have displayed potential physical volumes for LVM use and have identified
any volume groups and physical volumes already configured. Next we will use the
pvdisplay command to display some of the LVM metadata structures on a physical
volume.

Metadata is underlying information which describes how and where data is to be
stored, pointers to the data, etc. There are several types of metadata in HP-UX.


These include LVM structures, file system information (e.g. superblock, data
blocks, inodes, etc.) and disk structures (e.g. sectors, cylinders, etc.).

Users do not typically see metadata information. The data they see or manipulate
is USER DATA. The files in a directory listing on an HP-UX system are USER
DATA files.

From step 2, identify a physical volume configured in the root volume group
(/dev/vg00) and display the LVM metadata structures.

# pvdisplay /dev/dsk/cXtYdZ
where:
X=card instance number
Y=controller address
Z=LUN number

# pvdisplay _____________________

From your output, identify the following:

--- Physical volumes ---


PV Name _______________
VG Name _______________
PV Status _______________
Allocatable _______________
VGDA _______________
Cur LV _______________
PE Size (Mbytes) _______________
Total PE _______________
Free PE _______________
Allocated PE _______________
Stale PE _______________
IO Timeout (Seconds) _______________

The man pages for pvdisplay explains each of these values. Review the
following for their respective meaning:

VG Name: Identifies what volume group this physical volume belongs to


Allocatable: Determines whether any free extents on this physical volume can be
allocated to a logical volume
VGDA: The number of Volume Group Descriptor Areas found on this physical
volume. Normally, there should be two (2) per physical volume.
Cur LV: Tells how many logical volumes are currently configured on this physical
volume
PE Size (Mbytes): Displays the physical extent size, in Megabytes, for this
physical volume and the assigned volume group.


Total PE: The total number of physical extents on this physical volume.
Multiplying this number by the PE Size will show how much USER
DATA space is available on this physical volume.
Free PE: Displays how many physical extents are free on this physical volume
which can be allocated to a new or existing logical volume
Allocated PE: Displays how many physical extents are already claimed by existing
logical volumes. This value plus the FREE PE should equal TOTAL PE.
Stale PE: Shows how many physical extents do not contain the most current
USER DATA. This applies primarily to mirroring.
IO Timeout (Seconds): The timeout value which the LVM driver will attempt to
access this physical volume before switching to an alternate, if available.
Default value for most drivers is 30 seconds.

4. Now we will add the -v (verbose) option to pvdisplay to obtain more information.
Pipe it to the more command to observe the output.

# pvdisplay –v _________________ | more

This output should display additional information. Identify the following:

--- Distribution of physical volume ---


LV Name LE of LV PE for LV
_______________ _____ _____
_______________ _____ _____
_______________ _____ _____
_______________ _____ _____
_______________ _____ _____
_______________ _____ _____
_______________ _____ _____
_______________ _____ _____

--- Physical extents ---


PE Status LV LE
0000 _______ _______________ ____
0001 _______ _______________ ____
0002 _______ _______________ ____
0003 _______ _______________ ____
0004 _______ _______________ ____

NOTE Just observe the Rest!!

The additional information displayed from the

“--- Distribution of physical volume ---”


area shows what the logical volume names are on this physical volume and how
many logical and physical extents are allocated to this logical volume on this
physical volume. Be aware, the logical volume could contain more extents on
other physical volumes.

Under the “--- Physical extents ---” area, the relationship of the
Physical Extent number (PE) to the Logical Extent number (LE) for a logical
volume name is displayed. The STATUS of that extent is also displayed.
Normally the STATUS should be CURRENT. STALE indicates the PE is out of
date and needs to be updated in a mirrored configuration. FREE indicates the
PE has NOT been allocated to a logical volume.
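As a sketch of how this status column can be summarized, the following counts extents per status from sample pvdisplay -v lines. The extent numbers, device names, and status values below are hypothetical, not output from a real system:

```shell
# Sample lines from the "--- Physical extents ---" section of
# pvdisplay -v (hypothetical extents and status values)
pe_sample() {
cat <<'EOF'
0000 current /dev/vg00/lvol1 0000
0001 current /dev/vg00/lvol1 0001
0002 stale   /dev/vg00/lvol1 0002
0003 free
EOF
}

# Count the extents in each status; column 2 is the status field
pe_sample | awk '{ count[$2]++ } END { for (s in count) print s, count[s] }' | sort
```

A volume group in good health would show no stale extents at all.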

5. Next, we will display information about the volume group. The vgdisplay command
is used to provide this information.

# vgdisplay <vgname>

Execute the vgdisplay command for the root volume group (e.g. /dev/vg00).
Identify the following information from your output:

--- Volume groups ---


VG Name ___________
VG Write Access ___________
VG Status ___________
Max LV ___________
Cur LV ___________
Open LV ___________
Max PV ___________
Cur PV ___________
Act PV ___________
Max PE per PV ___________
VGDA ___________
PE Size (Mbytes) ___________
Total PE ___________
Alloc PE ___________
Free PE ___________
Total PVG ___________
Total Spare PVs ___________
Total Spare PVs in use ___________

The man pages for vgdisplay contains detailed information about the different
values. Review the following list for their respective meaning:

VG Write Access: Displays whether the volume group has been activated in
read/write or read-only mode
VG Status: This will ALWAYS be AVAILABLE since you cannot display
an unavailable volume group.

Max LV: The maximum number of logical volumes this volume group can
have. The default is 255. This can be changed at volume group
creation time from a value of 1 to 255.
Cur LV: Displays how many logical volumes have been created for this
volume group but NOT necessarily currently available. For
instance, a logical volume may be unavailable because it spans
more than one physical volume and one of the PVs is missing.
Open LV: Displays how many logical volumes are currently available.
Max PV: Maximum number of physical volumes allowed in this volume
group. The default is sixteen (16). This can be changed at volume
group creation time.
Cur PV: The number of physical volumes claimed by the volume group.
This includes any PVs not available (e.g. failed drives).
Act PV: The number of available PVs claimed by this volume group.
Max PE per PV: The maximum number of Physical Extents any one physical
volume can have in this volume group. This can be changed at
volume group creation time only.
VGDA: The number of Volume Group Descriptor Areas found for this
volume group. Normally, there should be two (2) per physical
volume. If a volume group has 4 physical volumes, this number
should be eight (8).
PE Size (Mbytes): The Physical Extent size for this volume group. This can only
be set at volume group creation time. Valid values are 1, 2, 4,
8, 16, 32, 64, 128, and 256 Mbytes.
Total PE: Displays the total number of physical extents in this volume
group. This includes all PEs from every PV in the volume group.
Alloc PE: Displays all PEs in the volume group which have been assigned
to logical volumes in this volume group.
Free PE: Displays all PEs available to be allocated to logical volumes in
this volume group.
Total PVG: Displays the total number of configured Physical Volume
Groups in this volume group. PVGs can be used in LVM
mirroring.
Total Spare PVs: Displays the total number of spare physical volumes assigned to
this volume group. Spare PVs provide redundancy when a PV
fails in the volume group. This is available on HP-UX 11.0 and
above.
Total Spare PVs Displays the number of spare PVs in this volume group which
in use: are currently being used because of a PV failure. (HP-UX 11.0
and above)
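As an illustration of how these fields relate, the following sketch computes the free space in megabytes (Free PE multiplied by PE Size) from sample vgdisplay fields. The numbers are hypothetical example values, chosen so that Alloc PE plus Free PE equals Total PE:

```shell
# Sample vgdisplay fields (hypothetical values for illustration)
vg_sample() {
cat <<'EOF'
PE Size (Mbytes)            4
Total PE                    1023
Alloc PE                    750
Free PE                     273
EOF
}

# Free space in MB is the number of free physical extents times the
# physical extent size
vg_sample | awk '
/^PE Size/ { pe_size = $4 }
/^Free PE/ { free_pe = $3 }
END       { print free_pe * pe_size " MB free" }'
```

The same arithmetic tells you whether a planned logical volume will fit in the volume group.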

6. Like the pvdisplay command, the vgdisplay command also has a -v (verbose)
option. Include -v with vgdisplay and observe the output.

# vgdisplay –v _______________ | more

Identify the following fields from the output:


--- Logical volumes ---


LV Name _______________
LV Status _______________
LV Size (Mbytes) _______________
Current LE _______________
Allocated PE _______________
Used PV _______________

LV Name _______________
LV Status _______________
LV Size (Mbytes) _______________
Current LE _______________
Allocated PE _______________
Used PV _______________

LV Name _______________
LV Status _______________
LV Size (Mbytes) _______________
Current LE _______________
Allocated PE _______________
Used PV _______________

NOTE You may have more or less LVs than the space allocated.

--- Physical volumes ---


PV Name _______________
PV Status _______________
Total PE _______________
Free PE _______________

NOTE You may have more or less PVs than the space allocated.

The -v option displays logical volume information in addition to the normal
volume group information. This includes the logical volume name, status, size,
and physical volume(s) the logical volume is located on.

Additionally, information about each physical volume in the volume group is
displayed. This includes the amount of available and free PEs and the PV’s
current status.

7. The next display command available is lvdisplay which displays logical volume
information. The lvdisplay command requires the full path of the logical volume’s
BLOCK device special file. The following is an example:

# lvdisplay /dev/vg00/lvol1

Execute this command on one of your logical volumes.

# lvdisplay _____________________

Identify the following values:

--- Logical volumes ---


LV Name /dev/___________
VG Name /dev/___________
LV Permission ________________
LV Status ________________
Mirror copies ________________
Consistency Recovery ________________
Schedule ________________
LV Size (Mbytes) ________________
Current LE ________________
Allocated PE ________________
Stripes ________________
Stripe Size (Kbytes) ________________
Bad block ________________
Allocation ________________
IO Timeout (Seconds) ________________

The man pages for lvdisplay contains detailed information about the
different values. Review the following list for their respective meaning:
LV Name: The full BLOCK device path name of this logical volume
VG Name: The volume group this logical volume is assigned to.
LV Permission: Whether this logical volume is read/write or read-only.
LV Status: Displays whether this LV is available or unavailable and whether
the extents are current or stale if it is mirrored.
Mirror copies: Displays how many mirror copies have been created for this LV.
No mirrors is “0”.
Consistency The recovery method (MWC, NOMWC, NONE) to use if this
Recovery: logical volume is mirrored. Default is MWC.
Schedule: Method to schedule writes to logical volumes. It can be striped,
sequential, or parallel.
LV Size (Mbytes): The size, in Megabytes, of this logical volume.
Current LE: Current number of logical extents for this logical volume.
Allocated PE: The number of allocated physical extents for this logical volume.
Stripes: The number of stripes this logical volume is using. A “0”
indicates the LV is NOT striped.
Stripe Size: The size, in Kbytes, for which striping is configured.
(Kbytes)
Bad block: Will this logical volume use LVM’s bad block relocation if
needed? A boot, root, swap, or dump logical volume MUST
disable BAD BLOCK RELOCATION because of contiguous
access.
Allocation: The policy governing how extents can be allocated for a specific
logical volume.
IO Timeout: This is similar to the physical volume timeout but applies to
(Seconds) a specific logical volume. A mismatch with the PV’s timeout
could cause unexpected results. Normally, changing the LV
timeout is discouraged.

8. As with the other display commands, lvdisplay has the -v, or verbose, option.
Include -v with lvdisplay and observe the output. Remember to use the BLOCK
special file of the logical volume.

# lvdisplay –v __________________________ | more

Identify the following values:

--- Distribution of logical volume ---


PV Name LE on PV PE on PV
_______________ __ __

--- Logical extents ---

LE PV1 PE1 Status 1


____ _______________ ____ ________
____ _______________ ____ ________
____ _______________ ____ ________
____ _______________ ____ ________
____ _______________ ____ ________
____ _______________ ____ ________
____ _______________ ____ ________

NOTE You may have more or less PVs and LEs than the space allocated.

The output from lvdisplay –v additionally includes the physical volume(s)
the logical volume is allocated on and displays the logical-to-physical extent
mapping. The logical extent number reflects the order of extents allocated in
the logical volume, beginning with LE 0000. The physical extent is the actual
extent the logical extent is mapped to on a specific physical volume. This does
NOT have to begin with 0000.

9. Next, the lvlnboot command will be used to display boot information for the root
volume group. lvlnboot has several options; however, the -v option is the only
one which displays information. All other options modify information.


A root volume group is one which contains a boot area (LIF which contains
ISL, HPUX, AUTO, LABEL, …) on the first physical volume in the volume
group. There could be other bootable physical volumes in the root volume group
(e.g. mirrors).

Execute the following command. Notice there are NO special files associated
with the command:

# lvlnboot –v

Observe the following information.

Boot Definitions for Volume Group _________:


Physical Volumes belonging in Root Volume Group:
__________________ (_______) -- Boot Disk
Boot: _____ on: _______________
Root: _____ on: _______________
Swap: _____ on: _______________
Dump: _____ on: _______________, _

The first line shows the name of the volume group. Typically this is
/dev/vg00.

The second and third lines display the Physical Volume associated with the root
volume group. If this were a mirrored root physical volume, there would be
additional lines for each mirror.

The Boot line indicates this is the boot logical volume. The boot logical volume
contains the kernel (/stand/vmunix). It is the /stand filesystem.

NOTE The Boot logical volume is only on HP-UX 10.20 and above releases. This was
to allow the root (/) filesystem the capability to be a VxFS (Journaled)
filesystem. However, the kernel filesystem (/stand) MUST be filesystem type
HFS.

The Root line indicates the logical volume for the root (/) filesystem.

The Swap line indicates the PRIMARY swap logical volume and the Dump line
indicates any defined logical volume dump areas. The number at the end of a
Dump line indicates the order in which the areas will be used in case of a dump.

10. The next piece of information to observe is the special files created in an LVM
environment. Some of these are created by the system administrator while others are
created by the LVM lvcreate command.

When creating a volume group, a unique directory must be created for the
volume group. The directory entry belongs in the /dev directory and will be the
name of the volume group. The volume group can be any unique name; however,
the default naming convention is vgXX, where XX is a unique number starting at
00 (e.g. vg00, vg01, vg02, …).

Once the directory has been created, a control file named group must be created
in the volume group’s directory. The mknod command is used to create the
control file since the control file is a CHARACTER special device file.

Change directories to volume group vg00 and list in long format the volume
group’s control file:

# cd /dev/vg00
# ll group

This should result in a similar output to the following:

NOTE Your date will be different.

crw-r----- 1 root sys 64 0x000000 May 12 1998 group

Enter your output below:

_________________________________________________

From the example output, let’s review each field.

crw-r----- The c indicates this is a character special file
(transfers data in character streams versus block).
The owner has read/write privileges (rw-), the
group has read privilege (r--), and everyone else
has no privileges (---).

1 There is one link to this file (the directory).

root User root owns this file.

sys This file belongs to the group of sys users.

64 The lvm driver major number is 64. For a listing
of driver major numbers, execute the lsdev
command.

0x000000 This is the MINOR number. This is very important
when creating the volume group’s control file.
It MUST be unique among ALL of the volume
groups on the system. All characters are the same
EXCEPT for the first byte after the “0x”. The “0x”
signifies this is a HEXADECIMAL number.
The first byte after the “0x” (in this case “00”) is
the UNIQUE volume group number assigned to
this volume group. This number should be
incremented, in HEX, as new volume groups are
created.
May 12 1998 This is the date this file was created or last
modified.
group This is the name of the file.

11. Now let us observe the logical volume special files and discuss the differences. Execute
the long listing command but do not specify any filename.

# ll

You should see several special files. List them below EXCEPT for the group
file. Since the owner, group, and major numbers should be the same, ONLY
include the mode/permissions, MINOR number, and filename.

Mode/Permissions Minor Number Logical Volume Name

Each logical volume has two (2) special files associated with it, a block and a
character special file. This is standard for any MASS STORAGE DISK device,
although some mass storage disk devices may have more.

The “character” device special file begins with a c as discussed in step 10 on the
group file. Typically the character special file is only used for:

• initially creating the filesystem (newfs)
• performing filesystem maintenance (fsck)
• “raw” filesystem access, such as a database

The “block” device special file is used to attach a filesystem to a
mount_point_directory for blocks of data transfer. This attachment is
accomplished by the mount command and is logically set in physical memory.

The MINOR number has the same format as discussed with the group file. The
difference has to do with the last byte in the minor number. This is the Logical Volume
Entry number assigned to this logical volume in this volume group. The combination of
the first byte and last byte defines the volume group this logical volume is part of and
the logical volume number assigned in this volume group.

For example, MINOR number:

0x030004

indicates this logical volume is owned by the volume group assigned Volume
Group number 03 and is the fourth logical volume (04) created in this volume
group. Remember, the Volume Group number is assigned when the volume
group control file (group) is created by mknod and then the volume group is
created (vgcreate). This number is stored in the HP-UX kernel.
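The byte layout described above can be sketched with a small shell function. The function name is ours for illustration, and the minor number passed in is the example value from the text:

```shell
# Decode an LVM logical volume minor number: the first byte after "0x"
# is the volume group number, the last byte is the LV entry number
decode_minor() {
  hex=${1#0x}                       # strip the leading "0x"
  vg=$(echo "$hex" | cut -c1-2)     # first byte: volume group number
  lv=$(echo "$hex" | cut -c5-6)     # last byte: logical volume entry
  echo "volume group $vg, logical volume entry $lv"
}

decode_minor 0x030004    # the example minor number from the text
```

Running this prints "volume group 03, logical volume entry 04", matching the interpretation given above.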

Summary

There are several pieces of information to be obtained within Logical Volume
Manager. Since LVM uses MASS Storage Disk/LUN devices, it is important to
understand how to obtain system information for these devices. The ioscan
command provides this feature.

To determine what volume groups are currently configured on a system, the
strings /etc/lvmtab command is used to display the volume group
names and their respective physical volumes.

The LVM display commands pvdisplay, vgdisplay, and lvdisplay
display physical volume, volume group, and logical volume information,
respectively.

To obtain information about a boot volume group, the lvlnboot command is
used.

LVM special files look identical to any other special files but the MINOR
number contains the volume group assigned number and the logical volume
entry number. It does NOT contain any specific path information to a physical
volume.


TASK 2: VOLUME GROUP ACTIVATION

The purpose of this task is to automatically and manually activate volume groups.

1. In order to access a logical volume, the logical volume’s volume group MUST be
activated first. This is done by the vgchange command.

Perform the man vgchange command and identify the following options and
their meanings:

# man vgchange

Option Description
-a y
-a n
-q y
-q n

The -a y option will attempt to activate a volume group while the -a n option
attempts to deactivate a volume group. The -q option is used to override
quorum. Quorum is greater than (>) 50% of all physical volumes available in the
volume group.
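The quorum rule can be sketched as a strict-majority test. The function and the PV counts below are hypothetical illustrations, not real system queries:

```shell
# Quorum requires strictly more than half of the volume group's
# physical volumes to be available
has_quorum() {
  avail=$1
  total=$2
  # avail > total/2, done with integer math to avoid rounding
  if [ $((avail * 2)) -gt "$total" ]; then
    echo yes
  else
    echo no
  fi
}

has_quorum 2 4    # exactly half of the PVs available: quorum NOT met
has_quorum 3 4    # more than half available: quorum met
```

Note that exactly half is not enough; this is why -q n exists, to force activation when quorum cannot be met.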

2. When a system is booted, the HP PA RISC boot process is as follows:

NOTE This is an overview, not all processes are defined!!!

1. System is powered on, goes through Power-On Self Tests (POST).

2. Processor Dependent Code (PDC) calls the Initial Program Loader
(IPL) to boot from a device. Typically this is the Primary Boot Path,
particularly if AUTOBOOT is ON.

3. Assuming a Logical Interchange Format (LIF) area is found and it
contains an Initial System Loader (ISL) program, the ISL is loaded
into memory.

4. The ISL will attempt to boot from a boot string. If AUTOBOOT is
ON, the contents of the AUTOBOOT file will be used; otherwise,
someone must input a boot string. The typical boot string is hpux,
which will load and execute the hpux bootstrap loader program from
the LIF area.

5. The hpux bootstrap program will attempt to load a kernel
(/stand/vmunix as default) from the boot disk.

6. After the kernel loads into memory, it executes the init process
which reads the /etc/inittab (initialization table) file.

7. Several items occur from the /etc/inittab file including the
default run state, initialization of I/O, etc. One of these items is to
check the file system integrity of file systems listed in /etc/fstab.

The checking of file system integrity is done from the /sbin/bcheckrc
script called in the /etc/inittab file. In order for logical volume file
systems to be checked by fsck, the volume group(s) MUST be activated first.
/sbin/bcheckrc calls another program, /sbin/lvmrc, to do this.
/sbin/lvmrc determines what volume groups to activate by reading
/etc/lvmrc.

The system default is to ACTIVATE ALL volume groups listed in
/etc/lvmtab because /etc/lvmrc has the variable
AUTO_VG_ACTIVATE=1. Setting it to one makes this TRUE.

Use the grep command to determine the current value of
AUTO_VG_ACTIVATE.

# grep AUTO_VG_ACTIVATE /etc/lvmrc


AUTO_VG_ACTIVATE = _____

WARNING In order to see the bootup messages, you MUST be on the system’s console.

Reboot your machine.

# shutdown –r 0

Try to watch the messages as the system reboots and look for the /sbin/bcheckrc
message. You may have to wait until the system has rebooted and then scroll your
screen up.

Use the vgdisplay command to determine what volume groups are activated.

# vgdisplay | more

Activated volume groups: __________________________________

WARNING Do NOT use the arrow keys unless you are absolutely certain you are using an
HP Term type emulator with an HP-UX HP TERM type.

3. Edit the /etc/lvmrc file. Change the value of AUTO_VG_ACTIVATE to 0. For
those unfamiliar with vi, follow the directions given.

# vi /etc/lvmrc
/AUTO_VG_ACTIVATE      ← search for this pattern
AUTO_VG_ACTIVATE=1     ← using the spacebar, place the cursor
under the 1, type r0 (replace a
single character with a 0)
:wq!                   ← go to command mode, write and
quit the file.
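The same one-character change can also be sketched non-interactively with sed. The example below runs against sample text rather than the real file; on a live system you would save a backup copy of /etc/lvmrc before editing it in place:

```shell
# Flip the auto-activation flag from 1 to 0 (shown on sample input
# rather than the real /etc/lvmrc)
printf 'AUTO_VG_ACTIVATE=1\n' |
  sed 's/^AUTO_VG_ACTIVATE=1$/AUTO_VG_ACTIVATE=0/'
```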

4. Reboot your machine again. Observe the bootup messages as in step 2.

Once the system has booted, execute the vgdisplay command to determine
what volume groups are activated.

# vgdisplay | more

Activated volume groups: __________________________________

Assuming there is more than one volume group created on this system, you
should see only the root volume group (/dev/vg00) activated.

5. Try to display volume group information for your second volume group (e.g.
/dev/vg01).

# vgdisplay <vgname>

This should result in:

vgdisplay: Volume group not activated.
vgdisplay: Cannot display volume group “vg01”.

6. Try to activate the volume group:

# vgchange –a y vg01

If successful, this should result in:

Activated volume group
Volume group “vg01” has been successfully changed.

7. Ensure there is a filesystem in the /etc/fstab file associated with this volume group:

# grep vg01 /etc/fstab

List only the “mount_point_directory(ies)” and logical volume(s) below:

Mount_point_directory Logical volume


8. Mount all filesystems associated with the second volume group:

# mount /<mount_point_directory>
or
# mount –a

9. Try to deactivate the second volume group using the vgchange command.

# vgchange –a n vg01

Assuming step 8 mounted all filesystems associated with this volume group, you
should have received the following error:

vgchange: Couldn’t deactivate volume group “vg01”:
Device busy

10. Unmount the filesystems associated with the second volume group.

# umount /<mount_point_directory>
# umount /<mount_point_directory>

11. Try to deactivate the second volume group using the vgchange command.

# vgchange –a n vg01

If all filesystems were unmounted for this volume group, you should have
received the following message:

Volume group “vg01” has been successfully changed.

Summary

In order for logical volumes to be accessible, the volume group they are
contained in MUST be activated. This is achieved by the vgchange command.

By default, all volume groups listed in /etc/lvmtab are activated on bootup.
This is because of the AUTO_VG_ACTIVATE variable being set to 1 in the
/etc/lvmrc file. There are some instances where this MUST be disabled.
Hewlett Packard’s MC/SERVICEGUARD product is a perfect example.


TASK 3: COLLECTING DATA

The purpose of this lab is to become familiar with collecting LVM information on a system. It is
important for a customer to keep a record of how their system is installed. This information can
be used to rebuild a system in the unlikely event of a catastrophic data loss situation. One way
to accomplish this is to manually execute all of the commands discussed in TASK 1, as well as
some others.

However, it is much easier to put these commands into a program or script. There are several
such scripts floating around Hewlett-Packard. All are officially UNSUPPORTED although they
are used on a daily basis.

One such shell script is LVMcollect. You need to get the appropriate version for the
operating system (e.g. 9.X, 10.X, 11.X).

1. Change directories to /usr/contrib/bin.

# cd /usr/contrib/bin

2. Determine the operating system release for your system.

# uname –r

Release: _____________

3. Anonymous ftp to the appropriate server (e.g. 15.32.65.248) and load the appropriate
LVMcollect script (e.g. LVMcollect.10). Make the script executable.

# ftp 15.32.65.248
login: ftp
Password: ftp
ftp> cd dist/LVMcollect
ftp> get LVMcollect.XX    ← replace the XX
ftp> quit
# chmod 555 LVMcollect.XX

4. Obtain a listing of available options.

# LVMcollect.XX ?

Execute the program obtaining all information and redirect the output to a file.

# LVMcollect.XX -v > /tmp/lvmcollect.tmp

NOTE This WILL take some time!!!!

Briefly view the information.


# more /tmp/lvmcollect.tmp

Summary

There are several ways to collect LVM and system information. One way is to
manually type in the commands; however, several UNSUPPORTED scripts or
programs have been written to make this easier. One example is the
LVMcollect program. Others include capture and sysinfo. There are
others as well. Just keep in mind they are UNSUPPORTED and use them wisely.


TASK 4: DO IT ON YOUR OWN

To this point you have been given all the commands to display information and
activate/deactivate volume groups. In this task, try performing those tasks without the
commands given. You can always refer back to previous sections.

1. Obtain a listing of all disk drives on your system:

Command: ________________________________

2. Determine what volume groups and their respective physical volumes are configured on
this system:

Command: ________________________________

3. Obtain information about the physical volumes attached to the root volume group.
Obtain a short and a long listing.

Command: ________________________________

Command: ________________________________

Compare this to a physical volume in a different volume group.

4. Obtain a long and short listing of ALL volume groups on your system:

Command: ________________________________

Command: ________________________________

Command: ________________________________

Command: ________________________________

5. Obtain long and short listing information for logical volumes 1, 3, and 7 in the root
volume group.

Command: ________________________________

Command: ________________________________

Command: ________________________________

Command: ________________________________

Command: ________________________________

Command: ________________________________


6. Obtain a long and short listing for logical volume 1 in the second volume group:

Command: ________________________________

Command: ________________________________

7. Display the volume group control file for each volume group on the system. Identify the
MAJOR number and volume group number used by each volume group.

Command: ________________________________

Major Number: ________________________________

Volume Group Number: ________________________________

Command: ________________________________

Major Number: ________________________________

Volume Group Number: ________________________________

8. Display the special files for all logical volumes in the root volume group. Determine the
Volume Group and Logical Entry number for all:

Command: ________________________________

Volume group Number: ________________________________

LV Entry Numbers: ________________________________

9. Display the special files for all logical volumes in the second volume group. Determine
the Volume Group and Logical Entry number for all:

Command: ________________________________

Volume group Number: ________________________________

LV Entry Numbers: ________________________________

10. Try to deactivate the second volume group. If it fails, unmount all file systems and try
again:

Command: ________________________________

Command: ________________________________

Command: ________________________________

Command: ________________________________


11. Activate the second volume group:

Command: ________________________________

12. Modify the /etc/lvmrc file to make all volume groups automatically activate at boot
time. Record your change below:

Change: ________________________________

13. Reboot the system and verify your change.


Module 6

Creating a Logical
Volume


Procedural Overview For Logical Volume Creation

• Examine current vg and lvol configuration


• Determine new lvol characteristics
• Determine new file system characteristics
• Create new logical volume
• Create file system in new logical volume
• Mount new file system

This procedure outlines the tasks associated with creating a new logical volume in an existing
volume group. Each of the items in the procedure will be explained in more detail in this
module.


Prior to creating a new logical volume, there are a number of things you must know about the
volume group and its physical volumes. You need to know the answers to the following
questions.

• In which volume group will the logical volume be created?

• Is that volume group activated?

• Does the volume group contain enough free extents to hold the new logical volume?

• If the logical volume will have a strict or contiguous allocation policy, are there enough
free extents on a physical volume to hold the logical volume and do they meet any
contiguous requirement?

• What logical volume names currently exist in the volume group?


Determining new logical volume characteristics

• name
– standard or non-standard
• size
– number of extents or megabytes
• allocation policy
– strict or non-strict
– contiguous or non-contiguous

Prior to creating the new logical volume, there are a number of things that have to be
determined in advance.

You need to know the desired name for the logical volume. Will it have the next available
standard name (lvolX), or will it have non-standard name like mylvol?

The desired size of the new logical volume also needs to be known. Remember that actual
logical volume sizes will be in increments of the physical extent size for the volume group.
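This rounding can be sketched as a ceiling division; the 4 MB extent size below is just an assumed example, and the function name is ours:

```shell
# A logical volume is allocated in whole physical extents, so a
# requested size is rounded UP to the next extent multiple
round_to_extents() {
  req_mb=$1     # requested size in MB
  pe_mb=$2      # physical extent size in MB
  extents=$(( (req_mb + pe_mb - 1) / pe_mb ))   # ceiling division
  echo "$extents extents = $((extents * pe_mb)) MB"
}

round_to_extents 50 4    # a 50 MB request with 4 MB extents
```

So with a 4 MB extent size, a 50 MB request actually consumes 13 extents, or 52 MB.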

When the logical volume is created, you must specify the allocation policy for the logical
volume. Will the policy be strict or non-strict? Will a contiguous or non-contiguous policy
be used?


Determining File System Characteristics

• File system type


– HFS or JFS
• Mount point directory
• Type of mounting
– manual (one time) or automatic (at bootup)

In order to create and mount a file system, you need to know the answers to several
questions.

Will the file system be a HFS or JFS file system?

What mount point directory will be used for the new file system?

Should the file system be automatically mounted at bootup or will it be manually mounted
each time it is needed?


Creating New Logical Volume and New File System

• Creating new logical volume


– lvcreate
• Creating new file system
– newfs
• Mounting new file system
– vi /etc/fstab
– mount

CREATING A NEW LOGICAL VOLUME

Use the lvcreate command with the appropriate options and arguments to create a new
logical volume.

Commonly used options to the lvcreate command are:

-s y|n specifies strict (y) or non-strict (n) allocation
-C y|n specifies contiguous (y) or non-contiguous (n) allocation
-L size specifies size in MB
-l size specifies size in number of extents
-n name specifies non-standard name for new logical volume

The command to create a 50 MB logical volume in the volume group vg01 with the next
available standard name and using “strict” and “non-contiguous” allocation policy would
look like

lvcreate -L 50 /dev/vg01

The command to create a 50 MB logical volume named mylvol in the volume group vg01
using “strict” and “contiguous” allocation policy would look like

lvcreate -L 50 -s y -C y -n mylvol /dev/vg01


CREATING A NEW FILE SYSTEM

Use the newfs command to create a file system in the new logical volume. The commonly
used option to the newfs command is:

-F type specifies file system type where type is hfs for HFS file system
or vxfs for a JFS file system

The command to create a JFS file system in the logical volume named mylvol in vg01
would be

newfs -F vxfs /dev/vg01/rmylvol

MOUNTING THE NEW FILE SYSTEM

Use the mkdir command to make the mount point directory for the new file system.

mkdir /appl

To do a manual, one-time mount of the file system, use the mount command.

The command to do a manual mount of the file system in mylvol in vg01 to the /appl
mount point directory would be

mount /dev/vg01/mylvol /appl

To automatically mount mylvol at bootup, use vi to add an entry in
/etc/fstab. To activate a new /etc/fstab entry without rebooting the
system, use the command

mount -a


Creating a Logical Volume and File System

1) Create a 100 MB logical volume on volume group vg01

# lvcreate -L 100 vg01

(Slide diagram: the new LVM disk (PVRA, VGRA, bad block pool) and the root disk; /dev/vg01
now holds the group file plus the lvol1 block device file and the rlvol1 character device file,
and /etc/lvmtab is updated.)

Create a 100-Megabyte logical volume on the new disk using the default logical volume
name and allocation policy.

# lvcreate -L 100 vg01

• Adds Logical Volume information to PVRA and VGRA

• Creates device files /dev/vg01/lvol1 and /dev/vg01/rlvol1


Creating a Logical Volume and File System

2) Create an HFS file system for logical volume 1


# newfs -F hfs /dev/vg01/rlvol1

(Slide: newfs uses logical volume one's character device file to create a 100 MB file system
in the space allocated by lvcreate, and creates a lost+found directory in the new file
system.)

Create the HFS file system for the new logical volume.

# newfs -F hfs /dev/vg01/rlvol1

• newfs uses the following defaults:

File system size: disk size returned by DIOC_CAPACITY (minus swap/boot)
Block size: 8192
Fragment size: 1024
Tracks per cylinder: calculated as shown below
Sectors per track: calculated as shown below
RPM: 3600

Tracks per cylinder and sectors per track are calculated according to the size of the file
system, as follows:

File System Size Tracks/Cylinder Sectors/Track


====================================================
<= 500 MB 7 22
501 MB - 1 GB 12 28
> 1 GB 16 39
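The table above can be expressed as a small shell function. This is a sketch of the defaults only, not of newfs itself; treat the exact boundary handling as an assumption.

```shell
# Default tracks/cylinder and sectors/track picked by newfs for HFS,
# keyed on file system size in MB (per the table above; sketch only).
hfs_geometry() {
  if [ "$1" -le 500 ]; then
    echo "7 22"
  elif [ "$1" -le 1024 ]; then
    echo "12 28"
  else
    echo "16 39"
  fi
}
hfs_geometry 100
hfs_geometry 2048
```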


Creating a Logical Volume and File System

3) Create a mount point directory for the new file system

# mkdir /vg01lvol1

(Slide diagram: the new LVM disk, now containing a lost+found directory, and the root disk
with the vg01 device files and /etc/lvmtab.)

Make a mount point directory on root (/) for the new logical volume.

# mkdir /vg01lvol1


Creating a Logical Volume and File System

4) Mount the new file system to the mount point directory

# mount /dev/vg01/lvol1 /vg01lvol1

(Slide diagram: the file system on the new LVM disk is now attached at the /vg01lvol1
mount point directory on the root disk.)

The command shown on the slide does a one-time mount operation.

Add a new line in /etc/fstab for the new logical volume if the file system is to be
automatically mounted at bootup.

The entry should look similar to the one below.

/dev/vg01/lvol1 /vg01lvol1 hfs defaults 0 1
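The entry carries six whitespace-separated fields: device file, mount point, file system type, mount options, backup frequency, and fsck pass number. A quick sketch of building and checking such a line (the values are the example entry above, not taken from a live system):

```shell
# Build a sample /etc/fstab line and verify its six fields.
entry="/dev/vg01/lvol1 /vg01lvol1 hfs defaults 0 1"
echo "$entry" | awk '{print "fields:", NF, "type:", $3}'
```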


SUCCESS WITH LOGICAL VOLUME MANAGER
♦ Module 7: Extending a Logical Volume

Module 7

Extending a Logical
Volume


Procedural Overview for


Extending a Logical Volume

• Examine current vg and lvol configuration


• Extend logical volume
• Unmount file system (if necessary)
• Extend file system
• Remount file system (if unmounted earlier)

This procedure outlines the tasks associated with extending a logical volume and the file
system residing in the logical volume. Each of the items will be explained in more detail in
this module.


Examining Current Configuration

• Current lvol characteristics


– lvdisplay -v …
• size and current extent allocation
• allocation policy
• Free extents in volume group
– vgdisplay -v …
• Free extents on a physical volume
– pvdisplay -v ...

Prior to attempting to extend an existing logical volume, there are a number of things you
must know about the logical volume that you are extending and the volume group in which it
resides. You need to obtain answers to the following questions.
• What is the current size of the logical volume?
• What is the logical volume’s allocation policy?
• Does the volume group have enough free extents to accommodate the size extension?
• If the logical volume has a strict or contiguous allocation policy, are there enough free
extents in the appropriate area of the current physical volume?
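The arithmetic behind the free-extent question can be sketched as follows. The numbers are illustrative; on a real system the Free PE and PE Size figures come from vgdisplay -v.

```shell
# Will the volume group's free extents cover the requested growth?
free_pe=100       # "Free PE" from vgdisplay (assumed value)
pe_size_mb=4      # "PE Size (Mbytes)" from vgdisplay (assumed value)
grow_mb=200       # additional space wanted, in MB
need_pe=$(( (grow_mb + pe_size_mb - 1) / pe_size_mb ))
if [ "$free_pe" -ge "$need_pe" ]; then
  echo "ok: need $need_pe of $free_pe free extents"
else
  echo "short: need $need_pe, only $free_pe free"
fi
```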


Extending Logical Volume and


Extending File System

• Extend the logical volume


– lvextend
• Unmount the file system (if necessary)
– umount
• Extend the file system
– extendfs (HFS and JFS) or fsadm (online JFS)
• Remount the file system (if unmounted)
– mount

EXTENDING THE LOGICAL VOLUME

The logical volume can be extended (increased in size) with the lvextend command. The
following command would increase the size of the logical volume named mylvol in volume
group vg01 to 100 MB.

lvextend -L 100 /dev/vg01/mylvol

In the example above, assuming that the logical volume is non-contiguous, the additional
extents allocated to the logical volume can come from any available physical volume in the
volume group. With the lvextend command you can specify the physical volume from
which to allocate the additional extents.

One common technique for creating a logical volume and specifying which physical volume
it resides on is to use the lvcreate command with a size of zero and then use the
lvextend command to specify the actual size and physical volume. This is shown in the
following example.

lvcreate -n mylvol /dev/vg01
lvextend -L 100 /dev/vg01/mylvol /dev/dsk/c2t4d0


UNMOUNTING THE FILE SYSTEM

If the file system type is HFS or JFS (without the optional Online JFS product), it must be
extended with the extendfs command. This command requires that the file system be
unmounted. Use the umount command to unmount the file system.

umount /dev/vg01/mylvol

Customers that have a JFS file system with the optional Online JFS product can use the
fsadm command to extend the file system without unmounting it.

EXTENDING THE FILE SYSTEM

Use the command extendfs to extend an unmounted file system. The command shown
below would extend the file system in the newly extended logical volume mylvol in vg01.

extendfs /dev/vg01/rmylvol

A customer with a JFS file system and the optional Online JFS product can instead extend
the file system with the fsadm command without first unmounting it.
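The choice of tool can be summarized in a small function. This is a sketch of the rules above, assuming fstyp reports JFS as vxfs; it does not call the real commands.

```shell
# Which command grows the file system, per the rules above?
grow_tool() {
  fstype=$1       # hfs or vxfs
  onlinejfs=$2    # yes if the optional Online JFS product is installed
  if [ "$fstype" = "vxfs" ] && [ "$onlinejfs" = "yes" ]; then
    echo "fsadm (online, no unmount)"
  else
    echo "extendfs (unmount first)"
  fi
}
grow_tool hfs no
grow_tool vxfs yes
```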

REMOUNTING THE FILE SYSTEM

To remount the file system after it has been extended, use the mount command. If the file
system has an entry in /etc/fstab, it can be mounted by typing the command below.

mount -a

If not, the entire mount command must be typed as shown in the following example.

mount /dev/vg01/mylvol /appl


LVEXTENDING

lvextend -L new_total /dev/vgXX/lvolX

(Slide: lvextend increases the space allocated to a logical volume.)

INCREASING THE SIZE OF A LOGICAL VOLUME

You can increase the size of a logical volume by using the lvextend(1M) command. You
can also specify that you want the increased disk space (either in terms of extents or
megabytes) allocated to a specific disk (physical volume), or you can let LVM determine
where to allocate it.

Example:

Suppose you want to increase a logical volume, /dev/vg01/lvol4, which is currently
100 megabytes, to 200 megabytes. (If it contains a file system, make sure the file system is
unmounted. An example of extending a logical volume and extending a file system in it
follows.)

# lvextend -L 200 /dev/vg01/lvol4

Note that you specify the new size rather than the amount of increase.

When LVM implements the new size, it will use whole extent sizes.

For example, if the extent size was four megabytes and you specified -L 198, LVM will
still extend the logical volume to 200 megabytes.

You can also increase the size of a logical volume by increasing the number of its logical
extents.

Using the previous example, where we wanted to extend our logical volume to 200 megabytes,
we would now enter the number of logical extents required:

# lvextend -l 50 /dev/vg01/lvol4

Note that when specifying extents, you use the -l option (lowercase L) rather than -L,
which is for specifying size in megabytes.

EXTENDING A LOGICAL VOLUME TO A SPECIFIC DISK.

Suppose you have several disks in a volume group and two of them are identical models.
You want to extend a 275 megabyte logical volume that resides on one of the disks to 400
megabytes. Also, you want to make sure the increase is allocated to the other identical disk.

To extend a logical volume (e.g., /dev/vg01/lvol4) to a specific disk (e.g.,
/dev/dsk/c0t3d0), explicitly specify the disk's block device file.

# lvextend -L 400 /dev/vg01/lvol4 /dev/dsk/c0t3d0

EXTENDING THE FILE SYSTEM

Use the extendfs(1M) command to increase the file system capacity in proportion to the
increase in the logical volume. extendfs reads the current superblock to find out the
current characteristics of the filesystem. It then uses this information to create the additional
cylinder groups required to fill the logical volume. Once this has completed it updates the
superblock with the new information. Note that extendfs(1M) requires the use of the
character device file.

# extendfs /dev/vg01/rlvol4

The results of file system extension are reported.

Mount the file system again; specify the mount point, /projects, for example.

# mount /dev/vg01/lvol4 /projects

Run bdf and note the increased capacity.

NOTE The file system must be unmounted before extendfs is run.

SUCCESS WITH LOGICAL VOLUME MANAGER
♦ Module 8: Reducing a Logical Volume

Module 8

Reducing a Logical
Volume


Procedural Overview for


Reducing a Logical Volume

• Backup data in logical volume


• Examine current vg and lvol configuration
• Reduce file system size (if possible)
• Reduce logical volume size
• Recreate file system (if not reduced earlier)
• Restore data (if file system was recreated)

In this module, the tasks shown here for reducing the size of a logical volume will be covered
in much more detail.


Issues with Reducing Logical


Volumes

• Consider impact on file system and the data


– Can file system be reduced (without data loss)?
– Will file system have to be recreated?
– Will new file system size accommodate all
data?
• Backup data first!

Reducing the size of a logical volume can have destructive consequences for the data residing
in the logical volume. You must consider the consequences to the file system and data prior
to reducing the logical volume size.

Make sure that the data is backed up before you do anything else!

If the file system is a JFS file system and the customer has the optional Online JFS product,
the file system size can be reduced online with the fsadm command without losing data. It
is important to understand that the reduced file system size must still be large enough to hold
the current data.

If the customer doesn’t have the optional Online JFS product or the file system type is HFS,
the file system cannot be reduced and must be recreated. This of course means that all
data will be lost and will have to be restored once the file system is recreated. Once again, it
is important to understand that the recreated file system size must still be large enough to
hold the data that is to be restored.
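The size constraint above amounts to a simple check. The numbers below are illustrative; on a real system the used figure would come from bdf.

```shell
# Does the proposed smaller file system still hold the current data?
used_mb=30        # space currently used in MB (assumed; see bdf)
target_mb=50      # proposed reduced size in MB
if [ "$target_mb" -ge "$used_mb" ]; then
  echo "target holds current data"
else
  echo "too small: data would not fit"
fi
```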


Examining Current Configuration

• Current lvol size


– lvdisplay -v …
• Current file system type and size
– fstyp …
– bdf
• Current file system free space
– bdf

Prior to attempting to reduce the size of a logical volume containing a file system, there are a
number of questions that need to be answered.

• How large is the current logical volume and file system?


• How full is the current file system?
• Is the file system HFS or JFS?
• If the file system is JFS, does the customer have the optional Online JFS product?

Answers to these questions will help determine the exact procedure needed to reduce the size
of the file system and the size of the logical volume.

The lvdisplay command can be used to determine the current logical volume size. The
command to look at mylvol in vg01 would be

lvdisplay -v /dev/vg01/mylvol

You can use the fstyp command to determine the file system type in the logical volume.
JFS file systems are reported as vxfs by the command. The command to look at the logical
volume mylvol in vg01 would be

fstyp /dev/vg01/mylvol


You can determine the file system size and amount of used or free space with the bdf
command. Typical use of the bdf command would be

bdf

You can determine if the customer has the optional Online JFS product installed by using the
swlist command. Typical use of the swlist command would be

swlist


Reducing File System and


Reducing Logical Volume (1 of 2)

• Backup data
– fbackup
• Unmount the file system (if necessary)
– umount
• Reduce file system size (if possible)
– fsadm

Before doing anything, make sure the file system is backed up using a backup utility. In the
HP-UX world, one commonly used utility for backups is fbackup.

If the file system type is HFS or if it is JFS and swlist doesn’t indicate that the customer
has the optional Online JFS product, then the file system must be unmounted with the
umount command. The command to unmount the file system in mylvol in vg01 from
the mount point directory /myapp would be

umount /myapp

If the file system type is JFS (reported as vxfs by fstyp) and the swlist command
shows that the customer has the optional Online JFS product, the file system can be reduced
online without first unmounting the file system. The fsadm command would be used to
reduce the file system size in this case.


Reducing File System and


Reducing Logical Volume (2 of 2)

• Reduce logical volume size


– lvreduce
• Recreate file system (if not reduced earlier)
– newfs
• Remount file system (if unmounted earlier)
– mount
• Restore files (if file system was recreated)
– frecover

Once the file system is reduced or identified as having to be recreated, you can reduce the
size of the logical volume with the lvreduce command. The command to reduce the size
of mylvol in vg01 to 50 MB would be

lvreduce -L 50 /dev/vg01/mylvol

If the file system size was not reduced with the fsadm command prior to reducing the
logical volume size with the lvreduce command, it is now necessary to recreate the file
system with the newfs command. The command to recreate a JFS file system in mylvol
in vg01 would be

newfs -F vxfs /dev/vg01/rmylvol

Once both the file system size and logical volume size are reduced, the file system can be
mounted with the mount command. Typically with an automatic mounted file system, you
can use the command

mount -a

If the file system had to be recreated, it is now empty. Use the appropriate utility, such as
frecover, to restore the data from the backup.


LVREDUCEing

(Slide: lvreduce decreases the number of physical extents allocated to a logical volume.)

REDUCING THE SIZE OF A LOGICAL VOLUME

You can reduce the size of a logical volume by using the lvreduce(1M) command.

Reducing the size of a logical volume is appropriate when you want to use the logical volume
for another purpose that requires less space.

CAUTION:

• When you reduce the size of a logical volume, you might lose data as LVM deallocates
disk space.

• Reduce the size of a logical volume only if you have safely backed up the contents to tape
or to another logical volume, or if you no longer need its current contents.

• You cannot reduce the size of a file system unless you have the optional Online JFS
product; without it, recreate the file system with newfs.


Example:

Reducing the Size of a Logical Volume

Suppose you have a logical volume of 80 megabytes. You no longer need the current data in
the logical volume, but you would like to use it for another purpose that requires only 40
megabytes.

# lvreduce -L 40 /dev/vg03/lvol4

When you issue the command, LVM asks for confirmation. If you answer y to proceed, you
get a confirming message:

Logical volume “/dev/vg03/lvol4” has been successfully reduced.

SUCCESS WITH LOGICAL VOLUME MANAGER
♦ Module 9: Removing a Logical Volume

Module 9

Removing a Logical
Volume


Removing a Logical Volume

• Relocate data (if necessary)


• Unmount file system
• Remove any entry from /etc/fstab (if
necessary)
• Remove logical volume
– lvremove

Prior to removing a logical volume, you need to know what, if anything, needs to be done
with the data. It might be moved to another location with the mv or cp commands, or, if the
data is not needed any more, it might be okay to just have it disappear with the removed
logical volume.

Use the umount command to unmount the file system in the logical volume. To unmount
the file system in mylvol in vg01, which is mounted on the directory /myapp, you would
type

umount /myapp

If there is an entry for the file system in /etc/fstab, use vi to remove the entry.

vi /etc/fstab

The logical volume can now be safely removed with the lvremove command. The
command to remove the logical volume named mylvol in vg01 would be

lvremove /dev/vg01/mylvol


LVREMOVEing

(Slide: lvremove removes references to the logical volume from the system.)

REMOVING LOGICAL VOLUMES

You can remove a logical volume with the lvremove(1M) command. For example, to
remove an empty logical volume,

# lvremove /dev/vg01/lvol2

If the logical volume contains data, LVM prompts you for confirmation. You can use the -f
flag to remove the logical volume without a confirmation request.

SUCCESS WITH LOGICAL VOLUME MANAGER
♦ Module 10: Creating a New VG with LVs

Module 10

Creating a New VG with


LVs


Procedural Overview for Creating


a New Volume Group with Lvols

• Examine current vg configuration


• Determine new vg characteristics
• Initialize new physical volumes
• Create vg directory and group file
• Create new volume group
• Verify new vg configuration
• Create lvols, file systems, etc.

Each of the tasks shown on this overview slide will be covered in greater detail in the
remainder of the module.


Examining Current Configuration

• Current volume group names


– strings /etc/lvmtab
• Current volume group minor numbers
– ll /dev/vg*
• Currently used physical volumes
– strings /etc/lvmtab

Before creating a new volume group, there are a number of things that need to be known
about the current configuration.

• What volume groups are currently configured?


• What group minor numbers are currently being used?
• What physical volumes are currently being used?

These questions can easily be answered with two commands.

strings /etc/lvmtab
ll /dev/vg*


Determining new Volume Group


Characteristics

• Volume Group name


• Volume Group minor number
• Free physical volumes to include
• Non-default Volume Group characteristics
to use

In order to create a new volume group, you need to have decided on a number of
configurable items.

• What volume group name to use? The volume group name must be unique and often has
the format vgXX where XX represents a numerical sequence. The root volume group
begins the sequence with vg00.
• What group file minor number to use? The minor number must be unique and has the
format 0xXX0000 where XX represents a numerical sequence. The root volume group
begins the sequence with 0x000000.
• What physical volume(s) to include in the volume group? The physical volumes must be
currently unused by LVM volume groups or by any non-LVM based file system or
application.
• What non-default values to use for the volume group? Certain items, like extent sizes,
must be determined at the time that the volume group is created.
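The 0xXX0000 numbering lends itself to a quick sketch for picking the next free group minor number. The list of used numbers is illustrative, not read from a real /dev tree.

```shell
# Pick the next free group minor number in the 0xXX0000 sequence.
# The used list is an assumed example (see ll /dev/vg* for real values).
used="0x000000 0x010000 0x020000"
next=0
for m in $used; do
  hi=$(( m / 65536 ))      # isolate the XX byte of 0xXX0000
  [ "$hi" -ge "$next" ] && next=$(( hi + 1 ))
done
printf '0x%02x0000\n' "$next"
```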


Creating new Volume Group

• Initialize new physical volume(s)


– pvcreate …
• Create volume group directory in /dev
– mkdir
• Create group file for new volume group
– mknod
• Create new volume group
– vgcreate

Initialize the new physical volume(s) to be used in the volume group.

pvcreate /dev/rdsk/cXtYdZ
pvcreate -f /dev/rdsk/cXtYdZ (if the disk already contains LVM structures)

Create a directory under /dev for the new volume group.

mkdir /dev/vgXX

Create a group file for the new volume group.

mknod /dev/vgXX/group c 64 0xXX0000

Create the new volume group.

vgcreate /dev/vgXX /dev/dsk/cXtYdZ

Add additional disks to the volume group

vgextend /dev/vgXX /dev/dsk/cXtYdZ


Verifying new Volume Group


Configuration

• Verifying new volume group configuration


– vgdisplay -v ...
• Verifying that volume group configuration
changes are backed up
– note vgcfgbackup output from each LVM
command

In the world of LVM, once you have created something, it is always a good idea to take a
look at what you have created to confirm that you have produced the expected results.

strings /etc/lvmtab
vgdisplay -v /dev/vgXX

Are all the desired physical volumes present in the volume group?

Do you have the desired extent size?

Last, but not least, note that (by default) all volume group changes result in the volume group
configuration being backed up by vgcfgbackup. It is important to confirm that this
backup is occurring and that the backup remains on the system so that it can be used for
recovery operations.


Creating Lvols, File Systems, etc.


in new Volume Group

• Create logical volumes in new VG


– lvcreate
• Create file system in new lvol
– newfs
• Create mount point directory for new fs
– mkdir
• Mount new file system

Once you have created the volume group, you are ready to create logical volumes, file
systems, mount point directories and mount the file systems. These items were covered in an
earlier module, but an example has been included below.

Create a new logical volume.

lvcreate -L 50 -s y -C n -n mylvol /dev/vgXX

Verify the logical volume characteristics.

lvdisplay -v /dev/vgXX/mylvol | more

Create a file system in the logical volume.

newfs -F vxfs /dev/vgXX/rmylvol

Create a mount point directory for the new file system.

mkdir /myapp

If the new file system is to be automatically mounted at bootup, make an entry in
/etc/fstab.

vi /etc/fstab


Mount the new file system

mount -a

Verify the attributes of the new file system.

bdf
fstyp /dev/vgXX/mylvol


Creating an HP-UX LVM File


System on a new Disk
1) create physical volume for use in a volume group

# pvcreate /dev/rdsk/c2t4d0

(Slide: Physical Volume Reserved Area contents:
* IDs for the volume group and physical volumes
* Physical extent size
* Physical volume size
* Bad block directory
* Size and location of the other disk structures
Transparent software sparing is not supported on HP-IB disks or the root disk; the bad block
pool is only used if disk hardware sparing fails.)

# pvcreate -f /dev/rdsk/cxtydz

WARNING This will destroy any data on the disk

• Creates the Physical Volume Reserved Area (PVRA) and the Bad Block Pool
• The PVRA will now contain:
– Unique ID number for the Physical Volume.
– Pointers to the Bad Block Pool
• Use the -f option to force pvcreate to create a new PV on an already existing PV


Creating an HP-UX LVM File


System on a new Disk
2) create a directory under /dev for the volume group

# mkdir /dev/vg01

(Slide diagram: the vg01 directory is created under /dev on the root file system.)

If necessary, create a new directory under /dev for the new volume group that will reside on
the disk.

# mkdir /dev/vg01

• The directory name usually begins with vg so it can be easily identified, but the name can
vary.


Creating an HP-UX LVM File


System on a new Disk
3) make a group device file

# mknod /dev/vg01/group c 64 0x010000

(Slide diagram: the group device file is created in /dev/vg01 on the root file system.)

If a new volume group is to be created, make a group character device file for the new
volume group.

# mknod /dev/vg01/group c 64 0xZZ0000

Where:
0x indicates that what follows is a hexadecimal number
ZZ is the hexadecimal group number
0000 is always 0000 for the group file

• This file is used by LVM to access the Volume Group structures and is spelled just as you
see it here. Do not remove the group file.

NOTE You cannot use the insf or mkfs commands to create the group file.


Creating an HP-UX LVM File


System on a new Disk
4) create a volume group

# vgcreate /dev/vg01 /dev/dsk/c0t6d0

(Slide: Volume Group Reserved Area contents:
* Volume Group Descriptor Area (VGDA): identifies logical and physical volumes;
physical-to-logical extent mapping
* Volume Group Status Area (VGSA): physical volume status (missing/present);
physical extent status (stale/ok)
* Mirror Consistency Record (MCR): lists disk writes in progress
vgcreate also creates or updates /etc/lvmtab, adding the volume group information to it.)

Create the volume group on the new disk.

# vgcreate /dev/vg01 /dev/dsk/c0t6d0

Additional disks may be specified on the command line.

vgcreate creates the Volume Group Reserved Area (VGRA).

This contains:
• Volume Group Descriptor Area (VGDA)
– Identifies logical and physical volumes
– Logical to Physical Extent mapping

• Volume Group Status Area (VGSA)


– Physical Volume status (missing/present)
– Physical Extent status (stale/current)

SUCCESS WITH LOGICAL VOLUME MANAGER
♦ Module 11: Extending a VG

Module 11

Extending a VG


Procedural Overview for
Extending a Volume Group

• Examine current volume group


configuration
• Identify free physical volume(s)
• Extend volume group to include new
physical volume(s)
• Verify new volume group configuration

This procedure outlines the tasks associated with extending a volume group. Each task will
be covered in more detail later.


Examining Current Configuration

• Current physical volumes in use
– strings /etc/lvmtab
• Current physical volumes on system
– ioscan -fnC disk

In order to know which physical volumes are available to be added to the volume group, you must examine the system’s current configuration.

Use the strings command to find out what physical volumes are currently in use.

strings /etc/lvmtab

In order to determine what physical volumes are present on the system, use the ioscan
command.

ioscan –fnC disk

Output from these two commands will allow you to determine what free physical volumes are
present that can be added to the volume group.
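The comparison between the two outputs reduces to set subtraction. The disk lists below are hypothetical stand-ins for what you would distill from strings /etc/lvmtab (disks in use) and ioscan -fnC disk (all disks on the system):

```shell
# Sketch: list physical volumes not yet claimed by any volume group.
# Sample data stands in for the real command output.
in_use="/dev/dsk/c0t5d0 /dev/dsk/c0t6d0"
all_disks="/dev/dsk/c0t4d0 /dev/dsk/c0t5d0 /dev/dsk/c0t6d0"

for d in $all_disks; do
  case " $in_use " in
    *" $d "*) ;;               # already part of a volume group
    *) echo "free: $d" ;;      # candidate for vgextend
  esac
done
```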


Extending Volume Group

• Initialize new physical volumes
– pvcreate ...
• Extend volume group to new physical volumes
– vgextend ...
• Verify new volume group configuration
– vgdisplay -v ...

Once you have determined which drives can be added to the volume group, use the
pvcreate command to initialize the physical volumes.

pvcreate /dev/rdsk/cXtYdZ
pvcreate -f /dev/rdsk/cXtYdZ (if the drive already has LVM structures from previous use)

Now you can extend the volume group to the new physical volume(s) with the vgextend
command.

vgextend /dev/vgXX /dev/dsk/cXtYdZ

As always, you should use the appropriate command(s) to confirm that things look as
expected.

vgdisplay –v /dev/vgXX
strings /etc/lvmtab


VGEXTENDING

Slide: volume group vg01 being extended with /dev/dsk/c0t4d0 and /dev/dsk/c0t5d0.

Suppose you have an existing volume group, /dev/vg01, and you want to add two disks to it. You have installed the disks, instances 4 and 5.

VGEXTEND EXAMPLE

1. Make the disks LVM disks (physical volumes); remember to use the disk’s character
device file.

# pvcreate /dev/rdsk/c0t4d0
# pvcreate /dev/rdsk/c0t5d0

2. Add the LVM disks to the volume group /dev/vg01 (remember to use the disk’s block
device file).

# vgextend /dev/vg01 /dev/dsk/c0t4d0 /dev/dsk/c0t5d0

3. To verify that the volume group now contains the disks, use the command:

# vgdisplay -v /dev/vg01

You can see the disks under the heading:

--- Physical Volumes ---.
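The three steps above can be traced with a dry-run sketch in which pvcreate and vgextend are stubbed out to echo their invocations; the device names are illustrative:

```shell
# Dry-run sketch of extending a volume group with one or more disks.
pvcreate() { echo "pvcreate $*"; }
vgextend() { echo "vgextend $*"; }

extend_vg() {
  vg=$1; shift
  blocks=""
  for disk in "$@"; do
    pvcreate "/dev/rdsk/$disk"        # pvcreate takes the character device
    blocks="$blocks /dev/dsk/$disk"   # vgextend takes the block device
  done
  vgextend "$vg"$blocks
}

extend_vg /dev/vg01 c0t4d0 c0t5d0
```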



Module 12

Extending an LV to
another PV


Procedural Overview for lvol Extent Allocation Control

• Examine current vg configuration and lvol configuration (including extent allocation)
• Extend lvol to desired physical volume(s)
• Verify lvol extension results
• Unmount file system (if necessary)
• Extend file system
• Remount file system (if necessary)

Once again, the procedure shown on this slide is discussed in more detail later in this module.


Examining Current Configuration

• Current lvols in volume group
– vgdisplay -v ...
• Lvol allocation policy and current extent allocation
– lvdisplay -v …
• Free extents on physical volumes
– pvdisplay -v ...

As we learned earlier, prior to attempting to extend an existing logical volume, there are a
number of things you must know about the logical volume that you are extending and the
volume group in which it resides. You need to obtain answers to the following questions.

• What is the current size of the logical volume?
• What is the logical volume’s allocation policy?
• Does the volume group have enough free extents to accommodate the size extension?
• If the logical volume has a strict or contiguous allocation policy, are there enough free extents in the appropriate area of the physical volumes having the free space?
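The free-extent questions above come down to simple arithmetic: LVM rounds a size request up to a whole number of physical extents. A small sketch, assuming the common 4 MB extent size (the real value comes from vgdisplay):

```shell
# How many physical extents does a size request consume?
# LVM rounds the requested size up to a whole number of extents.
pe_size_mb=4        # assumed PE size; check vgdisplay for the real value

extents_needed() {
  size_mb=$1
  echo $(( (size_mb + pe_size_mb - 1) / pe_size_mb ))
}

extents_needed 100   # 100 MB -> 25 extents
extents_needed 10    # 10 MB  -> 3 extents (rounded up from 2.5)
```

Compare the result against the Free PE count reported by vgdisplay (and, for contiguous lvols, against pvdisplay -v output) before running lvextend.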


Controlling lvol Extent Allocation

• Create lvol with zero size
– lvcreate -L 0 …
• Extend lvol to desired size and location
– lvextend -L 100 /dev/vgXX/lvolX \
/dev/dsk/cXtYd0

If you examine the options for the lvcreate command, you will find that there is no way
to control the physical volume from which extents are allocated for a new logical volume. If
you wish to control extent allocation for a logical volume, it must be done with the
lvextend command.

The standard technique is to use the lvcreate command to create a logical volume with
“zero” size and then use the lvextend command to achieve the desired size and choose the
physical volume from which extents are allocated. The command sequence shown below
creates a new 100 MB logical volume named mylvol in vg01 and has those extents
allocated from a physical volume at c2t4d0.

lvcreate –L 0 –n mylvol /dev/vg01


lvextend –L 100 /dev/vg01/mylvol /dev/dsk/c2t4d0
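The zero-size-create / extend technique can be sketched as a dry-run, with lvcreate and lvextend stubbed out so the command sequence can be traced anywhere; the volume group, lvol name, and disk are illustrative:

```shell
# Dry-run sketch: create an empty lvol, then extend it onto a chosen PV.
lvcreate() { echo "lvcreate $*"; }
lvextend() { echo "lvextend $*"; }

make_lvol_on_disk() {
  vg=$1; name=$2; size=$3; disk=$4
  lvcreate -L 0 -n "$name" "/dev/$vg"                    # empty lvol, name chosen
  lvextend -L "$size" "/dev/$vg/$name" "/dev/dsk/$disk"  # extents from this PV only
}

make_lvol_on_disk vg01 mylvol 100 c2t4d0
```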


Extending lvol to Increase Size and Control Placement of Extents

• Extend lvol to increase size and specify location of additional extents
– lvextend -L 200 /dev/vgXX/lvolX \
/dev/dsk/cXtYd0

The lvextend command can be used to increase the size of an existing logical volume and
control from where the additional extents are allocated. Remember that the allocation policy
for the logical volume ultimately determines what free extents can and cannot be used.

In the example below, we are extending the logical volume created earlier to 200 MB and we
are using the additional extents from a physical volume at c2t5d0.

lvextend –L 200 /dev/vg01/mylvol /dev/dsk/c2t5d0


Verifying lvol Extension Results

• To look at lvol size extension and allocation of extents
– lvdisplay -v ...

As always, use the appropriate display command to confirm the results of your LVM activity.
Here, you want to use the lvdisplay command with the –v option to look at the extent
mapping for the logical volume.

lvdisplay –v /dev/vg01/mylvol


Increasing File System to Utilize Increased lvol Size

• Unmount file system (if necessary)
– umount …
• Extend file system
– fsadm … (JFS file system with Online JFS product)
– extendfs … (all others)
• Remount file system (if unmounted earlier)
– mount ...

As mentioned earlier, once you have increased the logical volume size, you need to make the increased space available for users to store data. The steps below outline extending an HFS file system in the newly enlarged logical volume.

Unmount the HFS file system with the umount command.

umount /mount_point_directory

Extend the file system with the extendfs command.

extendfs /dev/vgXX/rlvolX

Remount the newly extended file system.

mount /mount_point_directory (assumes an entry in /etc/fstab)

Remember, the procedure is different if the file system type is vxfs and if the customer has
the optional Online JFS product. The two commands shown below will tell you if this is the
case.

fstyp /dev/vgXX/lvolX
swlist
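The decision between fsadm and extendfs can be sketched as a dry-run shell function. The commands are stubbed to echo their invocations, and the volume group path and the fsadm size argument are illustrative:

```shell
# Dry-run sketch of growing a file system after lvextend.
# Online JFS grows a mounted vxfs in place; everything else is
# umount / extendfs / mount.
umount()   { echo "umount $*"; }
extendfs() { echo "extendfs $*"; }
mount()    { echo "mount $*"; }
fsadm()    { echo "fsadm $*"; }

grow_fs() {
  vg=$1; lvol=$2; mnt=$3; fstype=$4; online_jfs=$5
  if [ "$fstype" = vxfs ] && [ "$online_jfs" = yes ]; then
    fsadm -F vxfs -b 102400 "$mnt"   # -b new size; 102400 is a placeholder
  else
    umount "$mnt"
    extendfs "/dev/$vg/r$lvol"       # extendfs needs the character device
    mount "$mnt"
  fi
}

grow_fs vg01 lvol4 /projects hfs no
```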


EXTENDING A LOGICAL VOLUME TO A SPECIFIC DISK.

Suppose you have several disks in a volume group and two of them are identical models.
You want to extend a 275 megabyte logical volume that resides on one of the disks to 400
megabytes. Also, you want to make sure the increase is allocated to the other identical disk.

To extend a logical volume (e.g., /dev/vg01/lvol4) to a specific disk (e.g., /dev/dsk/c0t3d0), explicitly specify the disk’s block device file.

# lvextend -L 400 /dev/vg01/lvol4 /dev/dsk/c0t3d0

EXTENDING THE FILE SYSTEM

Use the extendfs(1M) command to increase the file system capacity in proportion to the increase in the logical volume. extendfs reads the current superblock to find out the current characteristics of the filesystem. It then uses this information to create the additional cylinder groups required to fill the logical volume. Once this has completed, it updates the superblock with the new information. Note that extendfs(1M) requires the use of the character device file.

# extendfs /dev/vg01/rlvol4

The results of file system extension are reported.

Mount the file system again; specify the mount point, /projects, for example.

# mount /dev/vg01/lvol4 /projects

Run bdf and note the increased capacity.

NOTE The filesystem must be unmounted before extendfs is run.


Module 13

Reducing a VG


Procedural Overview for Reducing a Volume Group

• Examine current vg and lvol configuration
• Identify physical volume(s) to be removed
• Manipulate lvols to remove extents from those physical volume(s)
• Remove physical volume(s)

The procedure shown here for reducing physical volumes from a volume group will be
covered in this module.


Examining Current Config and Selecting PVs to be Removed

• Current physical volumes in VG
– strings /etc/lvmtab
• Extents allocated to each physical volume
– pvdisplay -v …
• Current lvol configuration
– lvdisplay -v ...

Prior to reducing physical volumes from a volume group, you need to have detailed
information about the volume group.

• What physical volumes are in the volume group?
• What logical volumes are in the volume group?
• What is the logical volume extent allocation for each physical volume?

Once you have this information, you should be able to identify which physical volume(s) you
want to remove from the volume group and identify the logical volume(s) (if any) that will be
impacted by the removal.


Manipulating lvols to Free up Extents on Physical Volumes

• Can lvol(s) be removed?
– lvremove …
• Can lvol(s) be reduced?
– lvreduce …
• Can lvol(s) be moved?
– pvmove ...

If any logical volume(s) has extents allocated from a physical volume that is to be reduced
from the volume group, you must take action to “unallocate” those extents.

Can the logical volume(s) be removed? If the data is no longer important and can be
removed, the file system can be unmounted and the logical volume can be removed. The
following command sequence example demonstrates the procedure.

umount /mountpoint (use appropriate mountpoint directory)
vi /etc/fstab (remove any reference to the file system to be removed)
lvremove /dev/vgXX/lvolX (remove the appropriate logical volume)

Can the logical volume(s) be reduced? If the data does not fully utilize the file system and
logical volume space, you might be able to reduce the logical volume and free the extents on
the physical volume(s) to be removed. The following command sequence example
demonstrates the procedure for a HFS file system.

fbackup … (back up data)
umount /mountpoint (use appropriate mountpoint directory)
lvreduce -L XXX /dev/vgXX/lvolX (reduce the lvol to the appropriate size)
newfs –F hfs /dev/vgXX/rlvolX (recreate the file system in the smaller lvol)
mount /mountpoint (remount file system)
frecover ... (restore data from backup)

Can the logical volume be moved? In some cases, the logical volume can be moved to a
different physical volume with the pvmove command. The partial syntax for the pvmove
command is:

pvmove [-n lv_path] source_pv_path [dest_pv_path ... | dest_pvg_name ...]

An example of using pvmove to move all of the remaining extents from one physical volume (c2t4d0) to another physical volume (c2t5d0), assuming that they will all fit, would be:

pvmove /dev/dsk/c2t4d0 /dev/dsk/c2t5d0
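Putting the module together, freeing a disk and reducing it out of the volume group can be sketched as a dry-run with stubbed commands; the device names are illustrative:

```shell
# Dry-run sketch: migrate a PV's extents, verify it is empty, then
# reduce it out of the volume group.
pvmove()    { echo "pvmove $*"; }
pvdisplay() { echo "pvdisplay $*"; }
vgreduce()  { echo "vgreduce $*"; }

evict_disk() {
  vg=$1; src=$2; dest=$3
  pvmove "/dev/dsk/$src" "/dev/dsk/$dest"  # migrate remaining extents
  pvdisplay -v "/dev/dsk/$src"             # verify: all extents now free
  vgreduce "$vg" "/dev/dsk/$src"           # drop the empty PV from the VG
}

evict_disk /dev/vg01 c2t4d0 c2t5d0
```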


Removing Physical Volumes

• Verify all extents are free
– pvdisplay -v …
• Remove PV(s) from volume group
– vgreduce …

Once you believe all extents on the physical volume(s) to be removed have been freed, use the pvdisplay command to verify it.

pvdisplay –v /dev/dsk/cXtYdZ (use appropriate device file)

Now you can remove the physical volume from the volume group with the vgreduce
command.

vgreduce /dev/vgXX /dev/dsk/cXtYdZ (use appropriate vg and device file)


VGREDUCE

You can also remove disks from a volume group using the vgreduce(1M) command, but
you must first move or remove any logical volumes on the physical volume.

CAUTION Whenever you add or remove disks from the root volume group (the volume group that contains /), you must run the lvlnboot(1M) command to update the boot data stored on the volume group’s boot disks if it is not set to run automatically.



Module 14

Removing a VG


Procedural Overview for Removing a Volume Group

• Relocate data (if necessary)
• Unmount file systems
• Remove /etc/fstab entries
• Remove all lvols in volume group
• Reduce VG to one physical volume
• Remove volume group
• Verify VG removal

These tasks associated with removing a volume group will be covered in this module.


Dealing with Existing Data

• Relocate data (if necessary)
• Unmount file systems
– umount
• Remove /etc/fstab entries (if necessary)
– vi /etc/fstab
• Remove all lvols in volume group
– lvremove

Prior to removing a volume group, you must deal with existing data first. If the data is to
remain available on the system, it must be moved to locations outside of the volume group
before proceeding.

When you are ready to remove the volume group, start by using the umount command to unmount the file system(s) in logical volumes in the volume group. Next, use the vi command to remove any entries for those file systems in /etc/fstab.

Now you can use the lvremove command to remove all of the logical volumes from the
volume group. Once you think that all logical volumes are gone, use the vgdisplay
command to confirm that all logical volumes have been removed.


Reducing and Removing the Volume Group

• Reduce the volume group to one physical volume
– vgreduce …
• Remove the volume group
– vgremove …
• Verify volume group removal
– strings /etc/lvmtab

You can now use the vgreduce command to reduce physical drives from the volume group.
vgreduce the volume group until you have only one physical volume left. Once again, you
can use either the vgdisplay command or the strings command to confirm that the
volume group only contains one physical volume.

The vgremove command can then be used to remove the one-disk volume group. Once you have removed the volume group, you can verify with the strings command that it is gone from /etc/lvmtab.
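The whole removal flow can be sketched as a dry-run with stubbed commands; the group, lvol, and disk names are illustrative, and the data is assumed to have been relocated already:

```shell
# Dry-run sketch of removing a volume group. Stubs echo each step.
umount()   { echo "umount $*"; }
lvremove() { echo "lvremove $*"; }
vgreduce() { echo "vgreduce $*"; }
vgremove() { echo "vgremove $*"; }

remove_vg() {
  vg=$1; lvols=$2; extra_pvs=$3       # extra_pvs: every PV except the last one
  for lv in $lvols; do
    umount "/dev/$vg/$lv"             # data must already be relocated
    lvremove -f "/dev/$vg/$lv"        # -f skips the confirmation prompt
  done
  for pv in $extra_pvs; do
    vgreduce "/dev/$vg" "/dev/dsk/$pv"  # reduce down to a single PV
  done
  vgremove "/dev/$vg"                 # works only once one PV remains
}

remove_vg vg01 "lvol1 lvol2" "c0t5d0"
```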


VGREMOVE

If the volume group is to be removed, use the vgremove command. Vgremove will only
remove a volume group after it has been vgreduce’d down to one physical volume. This
will be seen if you do a vgdisplay and check these two lines:

...
Cur PV 1
Act PV 1
...

If the values are different from each other, you will not be able to remove the volume group. If the Cur PV value is higher, but the volume group shows only one physical volume in both /etc/lvmtab and a vgdisplay -v listing, then you will never be able to vgremove it until the missing physical volumes are vgcfgrestore’d. If you cannot do this, the only option is to do a vgexport of the volume group - this is highly effective!



Module 15

Lab


SETUP FOR DAY 2 LABS

Day 2 labs require an HP 9000 computer with HP-UX 10.20 or HP-UX 11.0 installed. The only configured volume group should be vg00. This root volume group should have only one physical volume, should include only standard logical volumes, and should have at least 50 MB of contiguous, unallocated extents. There should be two unused physical volumes.

TASK 1: CREATING A LOGICAL VOLUME IN THE ROOT VOLUME GROUP

The purpose of this task is to become familiar with creating a new logical volume in an
existing volume group, creating a file system on the new logical volume, and mounting the
new file system. Appropriate techniques will be used to control the logical volume name,
size and extent allocation and the file system type and mounting process.

1. Using vgdisplay, determine the amount of free space in the root volume group.
Available free space can be calculated by multiplying FREE PE and PE SIZE
(MBYTES). At no time during the remainder of this exercise can logical volume(s)
be created that exceed this amount of free space.

# vgdisplay /dev/vg00 | more

Available free space =
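The calculation can be sketched with awk. The vgdisplay excerpt below is a hypothetical sample, though the field labels match real vgdisplay output:

```shell
# Sketch: compute available free space (Free PE x PE Size) from
# vgdisplay output. On a real system you would pipe:
#   vgdisplay /dev/vg00 | awk '...'
vgdisplay_out='PE Size (Mbytes)        4
Total PE                2500
Alloc PE                2100
Free PE                 400'

echo "$vgdisplay_out" | awk '
/^PE Size/ { pe_size = $4 }
/^Free PE/ { free_pe = $3 }
END        { print free_pe * pe_size, "MB free" }'
```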

2. Using vgdisplay, note the names of the logical volumes currently defined in the
root volume group. When you create new logical volumes with default names later
in this exercise, this list should allow you to be able to determine what default logical
volume name will be used.

# vgdisplay -v /dev/vg00 | more

3. Use the lvcreate command as shown on the command line below to create a 12
MB logical volume in the root volume group with a default logical volume name.

# lvcreate –L 12 /dev/vg00

Note the logical volume name and character device for the new logical volume:

Logical volume name: _____________________

Character device: _________________________


4. Use the lvdisplay command as shown on the command line below to examine the characteristics of the logical volume that you have created. Use the output from this command to answer the questions that follow.

# lvdisplay /dev/vg00/lvolX (where X matches the lvol name created in the step above)

Does the logical volume use strict extent allocation?

Does the logical volume use contiguous extent allocation?

Does the size of the logical volume match what you expected?

5. Use the lvcreate command as shown on the command line below to create a 10 MB logical volume in the root volume group with the name contiglvol.

# lvcreate –L 10 –C y –n contiglvol /dev/vg00

Note the logical volume name and character device for the new logical volume:

Logical volume name: _____________________

Character device: _________________________

6. Use the lvdisplay command as shown on the command line below to examine the characteristics of the logical volume that you have created. Use the output from this command to answer the questions that follow.

# lvdisplay /dev/vg00/contiglvol

Does the logical volume use strict extent allocation?

Does the logical volume use contiguous extent allocation?

Does the size of the logical volume match what you expected?

7. Now that you have created two new logical volumes, use the newfs command as shown below to create a file system in each of the logical volumes. Create an HFS file system in the logical volume named lvolX (where X is the lvol number created earlier in this exercise) and a JFS file system in the logical volume named contiglvol that was created earlier.

# newfs –F hfs /dev/vg00/rlvolX (where X is the lvol number created earlier)

# newfs –F vxfs /dev/vg00/rcontiglvol


8. Use the mkdir command to create two mount-point directories under the root
directory ( / ) for the two file systems. Use the name hfs_lvol for the HFS file
system and the name jfs_lvol for the JFS file system.

# mkdir /hfs_lvol
# mkdir /jfs_lvol

9. Use the cp command to make a backup copy of /etc/fstab, then use the
command vi to add the entry shown below to automatically mount the JFS file
system only. Do not make an entry for the HFS file system.

# cp /etc/fstab /etc/fstab.orig
# vi /etc/fstab

The JFS entry should look like this:

/dev/vg00/contiglvol /jfs_lvol vxfs delaylog 0 2

10. Use the mount command to automatically mount the JFS file system added to
/etc/fstab. Then use the bdf and mount commands to confirm that the file
system is mounted.

# mount –a
# bdf
# mount

11. Use the mount command to do a one-time manual mount of the HFS file system.
Then use the bdf and mount commands to confirm that the file system is mounted.

# mount –F hfs /dev/vg00/lvolX /hfs_lvol (where X is the lvol number created earlier)
# bdf
# mount

12. Prior to preparing these logical volumes for removal in a later exercise, let’s look at
what you have done.

You have created two logical volumes, one 12 MB and one 10 MB. One of these was created with a default name and the other with a specified name. One of these was created with strict extent allocation only and the other with both strict and contiguous extent allocation.

In each of these logical volumes you created a file system. One logical volume
contains a HFS file system and the other a JFS file system. One of these file systems
is automatically mounted at bootup and the other one has to be manually mounted
from the command prompt.


13. Use the cp command to place a copy of the current /etc/fstab file in each of these file systems, under the file name fstab.copy. You will use this file in a later exercise.

# cp /etc/fstab /hfs_lvol/fstab.copy
# cp /etc/fstab /jfs_lvol/fstab.copy

14. Use the umount command to unmount each of the two file systems.

# umount /dev/vg00/contiglvol
# umount /dev/vg00/lvolX
(where X is the lvol number created earlier)

15. Use the mount command to confirm that the two file systems are no longer mounted.

# mount

TASK 1 SUMMARY

Prior to creating new logical volumes in a volume group, it is always a good idea to use the
vgdisplay, pvdisplay, and lvdisplay commands to look at the current
configuration. These commands will show you existing logical volume names and the
number, size, and location of unallocated physical extents in the volume group.

Logical volumes are created with the lvcreate command. There are options to the command that allow you to choose the name (-n), size (-L or –l), and allocation policy (-C and –s) for the logical volume being created. Once a logical volume is created, it is always a good idea to use the lvdisplay command to confirm that it was created with the proper name, size, and allocation policy.

Once a logical volume is created, if it is to contain a file system for directories and files, the
file system must be created with the newfs command, a mount point directory created with
the mkdir command, and mounted with the mount command.

Two types of file systems can be created with the newfs command and the –F option. Use the –F hfs option to create a standard “High Performance File System” or -F vxfs to create a “Journaled File System” (JFS).

File systems may be manually one-time mounted with the mount command or may be
automatically mounted at boot time by creating an entry in the /etc/fstab file. New
entries in the /etc/fstab file can initially be activated with the command mount –a
rather than having to reboot the system.

File systems can be unmounted with the umount command. The mount command or the bdf command can be used to see whether a file system is mounted.


TASK 2: RENAMING A LOGICAL VOLUME

The purpose of this task is to demonstrate that LVM tracks a logical volume by its minor number (its LV entry), NOT by the name assigned.

1. Use the mv command to rename the contiglvol special files in /dev/vg00 to contig for the block special and rcontig for the character special.

# mv /dev/vg00/contiglvol /dev/vg00/contig
# mv /dev/vg00/rcontiglvol /dev/vg00/rcontig

2. Modify the /etc/fstab file to reflect this change.

# vi /etc/fstab

3. Run fsck on /dev/vg00/rcontig.

# fsck -F vxfs /dev/vg00/rcontig

Did it work?

4. Try to mount /jfs_lvol (/dev/vg00/contig).

# mount /jfs_lvol

Were you successful?

5. Perform a bdf.

# bdf

What is the special file for /jfs_lvol?

6. Verify that the file /jfs_lvol/fstab.copy still exists.

# ll /jfs_lvol/fstab.copy

7. Use the umount command to unmount the jfs_lvol file system. Then use vi to
remove the entry for the JFS file system that you modified earlier.

# umount /dev/vg00/contig
# vi /etc/fstab


TASK 2 SUMMARY

LVM uses logical volume minor numbers, not logical volume names, to track logical volumes. This means that there is a simple four-step process to rename a logical volume.

First, locate the block special and character special files for the logical volume to be renamed.
For a logical volume named myapplvol in a volume group named appvg the block special
file would be /dev/appvg/myapplvol and the character special file would be
/dev/appvg/rmyapplvol.

The second step is to make sure the logical volume is not in use. If it contains a file system,
use the umount command to unmount the file system.

The third step is to use the mv command to rename the two files identified earlier. Rename
the block special file as desired, then give the character special file the same name, except put
an r in front of the file name.

The final step is to change all references to the new logical volume name, such as entries in
/etc/fstab, to reflect the new name. At this point, the logical volume can be placed back
into use.
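The four steps can be sketched as a dry-run in which umount and mv are stubbed to echo their invocations; the group and lvol names are illustrative:

```shell
# Dry-run sketch of renaming a logical volume's special files.
umount() { echo "umount $*"; }
mv()     { echo "mv $*"; }

rename_lvol() {
  vg=$1; old=$2; new=$3
  umount "/dev/$vg/$old"                # step 2: make sure it is not in use
  mv "/dev/$vg/$old"  "/dev/$vg/$new"   # step 3: block special file
  mv "/dev/$vg/r$old" "/dev/$vg/r$new"  # ...and the character special file
  # step 4 (not stubbed here): update /etc/fstab and other references
}

rename_lvol appvg myapplvol applv
```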


TASK 3: EXTENDING A LOGICAL VOLUME

The purpose of this task is to become familiar with extending the size of an existing logical
volume and extending the size of the file system that currently resides on that logical volume.

1. Use the lvdisplay command to examine the size of the two logical volumes
created and modified in the earlier exercises. Remember that the names of the two
lvols should now be lvolX (where X matches lvol name created earlier) and
contig.

# lvdisplay /dev/vg00/contig
# lvdisplay /dev/vg00/lvolX (where X matches lvol name
created earlier)

Size of lvolX?

Size of contig?

2. Use the lvextend command to extend the size of contig to 24 MB and then extend the size of lvolX to 24 MB. Please note that contig must be extended first because when you created the logical volume in the earlier exercise, you used the –C y option to require contiguous extent allocation.

# lvextend –L 24 /dev/vg00/contig
# lvextend –L 24 /dev/vg00/lvolX (where X matches the lvol name created earlier)

3. Use the lvdisplay command to verify the sizes of the newly extended logical
volumes.

# lvdisplay /dev/vg00/contig
# lvdisplay /dev/vg00/lvolX (where X matches lvol name
created earlier)

4. Note that at this point, although the logical volumes have increased in size, the file
systems contained in those logical volumes are still the original size. Use the
extendfs command to extend the file system to the end of each of the logical
volumes.

# extendfs –F hfs /dev/vg00/rlvolX (where X matches the lvol name created earlier)

# extendfs –F vxfs /dev/vg00/rcontig

Note that while the extendfs command requires the file systems to be unmounted while they are extended, if the customer has the Online JFS product (also known as

Advanced VxFS), JFS file systems can be extended while in use with the fsadm
command.

5. Use the fsck command to verify that both file systems are intact. The extension
operation in the prior step should not have been destructive to the file system or to any
data that was there.

# fsck -F vxfs /dev/vg00/rcontig
# fsck -F hfs /dev/vg00/rlvolX (where X matches the lvol name created earlier)

6. Use the mount command to mount each of the file systems to confirm that they are
accessible from the system. Verify that the copy of /etc/fstab.copy still exists
in each of the file systems.

# mount -F vxfs /dev/vg00/contig /jfs_lvol
# mount –F hfs /dev/vg00/lvolX /hfs_lvol (where X matches the lvol name created earlier)

# ll /jfs_lvol/fstab.copy
# ll /hfs_lvol/fstab.copy

7. Use the umount command to unmount each of the two file systems. Then use vi to
remove the entry for contig from /etc/fstab.

# umount /dev/vg00/contig
# umount /dev/vg00/lvolX (where X is the lvol number created earlier)

8. When you created the new logical volume in vg00, the metadata information (lvm
configuration for vg00) was automatically saved in a file. This configuration file is
an important part of LVM restoration and recovery procedures that will be covered
elsewhere in this course. If there was a previous copy of the configuration, it was
copied first with an extension of “.old”. Record the full pathname to the configuration
file below.

Configuration backup file: ________________________________________


TASK 3 SUMMARY

The activities required to extend (make bigger) a logical volume depend on the allocation policy for the logical volume and the nature of the existing data in the logical volume. Here again, it is always a good idea to see what you are working with by using the bdf, lvdisplay, vgdisplay, and pvdisplay commands prior to attempting any changes.

As always, the customer should have a current, valid backup of the data before making any
logical volume changes.

First, determine the allocation policy for the logical volume (strict vs. non-strict and
contiguous vs. non-contiguous), and then determine whether physical extents are available
with which to extend the logical volume while adhering to that policy. Once you have
confirmed that suitable physical extents are available, use the lvextend command to
increase the size of the logical volume. Use the lvdisplay command to confirm your work.

If you are working with a logical volume containing an HFS file system, you have to use the
umount command to unmount the file system and then use the extendfs command to
extend the file system structures to accommodate the increased logical volume size. A JFS
file system requires the same activities unless the customer has the Online JFS product. Once
the file system is extended, use the mount command to mount the newly extended file
system. At this point, the bdf command should indicate that you have successfully
increased the file system size.
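The procedure summarized above can be condensed into a dry-run sketch that only prints the commands it would issue. The lvol path, mount point, and target size below are illustrative placeholders, not values from any particular system:

```shell
# Dry-run sketch of extending an HFS file system: prints the command
# sequence instead of running it. Arguments are placeholders.
extend_hfs_plan() {
  lvol=$1 mnt=$2 new_mb=$3
  rlvol=$(dirname "$lvol")/r$(basename "$lvol")   # character device file
  echo "umount $mnt"
  echo "lvextend -L $new_mb $lvol"                # grow the logical volume
  echo "extendfs -F hfs $rlvol"                   # grow the file system
  echo "mount -F hfs $lvol $mnt"
}
extend_hfs_plan /dev/vg00/lvol9 /hfs_lvol 48
```

For a JFS file system without Online JFS, the same sequence applies with extendfs -F vxfs; with Online JFS the file system can typically be grown while mounted, which this sketch does not cover.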


TASK 4: REDUCING A LOGICAL VOLUME

The purpose of this task is to become familiar with the procedure for reducing the size of a
logical volume and to understand the consequences to the file system and/or data residing in
the logical volume.

1. Never reduce the size of a logical volume without first understanding the
consequences to the data residing in the logical volume. Different actions may be
required depending on the nature of the data and the amount of space currently being
used.

Different actions are required depending on whether the logical volume contains a
database, a JFS file system, or an HFS file system. Likewise, the actions required for
a JFS file system that is currently only 25 percent full with little or no fragmentation
would be different from those for a JFS file system that is 90 percent full.

In many cases (as with an HFS file system), the actions could be as severe as
backing up the files or data, reducing the size of the logical volume, recreating the file
system, and restoring the files.

It is not the objective of this exercise to be able to determine the exact actions
required for a particular situation, but rather to understand that the appropriate actions
on the part of the customer are required.

In this exercise, you will be reducing the size of the logical volume containing the
HFS file system that you just extended in the previous exercise. In this case, because
of its file system type, after reducing the logical volume size, you will have to
recreate the file system and restore the data.

2. Use the lvreduce command to reduce the size of the logical volume holding the HFS
file system that you created and then extended in earlier exercises from 24 MB to 12
MB.

# lvreduce -L 12 /dev/vg00/lvolX (where X matches the lvol name created earlier)

3. Use the lvdisplay command to verify the size of the newly reduced logical
volume.

# lvdisplay /dev/vg00/lvolX (where X matches the lvol name created earlier)

4. Now that you have reduced the logical volume size, use the newfs command as
shown below to recreate the file system in the logical volume. Create an HFS file
system in the logical volume named lvolX (where X is the lvol number created
earlier in this exercise).


# newfs -F hfs /dev/vg00/lvolX (where X matches the lvol name created earlier)

5. Use the mount command to do a one-time manual mount of the HFS file system.

# mount -F hfs /dev/vg00/lvolX /hfs_lvol (where X is the lvol number created earlier)

Is the file named fstab.copy (copy of /etc/fstab) still in this file system?

Why or why not?

6. Use the umount command to unmount the HFS file system.

# umount /dev/vg00/lvolX (where X is the lvol number created earlier)

TASK 4 SUMMARY

Prior to reducing a logical volume, you need to examine the current configuration with the
bdf and lvdisplay commands.

Reducing the size of the logical volume with the lvreduce command is easy, but once
again the task becomes more difficult when the data residing in the logical volume is
considered. The complexity of the task depends on the nature of the data in the logical
volume.

With an HFS file system, the operation involves backing up the data, unmounting the file
system with the umount command, using lvreduce to reduce the size of the logical
volume, using newfs to recreate the file system, using the mount command to remount the
new file system, and then restoring the data that was backed up.


TASK 5: REMOVING A LOGICAL VOLUME


The purpose of this task is to become familiar with the procedure for removing a logical
volume and to understand the consequences to the file system and/or data residing in the
logical volume.

1. Use the mount command to confirm that the two file systems (HFS and JFS) that we
have been using in these exercises are not mounted. Also use the cat command to
confirm that neither file system has an entry for automatic mounts in /etc/fstab.

# mount
# cat /etc/fstab

2. Remove both of the logical volumes containing these file systems with the
lvremove command.

# lvremove /dev/vg00/contig

# lvremove /dev/vg00/lvolX (where X matches the lvol name created earlier)

3. Use the vgdisplay command to confirm that the two logical volumes are now gone
from the root volume group (vg00).

# vgdisplay -v /dev/vg00 | more

4. Try mounting either of the two file systems with the mount command.

# mount -F hfs /dev/vg00/contig /hfs_lvol

Does it work?

Is the data on the logical volumes still available?

TASK 5 SUMMARY

Removing a logical volume is fairly easy, assuming of course that the customer has moved
the data elsewhere or no longer needs the data.

First, use the umount command to unmount the file system and then check to see if there is
an entry for the file system in /etc/fstab. If so, use vi to remove the entry. Finally, use
the lvremove command to remove the logical volume.


TASK 6: CREATING A NEW VOLUME GROUP

The purpose of this task is to become familiar with the procedure for creating a new volume
group. New volume groups are often referred to as data volume groups because they are
separate from the root volume group vg00 and contain various types of customer files
and data.

1. Use the strings command to confirm that the only volume group defined on your
system is vg00 and to identify the physical volume that is a part of this volume
group. Then use the ll command to get the major and minor number for
/dev/vg00/group.

# strings /etc/lvmtab
# ll /dev/vg00/group

What physical volume is currently a part of vg00?

What is the major number for the /dev/vg00/group file?

What is the minor number for the /dev/vg00/group file?

2. Use the mkdir command to create a directory for vg01. Use the mknod command
to create a group file under this directory, using the major number from above and
the next available minor number (increment the two digits immediately following the
0x by one).

# mkdir /dev/vg01
# mknod /dev/vg01/group c 64 0x010000
(where 0x01… assumes no volume groups other than vg00)
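The minor-number rule in step 2 can be expressed as a small calculation. This sketch assumes the conventional 0xNN0000 layout, where NN is the volume group number:

```shell
# Given the minor numbers of the existing group files, compute the next
# one in the conventional 0xNN0000 scheme (NN = volume group number).
next_group_minor() {
  max=-1
  for m in "$@"; do
    vg=$(( m >> 16 ))                  # isolate the NN digits
    [ "$vg" -gt "$max" ] && max=$vg
  done
  printf '0x%02x0000\n' $(( max + 1 ))
}
next_group_minor 0x000000              # only vg00 exists -> 0x010000
```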

3. Using ioscan, identify two physical volumes available on the system. Do not use
the physical volume identified earlier as a part of vg00.

# ioscan -fnC disk

Character and block device files for “First available disk”:

Character and block device file for “Second available disk”:

These two disks will be referred to throughout the remainder of this lab as “first
available disk” and “second available disk”.


4. Use the pvcreate command on the character device file for the “first available
disk” to create a physical volume for use in an LVM volume group. Use the -f
option since the disk most likely has been used in prior LVM exercises.

# pvcreate -f /dev/rdsk/cXtYdZ (where X, Y, and Z were determined earlier)

5. Now use the vgcreate command to create a volume group named vg01 using the
block device file for the “first available disk”.

# vgcreate /dev/vg01 /dev/dsk/cXtYdZ (where X, Y, and Z were determined earlier)

6. Use the pvcreate command on the character device file for the “second available
disk” to create a physical volume for use in an LVM volume group. Use the -f
option since the disk most likely has been used in prior LVM exercises.

# pvcreate -f /dev/rdsk/cXtYdZ (where X, Y, and Z were determined earlier)

7. Now use the vgextend command to add the disk to the vg01 volume group using
the block device file for the “second available disk”.

# vgextend /dev/vg01 /dev/dsk/cXtYdZ (where X, Y, and Z were determined earlier)

8. Use the strings and vgdisplay commands to confirm that vg01 has been
created on the “first and second available disks” and is currently activated.

# strings /etc/lvmtab
# vgdisplay -v /dev/vg01 | more

TASK 6 SUMMARY

Prior to creating a new volume group, you need to determine the name for the new volume
group, choose a minor number for the group file for the new volume group, and identify the
disk or disks that will become a part of this volume group.

Use the strings command to determine what volume groups currently exist and which
physical volumes are being used by those volume groups. Neither the names nor physical
volumes shown here can be used in your new volume group.

Use the ll command to look at the minor number for the group file for each of the existing
volume groups. None of these minor numbers can be used for your new volume group.

Use the ioscan command to determine what disks are available on the system. Remember,
you cannot use any disk that is already a part of an existing volume group or that has any type
of non-LVM data that the customer needs to keep.


Once you have all this information, you can create the new volume group. Use the mkdir
command to create a volume group directory under /dev. Then use the mknod command to
create the group file for the new volume group under this directory using the appropriate
minor number. Next, use the pvcreate command to prepare the new physical volumes.
Use the vgcreate command to create the new volume group. You can use the vgextend
command to add additional physical volumes in the new volume group.


TASK 7: CREATING LVOLS IN THE NEW VOLUME GROUP

The purpose of this task is to become familiar with the procedure for creating logical volumes
in the new data volume group and to become more familiar with manipulating logical
volumes.

1. Use the lvcreate command as shown on the command line below to create a 50
MB logical volume in the vg01 volume group with the name datalvol1. Have
the logical volume use contiguous allocation policy.

# lvcreate -L 50 -C y -n datalvol1 /dev/vg01

2. Verify with the lvdisplay command that the logical volume has been created with
the desired name, size, and allocation policy.

# lvdisplay /dev/vg01/datalvol1

3. Now create another 50 MB logical volume in vg01 with the name datalvol2. Use
the default allocation policy for this logical volume.

# lvcreate -L 50 -n datalvol2 /dev/vg01

4. Once again, use lvdisplay to verify that the logical volume has been created with
the desired name, size, and allocation policy.

# lvdisplay /dev/vg01/datalvol2

5. Try using the lvextend command to extend the size of datalvol1 to 100 MB.

# lvextend -L 100 /dev/vg01/datalvol1

This should not work because datalvol1 has a contiguous allocation policy and
the physical extents allocated to datalvol2 are in the way.

6. Try using the lvextend command to extend the size of datalvol2 to 100 MB.

# lvextend -L 100 /dev/vg01/datalvol2

This operation should succeed because datalvol2 uses only the strict allocation policy
and there is enough space available on the disk.

7. Use the lvremove command to remove datalvol2 from vg01.

# lvremove /dev/vg01/datalvol2


8. Now try using the lvextend command to extend the size of datalvol1 to 100
MB.

# lvextend -L 100 /dev/vg01/datalvol1

This should work this time because datalvol1 has a contiguous allocation policy
and the required physical extents are available to extend the logical volume.
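The behavior seen in steps 5 through 8 can be modeled with a toy extent map, one character per physical extent. This illustrates the contiguous-allocation rule only; it is not how LVM actually tracks extents:

```shell
# '1' = extents of datalvol1, '2' = datalvol2, '.' = free extent.
# Contiguous extension needs enough free extents immediately after
# the lvol's last allocated extent.
can_extend_contig() {
  map=$1 lv=$2 need=$3
  after=${map##*"$lv"}                 # extents beyond the lvol's last PE
  case $after in
    *[!.]*) run=${after%%[!.]*} ;;     # free run ends at first allocated PE
    *)      run=$after ;;
  esac
  [ ${#run} -ge "$need" ]
}
can_extend_contig '111122..' 1 2 || echo "blocked by datalvol2"
can_extend_contig '1111....' 1 2 && echo "extends after lvremove"
```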

9. In order to save time, we did not take the time to create file systems in the logical
volumes that we created, create mount point directories, or mount file systems as a
part of this exercise.

Without doing any of these activities, can we store files in datalvol1?

TASK 7 SUMMARY

The actions required to add logical volumes in a data volume group are the same as those
outlined earlier in Task 1.


TASK 8: REMOVING VOLUME GROUPS

The purpose of this exercise is to become familiar with one procedure for removing a data
volume group.

1. Use the command vgdisplay to list the logical volumes that are still defined in the
vg01 volume group and to see which physical volumes are still a part of the volume
group.

# vgdisplay -v /dev/vg01 | more

2. Now try using the vgremove command to remove vg01.

# vgremove /dev/vg01

3. From the error message, it is apparent that we need to use the lvremove command
to remove the logical volumes that reside on this volume group. Use the lvremove
command to remove the logical volume(s) at this point.

# lvremove /dev/vg01/datalvol1

4. Once again, try using the vgremove command to remove vg01.

# vgremove /dev/vg01

5. Once again, from the error message it is apparent that we have to use the vgreduce
command to reduce vg01 to one disk before it can be removed. Use the vgreduce
command to remove one of the physical volumes from vg01.

# vgreduce /dev/vg01 /dev/dsk/cXtYdZ (substitute X, Y, and Z as appropriate)

6. Once again, try using the vgremove command to remove vg01. Since you only
have one physical volume present in the volume group, it should now work.

# vgremove /dev/vg01

TASK 8 SUMMARY

Removing a volume group is fairly easy, assuming of course that the customer has moved the
data elsewhere or no longer needs the data.

First, use the umount command to unmount all file systems residing in logical volumes in
the volume group, and then check to see if there are entries for the file systems in
/etc/fstab. If so, use vi to remove the entries. Remember, you will have to use the
bdf command and the display commands vgdisplay, pvdisplay, and lvdisplay to
obtain the information needed to do this task.


Next, use the lvremove command to remove each of the logical volumes configured in the
volume group.

Once you have removed the logical volumes, you can use the vgreduce command to
remove physical volumes from the volume group until you reduce the volume group to one
physical volume. Then use the vgremove command to remove the volume group.
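The removal order above can also be condensed into a dry-run sketch that prints the commands in sequence; the names are the ones used in this lab, and nothing is actually removed:

```shell
# Dry-run sketch of removing a data volume group: logical volumes
# first, then surplus physical volumes, then the group itself.
teardown_vg_plan() {
  vg=$1; shift
  for lv in "$@"; do
    echo "lvremove $vg/$lv"
  done
  echo "vgreduce $vg /dev/dsk/cXtYdZ   (repeat until one PV remains)"
  echo "vgremove $vg"
}
teardown_vg_plan /dev/vg01 datalvol1
```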


TASK 9: EXTENDING THE ROOT VOLUME GROUP

The purpose of this exercise is to become familiar with the procedure for adding disks to the
root volume group.

1. Use the strings command to confirm that the only volume group defined on your
system is vg00 and to identify the physical volume that is a part of this volume
group.

# strings /etc/lvmtab

What physical volume is currently a part of vg00?

2. Using ioscan, identify a physical volume available on the system. Do not use the
physical volume identified earlier as a part of vg00.

# ioscan -fnC disk

Character and block device files for “First available disk”:

This disk will be referred to throughout the remainder of this lab as the “first available disk”.

3. Use the pvcreate command on the character device file for the “first available
disk” to create a physical volume for use in an LVM volume group. Use the -f
option since the disk most likely has been used in prior LVM exercises.

# pvcreate -f /dev/rdsk/cXtYdZ (where X, Y, and Z were determined earlier)

4. Now use the vgextend command to add the disk to the vg00 volume group using
the block device file for the “first available disk”.

# vgextend /dev/vg00 /dev/dsk/cXtYdZ (where X, Y, and Z were determined earlier)

5. Use the strings and vgdisplay commands to confirm that vg00 now includes
the “first available disk”.


TASK 9 SUMMARY

Use the strings command to determine what volume groups currently exist and which
physical volumes are being used by those volume groups. Use the ioscan command to
determine what disks are available on the system. Remember, you cannot use any disk that is
already a part of an existing volume group or that has any type of non-LVM data that the
customer needs to keep.

Next, use the pvcreate command to prepare the new physical volume and then use the
vgextend command to add the additional physical volume to the root group.

In order to use the new space, logical volumes need to be extended to or created on the new
physical volume.


TASK 10: REDUCING THE ROOT VOLUME GROUP

The purpose of this exercise is to become familiar with the procedure for removing disks
from the root volume group.

1. Use the strings command to identify the physical volumes that are a part of vg00.

# strings /etc/lvmtab

2. Now use the pvdisplay command to examine each of the physical volumes in
vg00 to determine which physical volume doesn’t have any extents allocated to
logical volumes in vg00. You should find that the disk that we added in the
previous exercise doesn’t have any allocated extents.

# pvdisplay /dev/dsk/cXtYdZ (where X, Y, and Z were determined above)

# pvdisplay /dev/dsk/cXtYdZ (where X, Y, and Z were determined above)

3. Use the vgreduce command to remove the physical volume identified in the above
step.

# vgreduce /dev/vg00 /dev/dsk/cXtYdZ (where X, Y, and Z were determined above)

TASK 10 SUMMARY

Reducing the root volume group involves taking a physical volume that has no allocated
extents and removing it from the volume group with the vgreduce command. The
pvdisplay command can be used to verify that the physical volume has no allocated
extents.

If a physical volume has allocated extents, it cannot be removed from the volume group until
the extents are freed as a part of a series of operations using commands like lvremove,
lvreduce, and pvmove.


TASK 11: DO IT ON YOUR OWN

To this point you have been given all the commands to complete the various tasks associated
with using LVM. In this task, try performing those tasks without the commands given. You
can always refer back to previous sections. As in the previous exercises, always use the
appropriate display commands to verify your work at the end of each step.

1. Confirm that your system has only one volume group (vg00). If there are other volume
groups, take the appropriate steps to remove those volume groups.

Command:

2. Confirm that the root volume group (vg00) contains only one physical volume. If there
are other physical volumes present, take the appropriate steps to remove those physical
volumes.

Command:

3. Add a second physical volume to the root volume group (vg00).

Command:
Command:

4. Create a 24 Mbyte logical volume named mylvol in vg00 and determine which
physical volume(s) this new logical volume resides on.

Command:
Command:

5. Create a second 24 Mbyte logical volume in vg00, using the default logical volume
name, and placing the logical volume on the second physical volume in the volume
group. Confirm that the newly created logical volume resides on the second physical
volume.

Command:
Command:
Command:

6. Create file systems in the two logical volumes created earlier. Make one JFS and the
other one HFS. Create mountpoint directories for these two file systems and mount them
WITHOUT creating entries in /etc/fstab.

Command:
Command:
Command:
Command:
Command:
Command:

7. Take the appropriate steps to remove the logical volume named mylvol.

Command:
Command:

8. Take the appropriate steps to remove the second physical volume in vg00 from the
volume group.

Command:
Command:
Command:

9. Create a new two disk volume group named myvg.

Command:
Command:
Command:
Command:
Command:
Command:

10. Create a 24 Mbyte logical volume named hsdlvol in the myvg volume group, making
sure that all extents will be allocated to one disk in the volume group.

Command:
Command:

11. Extend the hsdlvol logical volume to 48 Mbytes, but this time make sure that the
additional extents are allocated from the other disk in the volume group.

Command:


12. Create a JFS file system in hsdlvol. Mount this file system to a mountpoint directory
called hsd without putting an entry in /etc/fstab.

Command:
Command:
Command:

13. Take the steps necessary to reduce the number of physical volumes in the volume group
myvg to one. Make sure that when you are finished you still have a logical volume
named hsdlvol although it will now only be 24 Mbytes.

Command:
Command:
Command:
Command:

14. Can you still mount hsdlvol? Why or why not?

15. Can you successfully fsck hsdlvol? Why or why not?

16. Remove the volume group myvg.

Command:
Command:

SUCCESS WITH LOGICAL VOLUME MANAGER
♦ Module 16: vgimport/vgexport

Module 16

vgimport/vgexport


Exporting a Volume Group


# vgexport vg01

[Slide: VGEXPORT: vg01 is removed from System A's configuration; the data remains intact on its physical volumes]

The vgexport process provides a means to remove a volume group from a system but
retain ALL of the LVM metadata and USER data on the physical volume.

What is removed is the volume group’s information from /etc/lvmtab, all kernel
information, and the volume group’s /dev/<vgname> directory.

Since all of the data is intact, the volume group can be imported on the original or another
system. vgexport can also be used in place of vgreduce/vgremove to quickly remove
a volume group from a system.

The -p (preview) option will show what actions the command will do but does not perform
any action.


Importing a Volume Group


On System A:
# vgexport [-m mapfile] vg01

On System B:
# mkdir /dev/vg01
# mknod /dev/vg01/group
# vgimport [-m mapfile] vg01 {pvs}

[Slide: VGIMPORT: vg01 moves from System A to System B]

The vgimport process allows the addition of a volume group which has already been
configured on a system. The system which created the volume group could be a completely
different system.

There are several options to vgexport and vgimport. With vgexport, the -m option
creates an ASCII text mapfile containing the logical volume names for this volume group.
This is only necessary if you are using non-standard logical volume names. The mapfile is
copied (via ftp, tar, …) to the new system to which this volume group will be attached.
Then vgimport will reference this file for the logical volume names to use.

On the node to import to, the volume group directory and group file must be created prior to
the vgimport command. Each physical volume in the volume group MUST be specified;
however, if the volume group was exported with the -s option, importing with the -s
option will search all buses for any physical volume with this volume group ID (VGID). In
order for this to work, you MUST use the mapfile option (-m).
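As an illustration only, a mapfile for a group with custom logical volume names pairs each logical volume number with its name, one per line; the names below are hypothetical:

```
1 datalvol1
2 datalvol2
3 archivelvol
```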

The -p (preview) option will show what actions the command will do but does not perform
any action.


Using vgimport to Recover (step 1)


# vgscan

[Slide: vgscan examines the disks of a lost volume group (vg??, lvol?) alongside the root VG]

Using vgimport to recover a lost volume group involves two steps:

1) vgscan

This scans the disks and suggests which groups of disks should be vgimport’ed.


Using vgimport to Recover (step 2)


# vgimport vg01 /dev/dsk/c0t2d0

[Slide: vgimport recreates vg01 and its logical volume device files alongside the root VG]

2) vgimport

The group file must be created first with mknod before doing this.

The vgimport command needs the desired name of the volume group and the device files,
as found by vgscan, of all its Physical Volumes. vgimport will then create all the logical
volume device files required and update /etc/lvmtab.

NOTE If the logical volumes had names other than lvol1, lvol2, etc., these can simply
be renamed using mv. Alternatively, if a mapfile had been available, it could
have been specified with vgimport -m.


USING VGIMPORT AND VGEXPORT EFFECTIVELY

The following is taken from an article in PA-NEWS number 198.

A system recovery which involves a re-install, many VGs, and re-ordered Instances (thanks
to auto-config) can be tedious.

The utility most people will try to use first is vgscan.

However, vgscan only works if:

• All LVM device files for the VG exist
• All the information contained in those device files is scrupulously correct (minor
numbers, etc.)
• The /etc/lvmtab file contains no mention of the VG
• The /etc/lvmtab file does not have the PV device file Instances assigned to another
VG

If the LVM information is incorrect or unknown then it is better to use vgexport and
vgimport than to try to guess it using rmsf/insf/vgscan.

Case #1 The Root disc crashes requiring a re-install

After the re-install you realize that the Instance numbers have changed.

Normally, a complete restore from a current full backup of the root disc (including device
files) followed by a reboot should clear this up, but in real life the customer never has one
when you need it. (Murphy’s Law ...)

What are your alternatives?

1) If the Volume Group files exist, you could use rmsf and insf to get all the device files
correctly assigned again and then vgscan. If the Volume Group files do not exist but
their names and minor numbers are known then they could be recreated, followed by
rmsf/insf/vgscan as above. This may work if the number of discs involved is
small.

2) You could recreate all the VG’s and their LV’s and then restore the data. It’s reliable but
not very practical.

3) You could use vgexport/vgimport. This way you don’t have to worry about
Instances; you only need to know which disks belonged to which VG.


Example 1

/dev/dsk/c0t5d0, c0t6d0, and c0t4d0 are all members of /dev/vg05. The system
crashes and needs to be restored. After the re-install, c0t5d0 is OK but c0t6d0 and
c0t4d0 have been reconfigured as c0t10d0 and c0t12d0 respectively. Of course, the
customer does not have a backup of /dev or /etc/lvmtab, so /dev/vg05 does not get
activated at boot time.

You have to recreate the Volume Group.

Example 1 Answer

As there is no backup, the problems lie in determining the volume group number and name,
and finding out which discs were originally in the configuration.

To recreate this group:

1) # vgscan
This will tell you which discs have the same associated information.

2) # mkdir /dev/vg05; mknod /dev/vg05/group c 64 0x0y0000 (substitute the VG number for y)

3) # vgimport /dev/vg05 /dev/dsk/c0t5d0 /dev/dsk/c0t10d0 /dev/dsk/c0t12d0

Example 2
Instead of a crash, the disks were moved around onto different busses to improve utilization.
The mappings after modification and boot are also identical to the example above. This time
2 of the 3 PVs that make up the group have not lined up with their original instances;
quorum was not met, so the VG was not activated. Restore the LVM configuration.

Example 2 Answer

In this case, the directory /dev/vg05 exists and includes its LV device files. Also, the VG
and its original PVs are listed in /etc/lvmtab. We can use vgexport/vgimport to
include the PVs in the VG with their new LU numbers.

1) Note the minor number of /dev/vg05/group - this will be removed by vgexport.

2) If the VG did get activated somehow (manually or quorum was met) de-activate it using
vgchange -a n /dev/vg05

3) vgexport -m /tmp/mapfile /dev/vg05

4) mkdir /dev/vg05; mknod /dev/vg05/group c 64 0xXX0000 (substitute the VG number for XX, as found in step 1)


5) vgimport /dev/vg05 /dev/dsk/c0t5d0 /dev/dsk/c0t10d0 /dev/dsk/c0t12d0

6) vgchange -a y /dev/vg05

Case #2 Could we have a spare disc ready for online replacement of a failed disc?

This is possible, but while the disc was being replaced the VG would not be available.

The better (and more costly) solution is LVM mirroring.

Example 3

/dev/dsk/c0t6d0, c0t4d0 and c0t8d0 are members of /dev/vg03. /dev/dsk/c0t9d0 is
an empty disc configured into the system but not a member of any group. c0t4d0 has
problems: recover using c0t9d0.

Example 3 Answer

Replace it with c0t9d0:

1) Note the minor number of /dev/vg03/group - this will be removed by vgexport.

2) If the VG did get activated somehow (manually or quorum was met) de-activate it using
vgchange -a n /dev/vg03

3) vgexport -m /tmp/mapfile /dev/vg03

4) mkdir /dev/vg03 ; mknod /dev/vg03/group c 64 0x0y0000 (substitute the VG number for y)

5) vgcfgrestore /dev/vg03 -o /dev/dsk/c0t4d0 /dev/dsk/c0t9d0

6) vgimport /dev/vg03 /dev/dsk/c0t6d0 /dev/dsk/c0t9d0 /dev/dsk/c0t8d0

7) vgchange -a y /dev/vg03

8) Use pvdisplay -v /dev/dsk/c0t9d0 | more to see which LVs were affected by the
loss of this disc, and restore data to those LVs.


Volume Group Identifier (VGID)

[Slide: the VGID, composed of the software ID (swid) and a timestamp, is stored in the PVRA and VGRA of each physical volume in vg01]

The Volume Group Identifier (VGID) is composed of two words. The first word is the
machine software ID (SWID) stored in Stable Storage. This string should be unique among
ALL HP computers since it is keyed from the model number and serial number of the machine.

The second word is a timestamp recording when the vgcreate command was performed. The
VGID is stored in the PVRA and VGRA LVM metadata areas. The VGID is also stored in the
/etc/lvmtab file on the system for each individual volume group.

If the VGID in the /etc/lvmtab file does NOT match the VGID for a volume group, that
volume group will NOT activate.

When vgexport is performed on a volume group, all of the LVM metadata structures,
including the VGID, are left intact on all of the volume group’s physical volumes. The
volume group is removed from the kernel and the /etc/lvmtab file.

The vgimport process works by writing the VGID for a volume group into the
/etc/lvmtab file. The SWID of the machine that created the volume group DOES NOT
have to be the same as the machine that imports the volume group.
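The two-word layout described above can be sketched in shell. The SWID and timestamp values below are hypothetical and the real on-disk encoding may differ; the point is only that two 32-bit words side by side give a machine-plus-moment identifier:

```shell
# Compose an illustrative VGID from a machine SWID word and a vgcreate
# timestamp word (both printed as 32-bit hex).  NOT the actual on-disk format.
make_vgid() {
    swid=$1; tstamp=$2
    printf '%08x%08x\n' "$swid" "$tstamp"
}

# Hypothetical SWID 0x2010ab34, timestamp 974073600 (13-Nov-2000 00:00 UTC):
make_vgid $((0x2010ab34)) 974073600
```

Two volume groups created on the same machine differ only in the timestamp word; two machines differ in the SWID word.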


Module 17

PVlinks


Physical Volume Links

PHYSICAL VOLUME LINKS

Physical Volume links (PVlinks) allow multiple paths to a disk.

• up to 4 links to boot disks, depending on the hardware used; for example:

– 1 boot disk in the root VG can have 4 links, or
– 2 boot disks in the root VG can have 2 links each

• up to 8 links for data disks, depending on the hardware used.

• works with SCSI, HP-FL disks and disk arrays.


– A3231A/A3232A disk array (NIKE) can have dual controllers - connect
one to each SPU interface card. Autotrespass must be enabled.

• If the primary link fails, LVM will re-route I/O to a surviving link
– Only one link can be active at a time.
– LVM does not perform any kind of automatic I/O load balancing between links.

• LVM does not maintain information about which link is the primary across reboots
– it chooses the first link device file listed in /etc/lvmtab as the primary.


• For dual controller devices pvchange has options to control the behavior of PVLinks
– /sbin/pvchange -s pv_path
Manually switch access to the device to the path named by pv_path,
specifying it as the primary path.

– /sbin/pvchange -S [y|n] pv_path
Controls whether or not the system will automatically switch back from the
current controller to the original controller after the original controller has
recovered from a failure. Default is y.

CAUTION From PA-NEWS #234


With HP-UX 10.20 ONLY, if either patch PHKL_8172, 9530 or 9000 are
installed on a system that uses Nike Disk Arrays with Primary/Alternate Links
configured, [...] problems will be seen IF the embedded Chassis Serial
Number of the Nike Array is NOT unique.
...LVM makes use of the ‘unique’ embedded serial number of the Nike chassis.
[as well as the PVID]. This serial number is written on the database mechs at
the factory prior to shipping.
Do not try to copy microcode from one array to another by swapping one of
the database drives!

Creating Physical Volume Links

There are no special commands or options to enable this feature: its behavior depends on the
current link state of the volume group. For a new volume group the commands are as follows.

1. Create the physical volume using only one of the device files

pvcreate /dev/rdsk/c2t6d0

2. Create the volume group specifying the device file of one of the hardware paths (this
will become the primary link, although it is not labeled as such yet)

vgcreate /dev/vgdbase /dev/dsk/c2t6d0

3. Create the secondary link by vgextend’ing the volume group specifying the device
file of the other hardware path to the disk. Do not run pvcreate on this device file!

vgextend /dev/vgdbase /dev/dsk/c3t6d0

LVM reads the physical volume ID from the disk and, seeing that it already belongs to the
same volume group, it does not increase the number of physical extents available.


If you now run vgdisplay you will see the following output:

# vgdisplay -v
--- Volume groups ---
VG Name /dev/vgdbase
VG Write Access read/write
VG Status available
Max LV 255
Cur LV 1
Open LV 1
Max PV 16
Cur PV 1
Act PV 1
Max PE per PV 2000

--- Physical volumes ---

PV Name /dev/dsk/c2t6d0
PV Name /dev/dsk/c3t6d0 Alternate link
PV Status available
Total PE 249
Free PE 109

Here you can see that the volume group only has one disk but there are two device files with
different card instance numbers. The first one is known as the primary link and is the link
which is currently active. The second device file which is labeled “Alternate link” is
in fact the secondary link. If there were more secondary links then these would be listed here
too.
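Because "Alternate link" is the only marker distinguishing the paths, scripts commonly scan vgdisplay output for it. A sketch, fed the two PV Name lines from the sample output above (the device paths are just the example's):

```shell
# Classify PV paths from "vgdisplay -v"-style text read on stdin:
# a trailing "Alternate link" marks a secondary path.
list_pvlinks() {
    awk '/^PV Name/ {
        if ($NF == "link")            # line ends in "Alternate link"
            print "alternate: " $3
        else
            print "primary: " $3
    }'
}

list_pvlinks <<'EOF'
PV Name /dev/dsk/c2t6d0
PV Name /dev/dsk/c3t6d0 Alternate link
EOF
```

On a real system you would pipe `vgdisplay -v /dev/vgdbase` into the function instead of the here-document.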

Removing a Physical Volume Link

Removing a PVLink is easy and is the same for both primary and secondary links.

Using vgreduce, remove the device file from the volume group which points to the link you
want to remove:

vgreduce /dev/vgdbase /dev/dsk/c3t6d0

This will remove the secondary link from the previous example.

Moving the primary (active) PVLink

Let’s say that you have a NIKE disk array and the alternate link is pointing to the controller
(SP) that the LUN is bound to. This will cause lots of autotrespass errors. To fix this you
need to swap the primary and alternate link definitions permanently.


To do this you just need to do:

1. vgreduce /dev/vgdbase /dev/dsk/cXtYd0
(/dev/dsk/cXtYd0 is the link that is currently the primary)
LVM will then switch to using the alternate link, which now becomes the primary.

Then using the same device file

2. vgextend /dev/vgdbase /dev/dsk/cXtYd0

This now defines that path to be the alternate link. The order of the files in the
/etc/lvmtab file is now reversed.

Although this should take place without too much interruption to the system, it would be
prudent not to do this while the disk array is in use, if this is at all possible.


Module 18

pvmove


pvmove

# pvmove <old_pv> <new_pv>

# pvmove -n <logical_volume> <old_pv> <new_pv>

[Diagram: LV extents and user data moving from PV 0 to PV 1 within vg01]

MOVING DATA IN LOGICAL VOLUMES FROM DISK TO DISK

You can use the pvmove(1M) command to move data contained in logical volumes from one
disk to another disk within a volume group.

For example, you can move a logical volume’s data from one disk to another to use the space
on the first disk for some other purpose.

Also, you can move all data from one disk to another. You might want to do this, for
example, so you can remove a disk from a volume group. After moving the logical volume
data off a disk, you can then remove the disk from the volume group.

Example:
Moving Logical Volume Data from One Disk to Another

Suppose you want to move the data in a logical volume, /dev/vg01/markets, from the
disk /dev/dsk/c0t3d0 to the disk /dev/dsk/c0t4d0.

Note that when you issue the command, you must specify a specific logical volume on the
source disk with the -n flag. You must also specify the source disk first in the command
line. For example,

# pvmove -n /dev/vg01/markets /dev/dsk/c0t3d0 /dev/dsk/c0t4d0


Example:
Moving All Data on One LVM Disk to Another

If you want to move all the data on a given disk within the same volume group, you can use
the pvmove(1M) command to move the data to other specific disks or let LVM move the
data to available space within the volume group.

To move data off disk /dev/dsk/c0t4d0 and let LVM transfer the data to available space
in the volume group, you could enter:

# pvmove /dev/dsk/c0t4d0

To move data off disk /dev/dsk/c0t4d0 to the destination disk /dev/dsk/c0t5d0,
enter:

# pvmove /dev/dsk/c0t4d0 /dev/dsk/c0t5d0

If enough free space doesn’t exist on the destination disk, the pvmove command will not succeed.
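One way to anticipate that failure is to compare the allocated PEs on the source with the free PEs on the destination, both of which pvdisplay reports. A sketch with made-up counts (field names modeled on pvdisplay output):

```shell
# Extract PE counts from pvdisplay-style text read on stdin.
alloc_pe() { awk '$1 == "Allocated" && $2 == "PE" {print $3}'; }
free_pe()  { awk '$1 == "Free"      && $2 == "PE" {print $3}'; }

# PEs used on the source disk (illustrative numbers):
used=$(printf 'Total PE 249\nFree PE 109\nAllocated PE 140\n' | alloc_pe)
# Free PEs on the intended destination disk:
free=$(printf 'Total PE 249\nFree PE 150\nAllocated PE 99\n' | free_pe)

if [ "$free" -ge "$used" ]; then
    echo "pvmove can fit: $used PEs to move, $free free"
else
    echo "pvmove would fail: $used PEs to move, only $free free"
fi
```

On a real system the printf sample text would be replaced by `pvdisplay /dev/dsk/cXtYdZ`.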


Module 19

Change Commands


Change Commands

• Physical Volume - pvchange
• Volume Group - vgchange
• Logical Volume - lvchange

[Diagram: volume group /dev/vgXX containing physical volumes PV 0 and PV 1
(/dev/dsk/cXtYdZ) and logical volumes lvol1, lvol2, lvol3]

At times it is desired or necessary to change certain attributes of physical volumes, volume
groups, or logical volumes. These changes are made with the corresponding change
commands - pvchange, vgchange, and lvchange.

pvchange alters the physical volume attributes. These include the extent allocation for this
pv, IO timeouts, and switching for pvlinks.

vgchange is the most frequently used of the change commands, since it MUST be used to
activate and deactivate a volume group. There are other options which can alter the way the
volume group is activated. Remember, vgchange works on the entire volume group.

lvchange alters a specific logical volume. The changeable attributes include allocation
policies (e.g. contiguous/noncontiguous), striping and mirroring, mirroring parameters, and
activation.


Module 20

Lab


SETUP FOR DAY 3 LABS

Day 3 labs require an HP9000 computer with HP-UX 10.20 or HP-UX 11.0 installed and at
least three (3) physical volumes. A minimum of two (2) volume groups is needed, vg00 and
a second (e.g. vg01). vg00 can have the standard logical volumes while vg01 should have
at least three (3) logical volumes setup with NON-STANDARD names (first, second, third).
An external disk in a pvlinks configuration is needed only for the optional pvlinks lab.

TASK 1: EXPORTING AND IMPORTING VOLUME GROUPS

The purpose of this task is to properly export and import volume groups, to modify a
mapfile, and to identify how the export/import process works.

1. Determine the volume group name, volume group number, and physical volume(s) of
your non-root volume group.

# strings /etc/lvmtab

Volume group name: ___________________________
Volume group number: _________
Physical Volume(s): ___________________________________________

2. Determine the logical volume names associated with this volume group. Replace
<vgname> with the volume group name obtained in step 1.

# ls /dev/<vgname>
# vgdisplay –v <vgname> | more

Record the lvol names and sizes:

Logical Volume Name Size

3. Using vgexport, try to remove the non-root volume group. Create a map file as part of
the process. Replace <vgname> with the volume group name obtained in step 1.

# vgexport -v -m /tmp/<vgname>.map <vgname>


Did it work? ______

Why or why not?

4. Deactivate the volume group.

# vgchange –a n <vgname>

5. Use vgexport to try and remove the non-root volume group. Create a map file as part
of the process.

# cd /
# vgexport -v -m /tmp/<vgname>.map <vgname>

Did it appear to work? __________

6. Verify this volume group is no longer configured on the system.

# strings /etc/lvmtab

7. Using the mapfile created in step 5 and physical volume(s) identified in step 1, try to
import the volume group. Substitute cXtYdZ with the appropriate CARD INSTANCE
(X), TARGET ADDRESS (Y), and LUN NUMBER(Z).

NOTE If you have more than one physical volume, you need to add each
/dev/dsk/cXtYdZ entry.

# vgimport -v -m /tmp/<vgname>.map <vgname> \
/dev/dsk/cXtYdZ

What error did you receive?

8. Recreate the volume group directory and control file. Then, using the mapfile created in
step 5 and the physical volume(s) identified in step 1, import this volume group. Replace
the “?” with the volume group number obtained in step 1. Substitute cXtYdZ with the
appropriate CARD INSTANCE (X), TARGET ADDRESS (Y), and LUN NUMBER(Z).

NOTE If you have more than one physical volume, you need to add each
/dev/dsk/cXtYdZ entry.


# mkdir /dev/<vgname>
# mknod /dev/<vgname>/group c 64 0x0?0000
# vgimport -v -m /tmp/<vgname>.map <vgname> \
/dev/dsk/cXtYdZ

Did it appear to work? ___________

What message(s) did you receive?

9. Try displaying the volume group’s information.

# vgdisplay <vgname>

What error message did you receive?

10. Activate <vgname>.

# vgchange –a y <vgname>

Verify the volume group has been activated.

# vgdisplay –v <vgname>

What are the names of the logical volumes that were imported?

11. Perform a vgcfgbackup for <vgname>.

# vgcfgbackup <vgname>

To this point we have exported and imported a volume group. In order to export a volume
group, the volume group MUST be deactivated. This includes all filesystems being
unmounted. A mapfile contains the names of the logical volumes.

In this next procedure, we will export the volume group, modify the mapfile to change
the logical volume names, and verify the import process uses the new names.

12. Deactivate <vgname>.

# vgchange -a n <vgname>


13. Export <vgname> to remove the volume group from the system.

# cd /
# vgexport <vgname>

14. Use vgimport to recreate <vgname>. Do NOT use the map file.

# mkdir /dev/<vgname>
# mknod /dev/<vgname>/group c 64 0x0?0000
# vgimport <vgname> /dev/dsk/cXtYdZ

15. Activate the volume group.

# vgchange –a y <vgname>

Are all the lvols there as expected? __________

What are the logical volume names?

Do these correspond with the original names? _________

16. Edit the mapfile that was created in step 5. Make the entries look like the following:

1 fourth
2 fifth
3 sixth
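The edit in step 16 can also be scripted. A sketch that rewrites the name column of a mapfile, assuming the lab's original names first, second, third (a mapfile is plain text, one "<LV number> <LV name>" pair per line):

```shell
# Rewrite the logical volume name column of a vgexport mapfile.
rename_mapfile() {
    awk 'NR == 1 {$2 = "fourth"}
         NR == 2 {$2 = "fifth"}
         NR == 3 {$2 = "sixth"}
         {print $1, $2}'
}

rename_mapfile <<'EOF'
1 first
2 second
3 third
EOF
```

On a real system you would run `rename_mapfile < /tmp/<vgname>.map > /tmp/<vgname>.map.new` and import with the new file.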

17. Deactivate <vgname>

18. Export <vgname> again to remove it from the system.

# cd /
# vgexport <vgname>

19. Import <vgname> again but use the mapfile this time.

# mkdir /dev/<vgname>
# mknod /dev/<vgname>/group c 64 0x0?0000
# vgimport -m /tmp/<vgname>.map <vgname> \
/dev/dsk/cXtYdZ

20. Activate the volume group.

# vgchange –a y <vgname>


What are the lvol names and their respective Logical Entry Numbers (from the minor
number)?

The “mapfile” is an ASCII text file which can be edited. The information stored there is
the logical volume names for the volume group. These are obtained from the special files
listed under the volume group’s directory (e.g. /dev/<vgname>).

21. Let’s try an experiment. What happens if there is a “false” logical volume?

Create a false logical volume, lvol4, for <vgname>.

# mknod /dev/<vgname>/lvol4 b 64 0x0?0004
# mknod /dev/<vgname>/rlvol4 c 64 0x0?0004

22. Deactivate the volume group <vgname>.

# vgchange -a n <vgname>

Recreate the volume group’s mapfile (-m) but do NOT remove the volume group (-p).

# vgexport -p -v -m /tmp/<vgname>.map.new <vgname>

View the mapfile.

Is lvol4 there? ________

23. Remove <vgname>.

# vgexport <vgname>

24. Import <vgname>.

# mkdir /dev/<vgname>
# mknod /dev/<vgname>/group c 64 0x0?0000
# vgimport -m /tmp/<vgname>.map.new <vgname> \
/dev/dsk/cXtYdZ

What error did you receive (disregard the vgcfgbackup message)?

25. Activate the volume group.

# vgchange -a y <vgname>


26. Display the volume group information.

# vgdisplay <vgname>

What do “Cur(rent) LV” and “Open LV” indicate? ________, ________

Are these correct? __________

When exporting a volume group and specifying a mapfile, the export process looks at the
volume group’s directory for logical volume names and minor numbers. It does NOT
verify they are in existence; however, the import process WILL verify the existence and
flag an error if one or more is NOT seen.

Summary

The export process can be used to:

• quickly remove a volume group from a system
• prepare a volume group to be imported for use on another system

The import process will add an already configured volume group on to a system. The volume
group could have been created on a completely different system. It accomplishes this by way
of a Volume Group ID (VGID). When a volume group is created by way of the vgcreate
command, a unique VGID is assigned and stored on every physical volume in that volume
group. The VGID is also stored in the /etc/lvmtab file; however, strings
/etc/lvmtab does not show this information since it is NOT in ASCII text format.

The “mapfile”, which is created with the -m option, is an ASCII text file which can be
edited. The information stored there is the logical volume names for the volume group.
These are obtained from the special files listed under the volume group’s directory (e.g.
/dev/<vgname>). It is NOT mandatory to have a map file in order to import a volume
group; however, it makes the process easier when using NON-STANDARD logical volume
names.


TASK 2: MOVING PHYSICAL VOLUME EXTENTS (PVMOVE)

The purpose of this task is to move physical extents from one physical volume to another.
This can be useful for putting a logical volume’s extents on one physical volume (primarily
in mirroring) or to replace an old physical volume (e.g. disk) with a new physical volume
(e.g. disk).

1. Remove any non-root volume groups from your system to free up the physical volume(s).

# cd /
# vgexport <vgname>

2. Identify the root volume group’s (vg00) physical volume and list out the available
extents on the root physical volume.

# strings /etc/lvmtab
# pvdisplay /dev/dsk/cXtYdZ

NOTE If there are NO free extents on the root physical volume, you will have to free
some extents by reducing an existing logical volume or you will need two (2)
free disks for the following tasks.

3. Create a logical volume in the root volume group named “move”. Make the size 20M if
five (5) or more physical extents are available. Otherwise, use what is available (e.g.
12M or 3 physical extents).

# lvcreate -L 20 -n move vg00

4. Extend the root volume group to include an unused physical volume.

# pvcreate -f /dev/rdsk/cXtYdZ
# vgextend vg00 /dev/dsk/cXtYdZ

5. Extend /dev/vg00/move to 60M. Ensure the new extents are on the new physical
volume.

# lvextend -L 60 /dev/vg00/move /dev/dsk/cXtYdZ

6. Verify that both physical volumes contain a part of /dev/vg00/move. List the
number of physical extents for /dev/vg00/move from each physical volume.

# pvdisplay -v /dev/dsk/cXtYdZ | more   ← first root physical volume

Number of /dev/vg00/move PEs: ________


# pvdisplay -v /dev/dsk/cXtYdZ | more   ← second root physical volume

Number of /dev/vg00/move PEs: ________

7. Create a filesystem and mount point directory for /dev/vg00/move, mount it, and
then copy some data to /dev/vg00/move.

# newfs -F vxfs /dev/vg00/rmove
# mkdir /vg00_move
# mount /dev/vg00/move /vg00_move
# cp -R /etc /vg00_move

8. Use the pvmove command to put all of the physical extents for /dev/vg00/move on
the second drive.

# pvmove -n /dev/vg00/move /dev/dsk/<SOURCE_phys_vol> \
/dev/dsk/<DESTINATION_phys_vol>

9. Verify that all the extents are now on the second drive.

# pvdisplay -v /dev/dsk/cXtYdZ | more   ← first root physical volume

Number of /dev/vg00/move PEs: ________

# pvdisplay -v /dev/dsk/cXtYdZ | more   ← second root physical volume

Number of /dev/vg00/move PEs: ________

10. Verify that the data moved.

# ls /vg00_move

11. Let’s try moving the physical extents of /dev/vg00/move to a physical volume in
another volume group. First, identify a physical volume from a different volume group.

# strings /etc/lvmtab

12. Create a second volume group, vg01, with two (2) logical volumes. Create a VxFS file
system on both logical volumes and create mount_points. Replace “<pv_to_use>”
with an available physical volume (HINT: Use ioscan and strings).

# mkdir /dev/vg01
# mknod /dev/vg01/group c 64 0x010000
# pvcreate –f /dev/rdsk/<pv_to_use>


# vgcreate vg01 /dev/dsk/<pv_to_use>
# lvcreate -L 20 vg01
# lvcreate -L 20 vg01
# newfs -F vxfs /dev/vg01/rlvol1
# newfs -F vxfs /dev/vg01/rlvol2
# mkdir /vg01_lvol1
# mkdir /vg01_lvol2

13. Now, try moving the extents from the root volume group’s physical volume to the other
volume group’s physical volume.

# pvmove -n /dev/vg00/move <root_second_phys_vol> \
<vg01_phys_vol>

What error did you receive?

The pvmove command can move a logical volume’s extents and user data from
one physical volume to another. This could be particularly useful for having all the
extents on one physical volume. The source and destination physical volumes MUST be
in the same volume group.

14. Let’s move an entire physical volume to another physical volume. Remove
/dev/vg00/move and the second physical volume from vg00.

# umount /vg00_move
# lvremove /dev/vg00/move
# vgreduce vg00 /dev/dsk/cXtYdZ

15. Add the physical volume you just removed to the non-root volume group used in step 13.

# vgextend <vgname> /dev/dsk/cXtYdZ

Obtain a listing of what logical volumes are contained on both physical volumes.

# pvdisplay -v /dev/dsk/cXtYdZ | more   ← first physical volume

# pvdisplay -v /dev/dsk/cXtYdZ | more   ← second physical volume

The first physical volume should have some PEs assigned to logical volumes. If not,
create two logical volumes on the first physical volume listed for the volume group.

The second physical volume should have NO extents allocated since we just added it to
the volume group.


16. Use the pvmove command to transfer all logical volume PEs to the second physical
volume.

# pvmove /dev/dsk/<first_pv> /dev/dsk/<second_pv>

17. Verify the extents transferred.

# pvdisplay -v /dev/dsk/cXtYdZ | more   ← first physical volume

# pvdisplay -v /dev/dsk/cXtYdZ | more   ← second physical volume

Summary

The pvmove command provides a way of moving a logical volume’s extents and all user
data from one physical volume to another. This can be useful for combining all of a logical
volume’s extents on to one physical volume or when wanting to remove/replace an old
physical volume.

If there are NOT enough free extents on the destination, the command will fail. You are NOT
limited to a single destination physical volume as long as you have enough free PEs.

There are other features of pvmove such as using Physical Volume Groups for allocation.
For more information read the “man” pages for pvmove.


TASK 3: CHANGE COMMANDS

The purpose of this task is to identify and use options to the pvchange, vgchange, and
lvchange commands.

3.1 PVCHANGE

The pvchange command is used to change characteristics associated with a physical
volume. We will identify and use some of the options to the pvchange command.

1. Perform a man pvchange. Give a brief description of each option below:

Option Description
-A

-s

-S

-x

-z (HP-UX 11.0)

2. Let’s look at what the -x option provides. When a Logical Volume is created or
extended in a volume group, the volume group will look for FREE Extents on its physical
volumes, beginning with the FIRST physical volume in the Volume Group. The process
goes sequentially from the first pv, to the second pv, etc. until it has met or fallen short of
the requirements.

Normally, if a physical volume has free extents and the volume group has enough extents
to meet a logical volume extension need, the extents will be allocated beginning with the
first pv.

Sometimes an administrator may want to preserve the extents on a specific physical
volume, or force a logical volume’s extents to a physical volume other than the first.

This can be accomplished in a couple of ways. One way is with the pvchange -x
option.

Disable extent allocation for your vg01 physical volumes.


# pvchange –x n /dev/dsk/<first_pv>
# pvchange –x n /dev/dsk/<second_pv>

3. Display the physical volume attributes. Observe the value for “Allocatable” and compare
it with step 2.

# pvdisplay /dev/dsk/<first_pv>

Allocatable: __________

# pvdisplay /dev/dsk/<second_pv>

Allocatable: __________

4. Try to extend /dev/vg01/lvol1.

# lvextend –L 200 /dev/vg01/lvol1

Note the error messages:

5. Turn extent allocation back on. Display and record the “Allocatable” field.

# pvchange –x y /dev/dsk/<first_pv>
# pvdisplay /dev/dsk/<first_pv>

Allocatable: __________

6. Try to extend /dev/vg01/lvol1.

# lvextend –L 200 /dev/vg01/lvol1

Did it extend this time? _____________

7. The -A option is available with the change commands pvchange and lvchange. It is
also available on other LVM commands which modify a volume group’s metadata (e.g.
lvcreate, lvextend).

HP-UX 10.X and above AUTOMATICALLY backs up the LVM metadata structures
whenever a modification is made to a volume group. If, for some reason, you or a System
Administrator DOES NOT want to AUTOMATICALLY backup a volume group’s
metadata when a change is made, the -A n option can be included in the command.

The -A n option applies per command. This means you have to include the -A n
option in every command performed which would modify the volume group’s metadata.


Observe the timestamp of volume group vg01 metadata backup file.

# ll /etc/lvmconf/vg01.conf

Timestamp: ________________________________________

Change the timeout value for your “first_pv” to 30 seconds. Do NOT automatically
backup the volume group.

# pvchange –A n –t 30 /dev/dsk/<first_pv>

Did you receive any volume group backup messages? _________

Observe the timestamp of volume group vg01 metadata backup file.

# ll /etc/lvmconf/vg01.conf

Timestamp: ________________________________________

Did it change? ________

If you were to restore (vgcfgrestore) the LVM metadata to this physical volume, the
just modified information would NOT be included in the backup data file.

Change the timeout value for your primary link to 30 seconds. Automatically backup the
volume group.

# pvchange –t 30 /dev/dsk/<first_pv>

Did you receive any volume group backup messages? _________

Observe the timestamp of volume group vg01 metadata backup file.

# ll /etc/lvmconf/vg01.conf

Timestamp: ________________________________________

Did it change? ________

Now if you were to restore (vgcfgrestore) the LVM metadata to this physical
volume, the just modified information would be included in the backup data file.

Summary

The pvchange command is used to modify physical volume characteristics. This includes
the extent allocation of a physical volume and parameters for pvlinks (IO timeout, switch
PRIMARY). Additionally, the option to turn off automatic volume group backup is available.
If you have an HP-UX 11.0 system with mirroring, there is an option to create a spare
physical volume.


3.2 VGCHANGE

The vgchange command is used to change characteristics associated with a volume group.
We have already seen from day 1 how to activate a volume group (vgchange –a y) or
deactivate a volume group (vgchange –a n).

1. Perform a man vgchange. Give a brief description of each option below:

Option Description
-a

-l

-p

-q

-s

-P

-c

-S

Notice the -a option has several variables to it. These include:


• y: activate volume group as read-write (non-cluster)
• n: deactivate volume group
• e: activate volume group for read-write exclusive access (cluster)
• s: activate volume group for read-write sharable access (cluster)
• r: activate volume group for READ only access

NOTE The following options require the MC/ServiceGuard or MC/Lockmanager
products. They will not be covered in detail here but are covered in the
appropriate MC/ServiceGuard or MC/Lockmanager courses. A brief
description has been provided.

The -a e and -a s options are for the MC/ServiceGuard and MC/Lockmanager
products. These are available to ensure data integrity when more than one system is
connected to the same device.

The -a r option allows a second system to activate the volume group for backup
purposes.

The -c y and -c n options are for clusterizing and declusterizing a volume group
(MC/ServiceGuard). A clustered volume group is on a shared bus between two or more
systems but can only be activated by one system at any given time. This works with the
-a e option.

The -S option is for creating a sharable volume group (MC/Lockmanager). A sharable
volume group is one which can be activated on more than one system at the same time.
This works with the -a s option.

2. If for some reason an administrator wants to activate a volume group but not the logical
volumes, the -l option is available.

Display vg01 information.

# vgdisplay vg01

--- Volume groups ---


VG Name ___________
VG Write Access ___________
VG Status ___________
Max LV ___________
Cur LV ___________
Open LV ___________
Max PV ___________
Cur PV ___________
Act PV ___________
Max PE per PV ___________
VGDA ___________
PE Size (Mbytes) ___________
Total PE ___________
Alloc PE ___________
Free PE ___________
Total PVG ___________
Total Spare PVs ___________
Total Spare PVs in use ___________

Try mounting /dev/vg01/lvol1 and /dev/vg01/lvol2.

# mount /dev/vg01/lvol1 /vg01_lvol1
# mount /dev/vg01/lvol2 /vg01_lvol2

Did it work? ________

3. Disable the vg01 volume group.


# umount /vg01_lvol1
# umount /vg01_lvol2
# vgchange –a n vg01

4. Enable vg01 but disable logical volume access.

# vgchange –a y –l vg01

Note the mode the volume group was activated in: __________________

5. Display vg01 information.

# vgdisplay vg01

Was there any difference between this step and step 2? _______

6. Try mounting /dev/vg01/lvol1.

# mount /dev/vg01/lvol1 /vg01_lvol1

Did it work? ________

The volume group has been activated in Standard Mode, which means no logical volumes
are available. When trying to mount the logical volume, we received an error stating
there is no such device. vgdisplay did not show any difference in Current and Open
LVs. An lvchange -a y is SUPPOSED to open up a logical volume.

7. Disable the volume group and reactivate it normally.

# vgchange -a n vg01

# vgchange -a y vg01

8. Try mounting /dev/vg01/lvol1.

# mount /dev/vg01/lvol1 /vg01_lvol1

Did it work? ________

Deactivating and then reactivating the volume group in a normal way will also activate or
OPEN the logical volumes.

9. Display vg01 information.

# vgdisplay vg01

How many Cur PV do you have? ________


How many Act PV do you have? ________

10. Deactivate vg01.

# umount /vg01_lvol1
# vgchange -a n vg01

11. Destroy the metadata structures on the vg01 second physical volume.

# dd if=/usr/bin/ls of=/dev/rdsk/<second_pv> bs=2k \
count=50

12. Try to activate vg01.

# vgchange -a y vg01

Did the volume group activate? _______

Note the error messages:

You should have a message about quorum. Quorum requires greater than (>) 50%: you
MUST have more than 50% of all physical volumes in the volume group available in
order to activate the volume group. Since there are only two physical volumes and we
destroyed the metadata on one of them, only one physical volume is recognized. This
does NOT meet quorum.
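The greater-than-50% rule can be sketched as a tiny shell check. This is an illustration of the arithmetic only, not an LVM command, and the function name is hypothetical:

```shell
# Illustration of the quorum rule: activation requires strictly MORE than
# half of the volume group's physical volumes to be available.
have_quorum() {
    avail=$1
    total=$2
    [ $((2 * avail)) -gt "$total" ]   # avail > total/2
}

have_quorum 1 2 && echo "quorum" || echo "no quorum"   # 1 of 2 PVs: no quorum
have_quorum 2 3 && echo "quorum" || echo "no quorum"   # 2 of 3 PVs: quorum
```

Note that exactly half is not enough: one disk out of two fails the test, which is why the lab's two-disk volume group would not activate.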

13. We can override quorum by using the –q n option. Activate the volume group using the
override quorum option.

# vgchange -a y -q n vg01

Note the output Warning message: _______________________________


____________________________________________________________

The -q n option overrides the quorum requirement for a volume group. Care must be
taken when activating a volume group without quorum. Any logical volumes contained
on the BAD or MISSING physical volume will NOT be accessible.

If we had quorum but we DO NOT want a volume group to activate unless ALL physical
volumes are available, the -p option is used. If you have enough physical volumes on
your system, you can add a third physical volume to vg01 and use the
vgchange -a y -p vg01 command to try and activate the volume group.


The -s and -P options deal with LVM mirroring. LVM mirroring (MIRROR/UX) is a
product the customer must buy. Mirroring is NOT covered in this course.

14. Fix the damaged physical volume.

# vgchange -a n vg01
# vgcfgrestore -n vg01 /dev/rdsk/<second_pv>
# vgchange -a y vg01

Summary

The vgchange command affects the behavior of a volume group. A volume group MUST
be activated before any of its logical volumes can be accessed. The configuration will
determine which argument to use with the -a option.

Quorum MUST be met in order to activate a volume group. This can be overridden with the
-q n option.


3.3 LVCHANGE

The logical volume is how the user data is accessed. There are times when the logical
volume’s attributes will need to be changed. This is accomplished with the lvchange
command.

1. Perform a man lvchange. Give a brief description of each option below:

Option Description
-a

-A

-c

-C

-d

-D

-M

-p

-r

-s

-t

The -A n option is the same as for pvchange, lvcreate, lvextend, vgcreate,
and vgextend. It turns OFF the AUTOMATIC volume group backup.

The -c and -M options deal with MirrorDisk/UX. Those will not be covered here.

2. Normally when a volume group is activated, all logical volumes will be OPENed.
Remember from the “vgchange” lab, it is possible to activate a volume group with the
logical volumes CLOSEd.

In theory, if a volume group is activated without the logical volumes OPENed, you
should be able to use:

lvchange -a y /dev/<vg_name>/<lv_name>


to OPEN the logical volume; however, when this lab was written, this option DID NOT
work.

We will activate the volume group with logical volumes OPENed and then use the
lvchange command to CLOSE the logical volume.

# vgchange -a y vg01 ← (if not already activated)

3. Display information for /dev/vg01/lvol1.

# lvdisplay /dev/vg01/lvol1

Record the information below:

--- Logical volumes ---


LV Name ___________________
VG Name ___________________
LV Permission ___________________
LV Status ___________________
Mirror copies ___________________
Consistency Recovery ___________________
Schedule ___________________
LV Size (Mbytes) ___________________
Current LE ___________________
Allocated PE ___________________
Stripes ___________________
Stripe Size (Kbytes) ___________________
Bad block ___________________
Allocation ___________________
IO Timeout (Seconds) ___________________

4. Disable (CLOSE) logical volume /dev/vg01/lvol1.

# lvchange -a n /dev/vg01/lvol1

5. Display information for /dev/vg01/lvol1.

# lvdisplay /dev/vg01/lvol1

Compare the “LV Status” to step 3. _____________________

6. Try to mount /dev/vg01/lvol1.

# mount /dev/vg01/lvol1 /vg01_lvol1

Did it mount? _______

Record the message:


7. Activate (OPEN) the logical volume.

# lvchange -a y /dev/vg01/lvol1

8. Try to mount /dev/vg01/lvol1.

# mount /dev/vg01/lvol1 /vg01_lvol1

Did it mount? _______

If a logical volume is CLOSEd, the lvchange command can be used to OPEN it. Keep
in mind, if the logical volume was not opened by the volume group activation
(vgchange -a y -l), lvchange -a y may not work.

9. The next option to discuss is -C (contiguous). Contiguous means a logical volume will
start at one point and end with no breaks in between. This means a logical volume can
only be contained on one (1) physical volume.

Root, swap, dump, and boot logical volumes MUST be configured as contiguous. The
default extent allocation policy is NON-CONTIGUOUS for the lvcreate command.
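The contiguity rule can be illustrated with a small awk check over a hypothetical list of physical extent numbers. This is only a sketch of the concept, not how LVM actually stores its extent maps:

```shell
# Sketch: a contiguous LV occupies an unbroken run of physical extents.
# Flag any gap in a (hypothetical) list of extent numbers.
check_contiguous() {
    echo "$1" | tr ' ' '\n' | awk '
        NR > 1 && $1 != prev + 1 { bad = 1 }   # gap between extents
        { prev = $1 }
        END { print (bad ? "non-contiguous" : "contiguous") }'
}

check_contiguous "100 101 102 103"   # → contiguous
check_contiguous "100 101 105 106"   # → non-contiguous
```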

From step 3, check the “Allocation” field contents displayed from lvdisplay. This
should say “strict” which is for strict mirroring.

Use the lvchange command to make this logical volume contiguous.

# lvchange -C y /dev/vg01/lvol2

10. Display the logical volume information. Observe the “Allocation” field contents and the
physical volume it is located on.

# lvdisplay /dev/vg01/lvol2

Allocation contents: ______________________________

Physical Volume: ________________________________

The “Allocation” should now indicate “strict/contiguous”.

11. Create a new logical volume of 20 Mbytes in volume group vg01 on the physical
volume for lvol2.

# lvcreate vg01
# lvextend -L 20 /dev/vg01/lvol3 \
/dev/dsk/<pv_for_lvol2>

12. Try to extend /dev/vg01/lvol2.


# lvextend -L 300 /dev/vg01/lvol2

Did it work? ______

13. Remove the third logical volume.

# lvremove /dev/vg01/lvol3

14. Try extending /dev/vg01/lvol2 again.

# lvextend -L 300 /dev/vg01/lvol2

Did it work? ______

As you can see, a “contiguous” logical volume cannot be broken into different segments.

15. Now we will remove the “contiguous” allocation.

# lvchange -C n /dev/vg01/lvol2

16. Verify the change.

# lvdisplay /dev/vg01/lvol2

Allocation contents: ______________________________

17. The -r option (Bad Block Pool relocation) indicates whether a logical volume is allowed
to use LVM’s Bad Block Pool if a spare request is made of the LVM driver. Since sparing
in the LVM sense is NOT contiguous, Bad Block Pool relocation for ROOT, SWAP,
DUMP, and BOOT logical volumes MUST be turned off.

Observe the “Bad Block” field for lvol2 from lvdisplay.

# lvdisplay /dev/vg01/lvol2

“Bad Block” field: _______________

Use the lvchange command to disable “Bad Block relocation”.

# lvchange -r n /dev/vg01/lvol2

18. Verify the change.

# lvdisplay /dev/vg01/lvol2

“Bad Block” field: ________

At this point, Bad Block relocation has been turned “off”; however, if a bad block is
detected LVM will make note of it in the Bad Block directory but not relocate data.


The -N option turns off both data relocation AND Bad Block Directory records.

19. Turn off ALL Bad Block information. Display and observe the "Bad Block" field.

# lvchange -r N /dev/vg01/lvol2
# lvdisplay /dev/vg01/lvol2

Bad Block field: ________

20. Turn Bad Block relocation on.

# lvchange -r y /dev/vg01/lvol2

Verify the change.

# lvdisplay /dev/vg01/lvol2

Bad Block: ________

21. Now change the logical volume to be Contiguous without Bad Block data relocation.

# lvchange -C y -r n /dev/vg01/lvol2

Verify the changes.

# lvdisplay /dev/vg01/lvol2

Bad Block: ________

Allocation: ________

The other options to lvchange, except for -p and -t, deal with mirrors or striped
logical volumes. The -p option will make a logical volume read/write or read only. The
-t option will change timeout values. This option should NOT be used in most cases.

Summary

The lvchange command changes characteristics specific to a logical volume. These
include allocation, mirroring, striping, bad block, and permission policies.

The root, swap, dump, and boot logical volumes MUST be contiguous AND have Bad Block
relocation turned off.


TASK 4: USING SAM TO DETERMINE AND SET MAXIMUM VOLUME GROUPS

The purpose of this task is to use SAM, the System Administration Manager, to determine and
modify the maximum number of volume groups a system can have.

1. From the command line, start “SAM”.

# sam

2. Once "sam" is up and running, under SAM AREAS, select "Kernel Configuration".

3. Under “Kernel Configuration”, select “Configurable Parameters”.

4. Scroll down the "Configurable Parameters" list until you see "maxvgs". Determine the
"Current Value".

Current Value: ________

This is the maximum allowed number of volume groups currently configured for this
system.

The default maximum volume groups (maxvgs) is “10”.

5. Select (highlight) “maxvgs” and select “Modify Configurable Parameter” from Actions.
Change the value for “maxvgs” to 20.

6. Select "Exit" from File and select "Create a New Kernel".

7. When prompted, move the kernel into place and reboot the machine.

8. Once the machine has rebooted, verify the value has changed.

Summary

The kernel contains a Configurable Parameter, maxvgs, which defines the maximum number
of volume groups allowed on the system. The default value is “10” and the absolute
maximum is “255”. This can be viewed and changed by running SAM.

CAUTION Since maxvgs uses memory resources, configuring an excessively large
maximum number of volume groups (maxvgs) could cause performance
issues, particularly on small-memory systems.


TASK 5: DO IT ON YOUR OWN

The purpose of the next few steps is to try performing the previous tasks on your own without
giving you the commands.

1. Export volume group vg01. Do NOT use a mapfile

2. Import volume group vg01.

3. Export volume group vg01. Create a mapfile.

4. Edit the mapfile and change the names of the logical volumes.

5. Import volume group vg01 using the mapfile.

6. Create a logical volume named lv2move in the root volume group with a size of
8Mbytes.

7. Add a second physical volume to the root volume group.

8. Use the pvmove command to move lv2move to the new physical volume in the root
volume group.

9. Use the lvchange command to make lv2move contiguous with NO bad block
relocation.

10. Use pvchange to disable extent allocation on the second physical volume added to the
root volume group.

11. Deactivate vg01.


12. Reactivate vg01 using the option to ignore quorum (even if quorum is met).


TASK 6: PVLINKS

The purpose of this task is to determine and configure pvlinks and identify potential import
problems with pvlinks.

NOTE This lab requires an external disk connected to two (2) interface cards on the
same system. If using a “NIKE” drive, ensure AUTOTRESPASS is on and at
least one (1) LUN is configured.

1. Identify an external physical volume connected by two (2) separate I/O interface paths
and the special files. List the lower Hardware Path Device as Primary (P) and the higher
Hardware Path Device as Alternate (A) in the table below.

NOTE To find out what is available for your system, consult your instructor OR read
the “/FAQ” document, if available.

# ioscan -fnC disk

Hardware Path Block Special File Character Special File

2. Create a new volume group with only one link to the physical volume. Use the first
(lower hardware path) device found in step 1. This will be the primary path to the device.
Substitute the “?” in the mknod command with the next available volume group number
on your system (e.g. 01, 02, 03, …)

# pvcreate -f /dev/rdsk/cXtYdZ ← primary path to physical volume
# mkdir /dev/vgpvlink
# mknod /dev/vgpvlink/group c 64 0x0?0000
# vgcreate vgpvlink /dev/dsk/cXtYdZ ← primary path to physical volume
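The group file's minor number conventionally encodes the volume group number as its high byte (0xNN0000). A small sketch of building that string from a decimal VG number — the helper name is hypothetical, and only the numbering convention is taken from the lab:

```shell
# Sketch: build the group-file minor number 0xNN0000 used with
# mknod /dev/<vg>/group c 64 0xNN0000, where NN is the VG number in hex.
vg_minor() {
    printf '0x%02x0000\n' "$1"
}

vg_minor 1    # → 0x010000
vg_minor 15   # → 0x0f0000
```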

3. Create a logical volume named "link1" with a size of 100 Mbytes.

# lvcreate -L 100 -n link1 vgpvlink

4. Create a mount_point_directory for /dev/vgpvlink/link1.


# mkdir /vgpvlink1

5. Create a filesystem on /dev/vgpvlink/link1.

# newfs -F vxfs /dev/vgpvlink/rlink1

6. Mount the filesystem.

# mount /dev/vgpvlink/link1 /vgpvlink1

7. Display the volume group information.

# vgdisplay -v vgpvlink

Record the Physical Volume Information:


--- Physical volumes ---
PV Name _______________
PV Status _______________
Total PE _________
Free PE _________

8. Create a physical volume link (PVLINK) to the device.

# vgextend vgpvlink /dev/dsk/cXtYdZ ← alternate path

9. Display the volume group information again and compare it to what was in step 7.

# vgdisplay -v vgpvlink

Note any differences from step 7.

To create a pvlink, after properly connecting and configuring the hardware, we create our
volume group and primary link in the same manner as any other volume group.

When we extend the volume group to include the second path to the same device, LVM
reads the physical volume ID (PVID) on the physical volume and sees it already belongs
to this volume group. It does NOT increase the number of physical extents available to
the volume group but creates an alternate link to the device.

The primary link is the first physical volume for the pvlink listed in /etc/lvmtab. Be
aware, there could be non-pvlink devices listed for this volume group before a primary
link.
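Since the primary link is simply the first path listed for the volume group, the idea can be sketched by parsing strings(1)-style output of /etc/lvmtab. The sample data and helper name below are hypothetical:

```shell
# Sketch: given text output in the style of `strings /etc/lvmtab`
# (a VG name followed by its device paths), print the first-listed,
# i.e. primary, path for each VG.
primary_links() {
    awk '
        /^\/dev\/vg/ { vg = $1; first = 1; next }   # VG name line
        first        { print vg ": primary " $1; first = 0 }'
}

lvmtab_sample='/dev/vgpvlink
/dev/dsk/c1t2d0
/dev/dsk/c2t2d0'

echo "$lvmtab_sample" | primary_links   # → /dev/vgpvlink: primary /dev/dsk/c1t2d0
```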

10. Display all information for the logical volume.

# lvdisplay -v /dev/vgpvlink/link1


How many physical volume paths are listed? __________

How many would you expect? ____________

Which physical volume is listed, PRIMARY or SECONDARY? ___________________

vgdisplay knows that there is an alternate path to the physical volume and lists both the
primary and the alternate device paths; however, lvdisplay only displays the current
device path.

11. We will now use the pvchange command to switch the primary and alternate links.

# pvchange -s /dev/dsk/<alternate_path>

This should now make the alternate link the primary.

12. Some problems occur when the bus becomes extremely busy. This is not a hardware
failure but will result in SCSI bus timeouts.

The default timeout to a link varies between device drivers; however, for most, the
default is 30 seconds. The system checks the primary link every 5 seconds.

What can, and does, occur on some devices (e.g. NIKE) is a ping-pong effect when the
bus becomes extremely busy. The bus will eventually reset; however, at the same time an
AUTOTRESPASS occurs and the primary will switch to the alternate link. An
AUTOTRESPASS takes 30 seconds to occur, while the system checks the primary every 5
seconds.

When the system sees the primary has been reset, it will use the primary link but then we
can get in the same situation as we originally had where the bus is very busy.

What ultimately is occurring is the system is “ping-ponging” between the primary and
alternate links which results in the logs being filled and no data throughput for the
device.

A solution to this problem is to increase the timeout parameter and turn OFF the link
switch return. This is accomplished through the -t and -S options to pvchange
command.

NOTE In most cases, it is NOT recommended that the link switch return be disabled.

Display the current values for the primary link.


# pvdisplay -v /dev/dsk/<primary_path> | more

Record the following:


--- Physical volumes ---


PV Name _______________
PV Name _______________
VG Name _______________
PV Status _______________
Allocatable _______________
VGDA _______________
Cur LV _______________
PE Size (Mbytes) _______________
Total PE _______________
Free PE _______________
Allocated PE _______________
Stale PE _______________
IO Timeout (Seconds) _______________

--- Distribution of physical volume ---


LV Name LE of LV PE for LV
___________________ __ __

13. The IO TIMEOUT value should say default at this time. This is typically 30 seconds for
most drivers. Notice that nothing is displayed in terms of SWITCHING.

Change the timeout value to sixty (60) seconds and turn OFF the link switch return.

# pvchange -t 60 -S n /dev/dsk/<primary_path>

WARNING The lvchange command also has a -t (timeout) option. This SHOULD
NOT be used, for it can cause system or I/O hangs.

14. Observe the characteristics for the primary link.

# pvdisplay -v /dev/dsk/<primary_path> | more

What does the IO Timeout say now? __________________

Is there any information about the “switching” being disabled? ______

With the switching disabled, once the system switches to the alternate, it will NOT switch
back to the primary.

To switch back to a primary which has AUTO SWITCH disabled, the pvchange
command must be issued manually (pvchange -s /dev/dsk/<primary_path>).
By increasing the timeout value for the physical volume and disabling the switch back to
primary feature, the ping-pong effect which can occur should be eliminated.

If the primary link were to fail, LVM will reroute the I/O to the alternate link.


15. Verify the change for the volume group.

# vgdisplay -v vgpvlink

Observe syslog.log.

# tail /var/adm/syslog/syslog.log

Are there messages indicating a change?

16. List the contents of /etc/lvmtab.

# strings /etc/lvmtab

Which device path is listed first, the primary or alternate, for vgpvlink?

17. Reboot the computer.


# reboot -t now

Once the system has rebooted, display the volume group information for vgpvlink.

# vgdisplay -v vgpvlink

What is the primary link to link1?

Is this the same as what you changed in step 11?

Although we made the alternate link in step 11 the primary, when we rebooted, the first
device path listed in /etc/lvmtab became the primary.

18. Let’s make the alternate link the primary permanently. We will do this by reducing the
volume group to remove the primary link.

# vgreduce vgpvlink /dev/dsk/<primary_path>

19. Display all the volume group information. Observe the physical volumes area.

# vgdisplay -v vgpvlink

20. Add the original PRIMARY path to the volume group, making it the SECONDARY
path.

# vgextend vgpvlink /dev/dsk/cXtYdZ ← old PRIMARY path

21. Verify that the first listed device in the volume group is the primary.


# vgdisplay -v vgpvlink

22. We should now have the original SECONDARY path as the PRIMARY and the original
PRIMARY as SECONDARY. This should have the HIGHER hardware address as
PRIMARY and the lower hardware address SECONDARY.

Assuming this is properly set on your system, we will export this volume group and then
import it and observe the order in which the disks are brought in.

We are going to use an option (-s) to vgexport and vgimport which will seek out
all physical volumes attached to this volume group. The vgexport process will write
the VGID (Volume Group ID) into the mapfile.

# strings /etc/lvmtab

Which path is listed first? _______________________________

# vgchange -a n vgpvlink
# vgexport -s -m /tmp/vgpvlink.map vgpvlink
# mkdir /dev/vgpvlink
# mknod /dev/vgpvlink/group c 64 0x0?0000
# vgimport -s -m /tmp/vgpvlink.map vgpvlink

23. Display the contents of /etc/lvmtab.

# strings /etc/lvmtab

Which path is listed first? _______________________________

Was this the PRIMARY or SECONDARY path in step 18? ___________

When importing a volume group which has pvlinks, it is VERY important to ensure
the paths are loaded accordingly. If using the -s option to vgexport and vgimport,
it is very possible the order will be reversed. This can also occur when performing a
vgscan -v.

24. To fix this problem, we will remove the first physical volume listed and then bring it back
in.

# vgreduce vgpvlink /dev/dsk/<first_pv_listed>
# vgextend vgpvlink /dev/dsk/<pv_just_removed>

Summary

pvlinks is a redundant I/O hardware path solution to a physical volume (two I/O cards
connected to the same physical volume). There will always be a PRIMARY link. This is the
first physical volume, in the pvlinks configuration, listed in the /etc/lvmtab file for a
volume group.

It is possible to temporarily change this by use of the pvchange -s command. To make a
permanent change, the physical volume must be removed from the volume group by the
vgreduce command and then added back in to the volume group by the vgextend
command.



Module 21

Root VG Structures


LVM Boot Disk Layout

[Figure: LVM boot disk layout, in order: LIF Volume Header, PVRA, BDRA (Boot Data Reserved Area), LIF Volume (boot area), VGRA, User Data Area, Bad Block Pool (reserved)]

Boot Data Reserved Area (BDRA) contains:


• Locations and sizes of the disks in the root volume group
• Locations of the root, primary swap, and dump logical volumes
• Information the kernel uses to configure root and primary swap at boot-up

Maintained using lvlnboot and lvrmboot

Bad Block relocation is not supported on the root and primary swap logical volumes.


To boot the system, the kernel activates the volume group to which the system’s root logical
volume belongs. The location of the root logical volume is stored in the boot data reserved
area (BDRA). The boot data reserved area contains the locations and sizes of LVM disks in
the root volume group and other vital information to configure the root, primary swap, and
dump logical volumes, and to mount the root file system.

The BDRA contains the following records about the system’s root logical volumes:
timestamp (indicating when the BDRA was last written), checksum for validating data, root
volume group ID, the number of LVM disks in the root volume group, a list of the hardware
addresses of the LVM disks in the root volume group, indices into that list for finding root,
swap, and dump, and information needed to select the correct logical volumes for root,
primary swap, and dumps.


LVM Boot Disk LIF Volume

[Figure: LVM boot disk layout (LIF Volume Header, PVRA, BDRA, LIF Volume, VGRA, User Data Area, Bad Block Pool) with the LIF Volume expanded to show its contents: ISL, HPUX, AUTO, LABEL]

Boot area in two parts: LIF header and LIF volume

LIF Header contains pointers to files in LIF volume

LIF volume contains:


• ISL - Initial System Loader
• HPUX - kernel loader
• AUTO - contains the autoboot string
• LABEL - used by HPUX to locate the root logical volume during a normal boot. Updated
by lvlnboot and lvrmboot commands.

Created only by the mkboot command

CAUTION dd and lifcp should not be used to create a Boot Area. LVM information
(PVRA,BDRA) will be overwritten!


BOOT DATA RESERVED AREA

The BDRA contains a primary and secondary boot data record that describes the root, swap,
and dump volumes to be used during boot. Also in the BDRA is a primary and secondary
PVol (Physical Volumes) list.


LVM Physical Disk Layout

[Figure: LVM physical disk layout: LIF Header (reserved), PVRA (LVM Record, Bad Block Directory), BDRA, LIF Volume, VGRA (Volume Group Descriptor Area, Volume Group Status Area, Mirror Consistency Records), User Data Area (duplicate LVM Record, duplicate Bad Block Directory, Bad Block Pool, reserved)]

LVM DISK LAYOUT


The slide shows the general layout of a LVM disk. The disk will have a LIF header on it that
is 8 Kb in size. The LVM specific data begins immediately following the LIF header.
The first LVM data on disk is the Physical Volume Reserved Area (PVRA). From a user
viewpoint the PVRA is normally discussed as an individual structure. The kernel, however,
does not have a PVRA-specific data type and instead we discuss the PVRA as the area of
disk that contains:
• information describing this particular physical volume
• pointers to other LVM structures on the disk.
• the Bad Block Directory for the physical volume
Following the PVRA is the Boot Data Reserved Area (BDRA). The BDRA contains
information necessary to locate the root, swap, and dump volumes during boot.
The LIF Data Area contains LIF (Logical Interchange Format) utilities used by HP PA-RISC
machines. For HP-UX systems, these include:
ISL (Initial System Loader)
hpux loader utility
Autofile
Label file
Offline Diagnostics


lvlnboot

[Figure: the lvlnboot -r, -b, -s, and -d options update the BDRA on the boot disk and then the LIF LABEL file; lvlnboot -R refreshes the LABEL file from the contents of the BDRA]
LABEL and BDRA are updated and created by lvlnboot

• -r updates root logical volume definition.


• -b updates boot logical volume definition.
• -s updates swap logical volume definition.
• -d updates dump logical volume definition.

NOTE All the above options update the BDRA and then the LABEL file

• -R updates LABEL files on all boot disks in the volume group specified with the contents
of the BDRA.

NOTE This option makes no changes to the contents of the BDRA: if it’s empty, it
will still be empty until one of the -r, -b, -s, or -d options are used.


/stand/bootconf

• /stand/bootconf
– contains information regarding the location of the boot LIF

/stand/bootconf

This file contains the address and disk layout type of the system's boot devices or LIF
volumes. It is used by the Software Distributor and HP-UX kernel control scripts (fileset
OS-Core.KERN-RUN) to determine how and where to update the initial boot loader.

Normally the kernel's checkinstall script queries the system's hardware and creates the file. In
rare cases when either the system configuration cannot be automatically determined or
additional and/or alternate boot devices should be automatically updated, the administrator
must edit the /stand/bootconf file manually.

Examples of typical entries:

l /dev/dsk/c0t6d0 # standard lvm disk type

p /dev/dsk/0s0 # hard partitioned boot disk

l /dev/dsk/c0t6d0 # standard lvm disk type mirrored

l /dev/dsk/c0t5d0 # standard lvm disk type mirrored
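The entry format shown above (layout-type letter, device path, comment) can be parsed with a few lines of shell. This is only an illustrative sketch using the sample entries, not an actual Software Distributor script:

```shell
# Sketch: interpret /stand/bootconf-style entries. The first field is the
# disk layout type (l = LVM, p = hard partition), the second the device.
describe_bootconf() {
    while read type dev rest; do
        case "$type" in
            l) echo "LVM boot device: $dev" ;;
            p) echo "partitioned boot device: $dev" ;;
        esac
    done
}

bootconf_sample='l /dev/dsk/c0t6d0 # standard lvm disk type
p /dev/dsk/0s0 # hard partitioned boot disk'

echo "$bootconf_sample" | describe_bootconf
```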


/stand/rootconf

• /stand/rootconf
– used with maintenance mode boot if root and
stand are in different file systems
– recreated during normal mode boot
– recreated with “lvlnboot -c”

This file is used during a LVM maintenance mode boot (hpux -lm) on systems running
HP-UX 10.20 and above when root ( / ) and /stand are in different file systems.

If this file is missing or corrupt, the system will not boot in LVM maintenance mode. This
file can be recreated with the -c option to the lvlnboot command or by rebooting the
system with a “normal mode” boot.


Module 22

Recovery


Booting a Corrupted LVM Bootable Disk

If the LABEL file is corrupt, the location of the "root logical volume" is unknown. LVM
architecture requires the "root logical volume" to be placed at a certain location on the boot
disk. Using this known offset allows us to boot in maintenance mode.

To boot in "maintenance mode", type the following from the ISL> prompt:

ISL> hpux -lm (;0)/kernelfile

[Figure: boot disk layout (LIF Volume Header, PVRA, BDRA, LIF Volume with LABEL, VGRA, lvol1 containing /, stand, and vmunix, Bad Block Pool). Key: -lm overrides information in the BDRA and LABEL file and boots in LVM "maintenance mode", i.e. the system is in single-user mode, no volume groups are activated, and the system must be rebooted]

ISL> hpux -lm (;0)/stand/vmunix

Uses a known offset to find start of lvol1.

Does not use information in BDRA and LABEL file.

Boots in LVM “maintenance mode”.


• system is in single user mode
• no volume groups are activated
• must be rebooted when finished.

Maintenance mode boot provides a means, during system installation, or when critical LVM
data structures have been lost, to boot from an LVM disk without the LVM data structures.

The system can be booted in maintenance mode using the -lm option to the hpux
command at the ISL> prompt.

This causes the system to boot to single-user mode without primary swap, dump, or LVM to
access the root file system.

NOTE The system must not be brought to multi-user mode (that is, init 2) when in
LVM maintenance mode. Corruption of the root file system might result.


Booting without quorum for vg00

[Figure: vg00 consisting of two physical volumes, PV1 and PV2. Should PV2 fail, quorum is not present]

ISL> hpux -lq (;0)/stand/vmunix

Boots system and activates root volume group (e.g. vg00) without having quorum. The state
of the system after the boot operation completes depends on what file systems and files are on
the missing disk(s).

Often used to boot a system that has lost one disk out of a two-disk vg00 where all of the
logical volumes on the boot disk have been mirrored to the second disk. In this instance, once
the system has been booted without quorum, the system is fully operational.


LVM Data Structure Backup


# vgcfgbackup vg01

[Figure: vgcfgbackup reads the PVRA and VGRA from each physical volume (PV1,
PV2, PV3) of vg01 and writes them to vg01.conf; the logical volume contents
and bad block pools are not copied]

Only LVM data is backed up by vgcfgbackup.

• vgcfgbackup runs automatically whenever:

  – disks are added to or removed from a volume group
  – boot disks in a volume group are changed
  – logical volumes or volume groups are created or removed
  – logical volumes or volume groups are extended or reduced

• vgcfgbackup saves the configuration information in /etc/lvmconf/vgXX.conf

• The vgcfgbackup command backs up volume-group configuration information into
  binary files, one file per volume group.

• vgcfgbackup backs up the LVM record of each LVM disk, the BDRA record for each
  bootable LVM disk, one copy each of the current VGDA and VGSA data structures, and
  the LIF LABEL files.

• vgcfgbackup does not back up the LIF header and LIF files, nor the bad block
  directory.

• The automatic backup can be suppressed by giving the -A n option to the LVM
  command being run.
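
A manual backup and a quick check of the resulting file might look like the following
sketch (vg01 is the example volume group used above):

```shell
# vgcfgbackup /dev/vg01
Volume Group configuration for /dev/vg01 has been saved in
/etc/lvmconf/vg01.conf

# ll /etc/lvmconf/vg01.conf
```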


LVM Data Structure Restore


# vgcfgrestore -n vg01 /dev/rdsk/c0t2d0

[Figure: PV2 of vg01 has crashed; vgcfgrestore writes the PVRA and VGRA saved
in vg01.conf back to the replacement disk at /dev/rdsk/c0t2d0. Logical volume
contents and the bad block pool are not restored]

Only LVM data is restored by vgcfgrestore.

Volume groups to be restored must be deactivated (made not available) first:

• vgchange -a n vg01

To restore the LVM data structures on volume group vg01:

# vgchange -a n vg01
# vgcfgrestore -n vg01 /dev/rdsk/c0t2d0
# vgchange -a y vg01

Customer data then has to be restored from backup to all affected logical volumes if it
could not be saved.


Recovering without an LVM Backup

[Figure: a volume group of three disks; PV1 holds lvol1 and lvol2, PV2 holds
lvol3 and lvol4, PV3 holds lvol5 and lvol6]

Under certain circumstances, you can recreate a volume group and save the data without
having a vgcfgbackup. The exact circumstances under which you can do this are:
1. The LVM structures are corrupt or incorrect, but the user data remains intact. For
example, if you inadvertently used the pvcreate -f command on the disk, you would
create this situation.
2. You know the exact layout of the volume group. This means that you know the exact size
and location of each logical volume in the volume group. Typically, this information
would need to have been recorded before the corruption occurred by a tool such as
LVMcollect.

This is a summary of the procedure for recreating the volume group.


1. vgexport the volume group.
2. Use the mkdir and mknod commands to recreate the volume group directory and
group files.
3. pvcreate the disks in the volume group.
4. vgcreate and vgextend the volume group to include all disks.
5. Use lvcreate to recreate each of the logical volumes in the volume group. Each
logical volume must be recreated with the exact same size, name and location as
before the corruption. This means the extent mapping for each logical volume now
points to the same extent(s) on the disk(s) as the original logical volumes and the data
is now available.
6. Mount, verify, and use the file systems in the logical volumes that were repaired.
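
The six steps can be sketched as follows. The volume group name vgdata, the device
files, the minor number, and the logical volume names and sizes are all placeholders;
in a real recovery they must match the original configuration exactly:

```shell
# 1. Remove the old (corrupt) definition of the volume group.
vgexport /dev/vgdata

# 2. Recreate the volume group directory and group file
#    (0x010000 is a placeholder minor number; it must be unique).
mkdir /dev/vgdata
mknod /dev/vgdata/group c 64 0x010000

# 3. Re-initialize the disks (-f because they still carry LVM data).
pvcreate -f /dev/rdsk/c0t1d0
pvcreate -f /dev/rdsk/c0t2d0

# 4. Recreate the volume group with all of its disks.
vgcreate /dev/vgdata /dev/dsk/c0t1d0
vgextend /dev/vgdata /dev/dsk/c0t2d0

# 5. Recreate every logical volume with its original name, size and
#    location, in the original creation order (sizes here are examples).
lvcreate -n lvol1 -L 100 /dev/vgdata
lvcreate -n lvol2 -L 200 /dev/vgdata

# 6. Check and mount the recovered file systems.
fsck /dev/vgdata/rlvol1
mount /dev/vgdata/lvol1 /data1
```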

Recovery with vgimport (step 1)


# vgscan

[Figure: vgscan reads the disks and reports a suggested volume group (vg??)
and its logical volumes (lvol?) for each group of disks it finds]

Using vgimport to recover a lost volume group involves two steps:

1) vgscan

This scans the disks and suggests which groups of disks should be vgimport’ed.


Recovery with vgimport (step 2)


# vgimport vg01 /dev/dsk/c0t2d0

[Figure: vgimport recreates the vg01 device directory and its logical volume
files (lvol1, lvol2) for the disks found by vgscan]

2) vgimport

The group file must be created first with mknod before doing this.

The vgimport command needs the desired name of the volume group and the device files,
as found by vgscan, of all its Physical Volumes. vgimport will then create all the logical
volume device files required and update /etc/lvmtab.

NOTE If the logical volumes had names other than lvol1, lvol2, etc., these can simply
be renamed using mv. Alternatively, if a mapfile had been available, it could
be specified with vgimport -m <mapfile>.


USING VGIMPORT AND VGEXPORT EFFECTIVELY

The following is taken from an article in PA-NEWS number 198.

A system recovery that involves a re-install, many VGs, and re-ordered instance
numbers (thanks to auto-configuration) can be tedious.

The utility most people will try to use first is vgscan.

However, vgscan only works if:

• All LVM device files for the VG exist


• All the information contained in those device files is scrupulously correct (minor
numbers, etc.).
• The /etc/lvmtab file contains no mention of the VG.
• The /etc/lvmtab file does not have the PV device file instance numbers assigned to
another VG.

If the LVM information is incorrect or unknown then it is better to use vgexport and
vgimport than to try to guess it using rmsf/insf/vgscan.

Case #1 The Root disc crashes requiring a re-install

After the re-install you realize that the Instance numbers have changed.

Normally, a complete restore from a current full backup of the root disc (including device
files) followed by a reboot should clear this up, but in real life the customer never has one
when you need it. (Murphy’s Law ...)

What are your alternatives?

1) If the Volume Group files exist, you could use rmsf and insf to get all the device files
correctly assigned again and then vgscan. If the Volume Group files do not exist but
their names and minor numbers are known then they could be recreated, followed by
rmsf/insf/vgscan as above. This may work if the number of discs involved is
small.

2) You could recreate all the VG’s and their LV’s and then restore the data. It’s reliable but
not very practical.

3) You could use vgexport/vgimport. This way you don’t have to worry about
Instances; you only need to know which disks belonged to which VG.


Example 1

/dev/dsk/c0t5d0, c0t6d0, and c0t4d0 are all members of /dev/vg05. The system
crashes and needs to be restored. After the re-install, c0t5d0 is OK but c0t6d0 and
c0t4d0 have been reconfigured as c0t10d0 and c0t12d0 respectively. Of course, the
customer does not have a backup of /dev or /etc/lvmtab, so /dev/vg05 does not get
activated at boot time.

You have to recreate the Volume Group.

Example 1 Answer

As there is no backup, the problems lie in determining the volume group number and name,
and finding out which discs were originally in the configuration.

To recreate this group:

1) # vgscan
   This will tell you which discs have the same associated information.

2) # mkdir /dev/vg05; mknod /dev/vg05/group c 64 0x0y0000
   (substitute the VG number for y)

3) # vgimport /dev/vg05 /dev/dsk/c0t5d0 /dev/dsk/c0t10d0 /dev/dsk/c0t12d0

Example 2
Instead of a crash, the disks were moved around onto different busses to improve utilization.
The mappings after modification and boot are identical to the example above. This time two
of the three PVs that make up the group did not line up with their original instances; quorum
was not met, so the VG was not activated. Restore the LVM configuration.

Example 2 Answer

In this case, the directory /dev/vg05 exists and includes its LV device files. Also, the VG
and its original PV’s are listed in /etc/lvmtab. We can use vgexport/vgimport to
include the PV’s in the VG with their new LU numbers.

1) Note the minor number of /dev/vg05/group - this will be removed by vgexport.

2) If the VG did get activated somehow (manually or quorum was met) de-activate it using
vgchange -a n /dev/vg05

3) vgexport -m /tmp/mapfile /dev/vg05

4) mkdir /dev/vg05; mknod /dev/vg05/group c 64 0xXX0000
   (substitute the VG number for XX)


5) vgimport /dev/vg05 /dev/dsk/c0t5d0 /dev/dsk/c0t10d0 /dev/dsk/c0t12d0

6) vgchange -a y /dev/vg05

Case #2 Could we have a spare disc ready for online replacement of a failed disc?

This is possible, but while the disc was being replaced the VG would not be available.

The better (and more costly) solution is LVM mirroring.

Example 3

/dev/dsk/c0t6d0, c0t4d0 and c0t8d0 are members of /dev/vg03.

/dev/dsk/c0t9d0 is an empty disc configured into the system but not a member of any
group. c0t4d0 has problems: recover using c0t9d0.

Example 3 Answer

Replace it with c0t9d0:

1) Note the minor number of /dev/vg03/group - this will be removed by vgexport.

2) If the VG did get activated somehow (manually or quorum was met) de-activate it using
vgchange -a n /dev/vg03

3) vgexport -m /tmp/mapfile /dev/vg03

4) mkdir /dev/vg03 ; mknod /dev/vg03/group c 64 0x0y0000
   (substitute the VG number for y)

5) vgcfgrestore -n /dev/vg03 -o /dev/dsk/c0t4d0 /dev/dsk/c0t9d0

6) vgimport /dev/vg03 /dev/dsk/c0t6d0 /dev/dsk/c0t9d0 /dev/dsk/c0t8d0

7) vgchange -a y /dev/vg03

8) Use pvdisplay -v /dev/dsk/c0t9d0 | more to see which LV’s were affected


by the loss of this disc, and restore data to those LV’s.


Recreating “/etc/lvmtab”
# vgscan -v

[Figure: three disks (PV1, PV2, PV3), all belonging to vg01; vgscan recreates
the /etc/lvmtab entry:

vg01
/dev/dsk/c0t1d0
/dev/dsk/c0t2d0
/dev/dsk/c0t3d0 ]

• vgscan searches all disks that are powered up and on-line, looking for LVM information.
• It tries to group these by volume group and recreate their entries in /etc/lvmtab if
there is sufficient information available.
• If there is not sufficient information available, it will suggest that the group of disks
be vgimport’ed.

NOTE In order to recreate /etc/lvmtab, the existing /etc/lvmtab has to be
renamed or removed first.
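
The rebuild can be sketched as follows (keeping the old file under a backup name
rather than deleting it, in case the scan result is wrong):

```shell
# mv /etc/lvmtab /etc/lvmtab.old
# vgscan -v
# strings /etc/lvmtab     # verify the recreated entries
```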

VGSCAN ISSUES AND LIMITATIONS

• If a physical volume is missing from a vg, vgscan will not know or recognize that, but
the vg has information on how many pvs are part of the vg. vgscan will still import the
vg; you MIGHT get an error, but you will not have all of the vg. pvlinks can mask the
problem depending upon how many pvs you have.

Consider the following:

– You have a 2 pv vg (3 counting pvlinks)
– The non-pvlinks pv fails
– A vgscan is performed. vgscan pulls in the 2 paths to the pvlinks
pv. vgscan thinks it has met the 2 pvs in the vg when in reality it has only
one pv.


• The pvlinks primary path order is reversed. vgscan, and vgimport -s, search
paths starting at path 0 and working their way up. If you have a primary pvlink at a
higher address than the secondary, vgscan and vgimport -s will pull in the paths in
reverse order.

SUCCESS WITH LOGICAL VOLUME MANAGER
♦ Module 23: Lab

Module 23

Lab


SETUP FOR DAY 4 LABS

Day 4 labs require an HP9000 computer with three (3) physical volumes (including the root
disk) and HP-UX 10.20 with the PHKL_16751 patch. Volume group vg00 has two disks in it:
the root disk with the standard logical volumes on it, and a second disk with a logical volume
named lvol9 mounted to the /newdir directory. The file system must be listed in
/etc/fstab so it can be mounted at boot up. One additional unconfigured disk will be
required for task 5.

TASK 1: INVESTIGATE THE STATE OF THE SYSTEM WHEN BOOTING IN “NO
QUORUM” MODE

PURPOSE:

The purpose of this section is to demonstrate that:

1. When one disk in a two-disk vg00 is unavailable, the system can be booted in no-quorum
mode and some system activities can still be accomplished, and …

2. To recover the LVM structures on a disk, the system must be rebooted without the
volume group being activated, allowing the vgcfgrestore command to be used.

A. Investigate the Current LVM Configuration

Your system should have vg00 with two disks in it. The first disk will have the eight logical
volumes and the second disk should have a logical volume named lvol9. Confirm the
situation by using the ioscan -fnC disk command to see what disks are in the
system.

Use strings /etc/lvmtab to confirm that vg00 has two disks.

Complete the table to identify the disks.

ITEM SPECIAL FILE

Root Disk /dev/dsk/c ____ t _____ d _____

Other Disk /dev/dsk/c ____ t _____ d _____

Use the pvdisplay [-v] command to determine the number and name(s) of the logical
volumes on the second disk.

What logical volume(s) are on the second disk? _________________________

If you want to, confirm the information with lvdisplay -v.


B. Destroy the Second Disk

Simulate a disk failure by overwriting the LVM structures on the “other” disk with the
following command. The command will overwrite the LVM structure area of the disk without
destroying the data area.

dd if=/usr/bin/ls of=/dev/rdsk/OTHER_DISK bs=2k count=50

WARNING Using the wrong special file or not including the bs or count parameters
in the above command will destroy the wrong disk or cause other problems in
this task requiring a system rebuild.

Reboot the system normally and note any startup messages. They might look like this:

LVM : Failure in attaching PV (52.4.0) to the root volume group.
Cross device link. The disk is not a LVM disk.
LVM : Activation of root volume group failed
Quorum not present, or some physical volume(s) are missing

-----------------------------------------------------
| |
| SYSTEM HALTING during LVM Configuration |
| |
| Could not configure root VG |
| |
-----------------------------------------------------

There might be other messages associated with this error.

The important fact is that the root volume group cannot be activated because it is missing
quorum.

C. Boot in no-quorum mode to continue using the root volume group

Reboot the system in no-quorum mode.

ISL> hpux -lq

Normally, this boot mode is a temporary situation used to allow the system to boot when one
disk in a two-disk vg00 is not available and the users need to complete system activities.

As your system boots up, note the errors. They should look something like:

LVM : Failure in attaching PV (52.4.0) to the root volume group.
Cross device link. The disk is not a LVM disk.

AND

lvlnboot: Warning: couldn't query physical volume “/dev/dsk/c3t4d0”:
The specified path does not correspond to physical volume attached to
this volume group

There are often many other errors.

Use the vgdisplay command to display the status of /dev/vg00.

What is the status? ______________________________________________.

Although the second disk is unavailable, vg00 is available.

Use the bdf or mount command to list the mounted file systems. You should see all but
the one on the second disk.

This task demonstrated that it is possible to boot a two-disk vg00 with one disk
inoperative and continue to have the system files available.

The recovery of the second disk will be done in the next task.


TASK 2: LVM MAINTENANCE MODE

PURPOSE:

The purpose of this task is to fix the corrupted second disk from the previous task. The
vgcfgrestore command will be used to replace the LVM structures, which requires that
the root volume group be inactive. Booting the system in that state requires booting in
maintenance mode.

A. Reboot in maintenance mode

1. Reboot the machine, interrupt the autoboot, and boot in maintenance mode.
ISL> hpux -lm

You might see error messages complaining about the inability of the system to access the
logical volume on the second disk. Also, note that the system is in Single User mode.

2. Use the vgdisplay command to view the status of vg00.

What is the status? ________________________________________________


This is the required state of the volume group for the vgcfgrestore command.

3. Use the mount command to determine what file systems are mounted.

What file system(s) are mounted? _____________________________________

In previous tasks, after doing most LVM commands (create, remove, etc.), you
received a message similar to:

Volume Group configuration for /dev/vgXX has been saved
in /etc/lvmconf/vgXX.conf

In our case, the file /etc/lvmconf/vg00.conf has the LVM configurations for
the disk. It will be written to the disk in the next step.

4. Use the vgcfgrestore command to replace the LVM structures.

# vgcfgrestore -n /dev/vg00 /dev/rdsk/OTHER_DISK

You should receive a message similar to:


Volume Group configuration has been restored to /dev/rdsk/cXtYdZ

5. Reboot the system

Watch the startup messages. There should be no error messages relating to the missing
disk.


This completes the repair of the second disk in vg00.

TASK 3: BOOT IN MAINTENANCE MODE TO REPAIR THE LIF AREA OF THE
DISK.

PURPOSE:

The purpose of this section is to:


1. Simulate a corruption of the ISL area of the disk by removing a required file and reboot to
see the error messages.
2. Boot in maintenance mode, replace the LIF area of the disk, and cleanup.

A. Break the disk and reboot

1. Use the lvlnboot -v command to view the definitions (logical volumes) for the
boot, root, swap, and dump logical volumes. What are they?

boot __________________________________________

root __________________________________________

swap __________________________________________

dump __________________________________________

2. Use the lifls command to view the LIF area of the boot disk.

# lifls -l /dev/dsk/cXtYdZ

Note that there is a file named LABEL. What is the LABEL file’s type? __________

3. Use the lifrm command to remove (purge) the LABEL file.

# lifrm /dev/dsk/cXtYdZ:LABEL

4. Use the lifls command again to view the LIF area of the boot disk.

# lifls -l /dev/dsk/cXtYdZ

What is the type of the LABEL file now? _______________________________

This will cause the system to be unbootable.

5. Try to reboot the system in multiuser mode.

# shutdown -r 0

Document the error you get by completing the following:

ISL booting hpux


Exec failed: Cannot find ___________________ or __________

Since the kernel file has not been removed, this error is inaccurate. If you saw this message
without knowing what had previously been done, it would appear that the entire disk might
have been erased, requiring a reload.

B. Boot in Maintenance Mode and Repair

6. Boot in maintenance mode

ISL> hpux -lm

Being able to boot in this mode is required to repair unbootable disks due to LIF area
corruptions, such as a missing or corrupted LABEL file, or the need to repair disks in a
mirror environment.

7. Before the disk can be repaired, we should investigate several consequences of booting in
this mode.

• Note the error message(s) that appear on the screen during bootup.

What is the default run level? _____________________________________

• Did the system require you to login? ____________

• Use the vgdisplay command to display the status of the volume group.

# vgdisplay /dev/vg00

What is the status? _____________________________________________

• Use the mount command to list the file system(s) that are mounted.

What are they? ________________________________________________

8. Use the vgchange command to activate the volume group.

# vgchange -a y /dev/vg00

9. Use vgdisplay again to view the status.

What is the status? _________________________________________________

10. Use the mount -a command to mount all of the file systems.

# mount -a

What file does the mount -a command use that lists all of the file systems and the
mount points?

/etc/_______________


11. Use the mount command again to confirm that all file systems were mounted.

# mount

Are they all mounted? ______________________________________________

The system is now in a state where the volume group is activated and all commands
are available to repair the corruption.

12. Use the lifls command to verify the status of the LIF area on the boot disk.

# lifls -l /dev/dsk/cXtYdZ

What is the status of the LABEL file? __________________________________

13. Use the mkboot command to put a new set of ISL utilities on the disk.

# mkboot /dev/dsk/cXtYdZ

14. Use the lifls command again to verify the status of the LIF area on the boot disk.

# lifls -l /dev/dsk/cXtYdZ

What is the status of the LABEL file? __________________________________

15. Use the lvlnboot command to view the definitions for the boot, root, swap, and
dump logical volumes. Compare the output to step 1.

boot __________________________________________

root __________________________________________

swap __________________________________________

dump __________________________________________

16. You should see that there is no boot logical volume configured. This is normal behavior
of the mkboot command. It can be fixed with the lvlnboot -b command. Since
the kernel file is in the /stand directory, and the /stand directory is in
/dev/vg00/lvol1, it will be used as the argument to the lvlnboot command.

# lvlnboot -b /dev/vg00/lvol1

17. Use the lvlnboot command again to view the definitions for the boot, root, swap, and
dump logical volumes and assure they are correct.

18. When the output of lvlnboot is correct, reboot the machine.


19. Verify that there are no LVM or mount start-up error messages and that the output of
lvlnboot -v and mount (or bdf) is correct. If the output of mount (or bdf)
shows the root directory mounted on /dev/root, proceed to step 20; otherwise, go to
TASK 4.

20. HP-UX keeps a list of mounted file systems in the file named /etc/mnttab. This file
was not updated properly, and has an improper entry. Fix the file by:

• Remove /etc/mnttab with the rm command

• Re-mount all of the file systems with the mount -a command. Ignore the errors
concerning file systems already being mounted.

• Verify that the output of the mount (or bdf) shows the correct mounts.

• A reboot should mount the root file system as expected.
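
The fix above can be sketched as:

```shell
# rm /etc/mnttab
# mount -a     # rebuilds /etc/mnttab; ignore "already mounted" errors
# mount        # the root entry should no longer show /dev/root
# shutdown -r 0
```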

This completes the task of booting a disk in maintenance mode to replace the LIF area of the
disk. You saw the consequences of a missing or corrupted LABEL file, booting in
maintenance mode, rebuilding the LIF area and fixing the /etc/mnttab file.

TASK 4: INVESTIGATE THE IMPORTANCE OF THE /etc/lvmtab FILE AND
REBUILD IT:

Purpose:

The purpose of this section is to demonstrate the importance of the /etc/lvmtab file by:

1. Viewing the file


2. Moving the file to another name
3. Attempting an LVM command and viewing the result
4. Rebuilding the file with the vgscan command

A. View the contents of the /etc/lvmtab file

1. Since the /etc/lvmtab file contains non-printable characters, use the strings
command to display the file.

# strings /etc/lvmtab

2. Use the vgdisplay command to view the status of vg00.

# vgdisplay

3. Rename /etc/lvmtab to /etc/lvmtab.bk


# mv /etc/lvmtab /etc/lvmtab.bk

4. Use the vgdisplay command again to view the status of vg00.

# vgdisplay

What was the result? ______________________________________

5. Create the /etc/lvmtab file by executing the command vgscan.

# vgscan -v

You might get the following messages as part of the result of the vgscan command:

Vgscan: Couldn’t access the list of physical volumes for volume group
“/dev/vg00”.
Physical Volume “/dev/dsk/cXtYdZ” is not part of a volume group

/dev/vg00
/dev/dsk/cAtBdC

6. Examine the contents of /etc/lvmtab with the strings command.

7. Is the result the same as those of step 1 of this task? ______________________________

TASK 5: CONFIRM THE IMPORTANCE OF THE /etc/lvmtab FILE AND
REBUILD IT AFTER ITS CORRUPTION:

Purpose:

The purpose of this section is to further confirm the importance of the /etc/lvmtab file
by:

1. Destroying a disk in a two-disk volume group

2. Creating an erroneous /etc/lvmtab file with vgscan (missing one disk)
3. Rebuilding the /etc/lvmtab file through the use of the vgexport and
vgimport commands

Let's look at a situation where one disk gets destroyed in a 2-disk volume group (failure or
user error). In trying to fix the problem, the /etc/lvmtab file is removed and rebuilt
without the “problem” disk. This results in a situation where the volume group can be
created with one physical volume but the information in the VGRA on the physical volume
still expects to find two physical volumes in the volume group. A vgextend should add a
second physical volume to /etc/lvmtab but this increases the number of disks to three
in the VGRA.

1. Reduce /dev/vg00 to contain only the root disk.


2. Use pvcreate to clean all physical volumes except the root physical volume.

3. Now, create a new volume group with two disks.

WARNING Ensure that you do not choose two special device files pointing to the same
device (e.g. pvlinks).

# pvcreate -f /dev/rdsk/<first_phys_vol>
# pvcreate -f /dev/rdsk/<second_phys_vol>
# mkdir /dev/vgstruct
# mknod /dev/vgstruct/group c 64 0xZZ0000
# vgcreate /dev/vgstruct /dev/dsk/<first_phys_vol>
# vgextend /dev/vgstruct /dev/dsk/<second_phys_vol>
# strings /etc/lvmtab

4. Simulate a failure on the second physical volume.

# dd if=/stand/vmunix of=/dev/rdsk/<second_phys_vol> \
bs=1024k

5. Flush memory and reload the volume group information.

# vgchange -a n vgstruct
# vgchange -a y vgstruct

You should receive an error indicating “quorum” is NOT met.

6. Activate the volume group and turn off quorum.

# vgchange -a y -q n vgstruct

You should get a message concerning cross-device link.

7. Try to remove the second physical volume.

# vgreduce vgstruct /dev/dsk/<second_phys_vol>

8. View the /etc/lvmtab file.

9. Rename /etc/lvmtab

# mv /etc/lvmtab /etc/lvmtab.ori

NOTE Although you might know the proper way to fix this problem, do the steps as
listed to observe what can and has happened in real situations.

10. Rebuild the /etc/lvmtab file.


# vgscan -v

Did you receive an error about vgstruct? ____________________


Did it include the volume group vgstruct? ___________________

You should have only one physical volume associated with vgstruct.

View the /etc/lvmtab file again to see the results of this vgscan.

11. Deactivate the volume group.

# vgchange -a n vgstruct

Observe the messages that are displayed. Did the volume group deactivate? _______

12. Put the LVM information back on the physical volume that you corrupted.

# vgcfgrestore -n vgstruct /dev/rdsk/<second_phys_vol>

Will this add this physical volume to /etc/lvmtab? __________________

13. Reactivate the volume group and observe the messages.

# vgchange -a y vgstruct

You should see an error indicating that the second device cannot be seen.

NOTE If you do NOT see an error, execute vgdisplay vgstruct and observe
the Current Physical Volumes. This should be 1 instead of 2. You can also do
a pvdisplay /dev/dsk/<second_phys_vol> and this should result
in an error.

14. Try to extend the volume group to include this physical volume.

# vgextend vgstruct /dev/dsk/<second_phys_vol>

Did it work? _____________________________________________________

vgextend recognizes volume group information on the disk and will NOT extend the
volume group.

15. Use pvcreate to recreate this physical volume for LVM use.

# pvcreate -f /dev/rdsk/<second_phys_vol>

16. Extend the volume group to include this physical volume.

# vgextend vgstruct /dev/dsk/<second_phys_vol>


Note the error message: The kernel indicates ________ disks for
/dev/vgstruct and /etc/lvmtab has _________ disks.

17. Display the contents of vgstruct and note the number of Cur(rent) PVs.

# vgdisplay vgstruct

Cur PV _______________________________

Does this match what is in /etc/lvmtab?

18. Deactivate and then reactivate the volume group. Observe the message displayed.

# vgchange -a n vgstruct
# vgchange -a y vgstruct

19. The volume group expects two physical volumes but only one is listed in the
/etc/lvmtab file. Sometimes you may get a message about “Cross Device Links”.
To fix this problem, export this volume group, put the vgcfgbackup data back on the
disks and reimport the volume group.

# vgchange -a n vgstruct
# vgcfgrestore -n vgstruct /dev/rdsk/<second_phys_vol>
# vgcfgrestore -n vgstruct /dev/rdsk/<first_phys_vol>
# vgexport vgstruct
# mkdir /dev/vgstruct
# mknod /dev/vgstruct/group c 64 0x0?0000
# vgimport vgstruct /dev/dsk/<first_pv> \
/dev/dsk/<second_pv>

# vgchange -a n vgstruct
# strings /etc/lvmtab

The purpose of these last few steps was to emphasize the relationship between the
information the /etc/lvmtab file contains and what is stored on the physical
volume(s) of a volume group. The often “cryptic” messages make sense once you
understand these relationships.

Although some of the steps seem “unusual”, this scenario HAS occurred (more than once) by
using vgscan at the wrong time.

SUCCESS WITH LOGICAL VOLUME MANAGER
♦ Appendix A: Quick Reference

LVM COMMANDS - QUICK REFERENCE

Physical Volume Commands


pvcreate Makes a disk an LVM disk (a physical volume).
pvdisplay Displays information about physical volumes in a volume group.
pvchange Sets physical volume characteristics to allow or deny allocation of additional
physical extents from this disk.
pvmove Moves allocated physical extents from source to destination within a volume
group.

Volume Group Commands


vgcreate Creates a volume group.
vgdisplay Displays information about volume groups.
vgchange Activates or deactivates one or more volume groups. Allows a volume group
to mount with or without a quorum.
vgextend Extends a volume group by adding disks to it.
vgreduce Reduces a volume group by removing one or more disks from it.
vgscan Scans all disks and looks for logical volume groups.
vgsync Synchronizes mirrors that are stale in one or more logical volumes.
vgremove Removes definition(s) of volume group(s) from the system.
vgexport Removes a volume group from the system without modifying the information
found on the physical volume(s).
vgimport Adds a volume group to the system by scanning physical volumes which have
been exported using vgexport.
vgcfgbackup Saves the configuration information for a volume group.
Remember that a volume group is made up of one or more physical volumes.
vgcfgrestore Restores the configuration information for a volume group.

Logical Volume Commands


lvcreate Creates a logical volume.
lvdisplay Displays information about logical volumes.
lvchange Changes characteristics of logical volume including availability, scheduling
policy, permissions, block relocation policy, allocation policy, mirror cache
availability.
lvextend Increases disk space allocated to a logical volume.
extendfs Extends the size of a filesystem residing on a logical volume.
lvreduce Decreases disk space allocated to a logical volume.
lvremove Removes one or more logical volumes from a volume group.
lvsplit Splits a mirrored logical volume into two logical volumes.
lvmerge Merges the lvsplit logical volumes into one logical volume.
lvsync Synchronizes mirrors that are stale in one or more logical volumes.
lvmmigrate Prepares a root file system for migration from a partition to a logical volume.
lvlnboot Sets up a logical volume to be a root, boot, primary swap, or dump volume.
lvrmboot Removes the designation of a logical volume as a root, primary swap, or
dump volume.

