
People showed me two alternatives to recreate /etc/lvmtab:

1) vgscan (many people suggested this one, but the man page carries so
   many warnings and so much "this is a last resort" language
   that I'll try to avoid it. Has anyone actually used it?)

2) vgexport and then vgimport

The problem comes from a mismatched count of PVs between /etc/lvmtab and the kernel

(the kernel thinks it has a disk it hasn't).

The other alternative is to run vgreduce -f vgname

This command searches for and forcibly removes disks it can't find.
In my case, the problem was that I had left one mirror copy of a logical volume
pointing to that disk. To remove it:

lvdisplay -k /dev/vgname/lvname (you need -k in order to show the PV key number
                                 instead of "/dev/dsk/c_t_d_", because what
                                 appears there is "???")

lvreduce -k -m 0 /dev/vgname/lvname pvnumber

And now that the physical volume was truly empty, I ran vgreduce -f,
and that did the trick.
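Put together, the sequence above might look like this (a sketch only; vg03, lvol1, and PV key 2 are hypothetical placeholders):

```shell
# Sketch -- vg03, lvol1 and PV key 2 are hypothetical placeholders.
lvdisplay -v -k /dev/vg03/lvol1      # -k shows PV key numbers instead of "???" device paths
lvreduce -k -m 0 /dev/vg03/lvol1 2   # drop the mirror copy that lived on missing PV key 2
vgreduce -f /dev/vg03                # now the forced reduce can remove the missing PV
```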

The "standard procedure" to replace a disk, according to most people, is:

- remove the bad drive
- install the new one
- pvcreate ...
- vgcfgrestore -n /dev/vgname /dev/rdsk/c_t_d_
- vgchange -a y /dev/vgname
- vgsync /dev/vgname
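With concrete (hypothetical) names filled in, that replacement might look like:

```shell
# Sketch, assuming the failed disk is c2t7d2 and the volume group is vg03.
pvcreate /dev/rdsk/c2t7d2                      # initialize the replacement disk
vgcfgrestore -n /dev/vg03 /dev/rdsk/c2t7d2     # write the saved LVM headers back
vgchange -a y /dev/vg03                        # (re)activate the volume group
vgsync /dev/vg03                               # resynchronize stale mirror extents
```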



-----Original Message-----
Today, I've tried to lvremove a logical volume and got an error,
the same one that I paste here from a vgcfgbackup:

sid02_en_06# vgcfgbackup vg03

vgcfgbackup: /etc/lvmtab is out of date with the running kernel: Kernel indicates
4 disks for "/dev/vg03"; /etc/lvmtab has 3 disks.
Cannot proceed with backup.

I think it is a consequence of a change of disks I made last week. The disk
was destroyed (yes, it literally stopped functioning; the system could not even see it).
I lvreduced the LVs, vgreduced the VG, and took out the disk, but now I have this mismatch.

What can I do to recover ?

For next time, what procedure do you follow to safely remove a destroyed disk?
(Obviously, I have everything mirrored.)

lvmtab is out of date with running kernel

When running any LVM command against a particular volume group, it fails with:

vgcfgbackup: /etc/lvmtab is out of date with the running kernel:

Kernel indicates # disks for "/dev/vg_name"; /etc/lvmtab has # disks.
Cannot proceed with backup.

The above error indicates a serious problem with the volume group. No changes
should be made to the volume group configuration prior to repairing the volume group.

This error indicates that the vgdisplay(1m) fields Cur PV and Act PV
disagree. Cur PV and Act PV should always agree for the volume group. This
error also indicates that the /etc/lvmtab file, which is used to determine which
physical volumes belong to a volume group, is out of date with the LVM data
structures in memory and on disk. vgcfgbackup(1m) cannot complete
successfully whenever the number of current physical volumes disagrees with the
number of active physical volumes. Modifying the volume group while in this
state could cause the vgcfgbackup(1m) backup of the volume group to be
inconsistent with the volume group itself, resulting in a more difficult
repair/recovery process.

Each physical volume of each volume group has a counter indicating the number
of physical volumes currently within the volume group. This information is
contained within the disk's volume group reserve area (VGRA). The above error
indicates the information within the VGRA shows a different number of physical
volumes than the system currently sees attached to this volume group. At
group activation time the /etc/lvmtab file is used by the system to know what
physical volumes belong to each volume group.

This document will explain what to look at and how to repair this situation.
Use the following steps to isolate and repair the problem:

1. Try to locate missing disk device(s).

Isolating what happened to the volume group to get it into this state can
be very difficult. Here are some suggestions:
a. Use the command strings /etc/lvmtab or
vgdisplay -v /dev/vg_name to see what disk devices are
currently attached to the volume group.

b. Check the date of, and the physical volumes contained in, the last good
   configuration backup.

Use: ll /etc/lvmconf/VG_NAME.conf to see the date of the last

good backup.

NOTE: If the volume group has been modified since the time of the
last good vgcfgbackup, then there is the potential that the backup file
is out of date with the LVM data structures on the disk(s) attached to
this volume group. If this is the case then vgcfgrestore(1m)
may no longer work for this volume group.

Use: vgcfgrestore -n /dev/VG_NAME -l to see the list of

physical volumes contained within the last good backup.

Use the list from the above vgcfgrestore(1m) command to compare

to the list from step 1a to see if there are differences. If there is
a physical volume listed in the backup listing that is not in the
/etc/lvmtab file, you may be able to vgcfgrestore(1m) to that
physical volume. Make sure the physical volume is unused before
overwriting with vgcfgrestore. See step 2 for details.

c. Check the system for old copies of /etc/lvmtab file.

A common reason for the system to be in this state is that the lvmtab
was recreated while the system was unable to communicate with one or more of the
physical volumes belonging to the volume group. If the lvmtab is
recreated while the system is unable to query a physical volume, that
physical volume will not be added to the lvmtab file. One can see how
this can cause the lvmtab to become mismatched with kernel memory.

To check for old copies of /etc/lvmtab, use the following:

ll /etc/lvmtab* or ll /tmp/lvmt*

Use strings(1) on the backup lvmtab file to try to determine

which disk device(s) in the volume group differ in comparison
with the current lvmtab file.
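One way to spot the difference is to dump both files with strings(1) and diff them (a sketch; /etc/lvmtab.old is a hypothetical saved copy):

```shell
strings /etc/lvmtab     > /tmp/lvmtab.cur
strings /etc/lvmtab.old > /tmp/lvmtab.old     # hypothetical older copy
diff /tmp/lvmtab.old /tmp/lvmtab.cur          # lines only in the old copy are candidate missing PVs
```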

If the missing physical volume cannot be determined, or the missing physical
volume cannot be re-added to the problem volume group, then skip to Step 3.
Reasons the physical volume could not be re-added include that it has been added
to another volume group and cannot be removed, or that it is no longer physically
connected to this system.

2. If possible, restore the missing physical volume into the volume group.

a. Verify that the missing physical volume and its alternate path(s), if
   any, are not in use.

Use strings /etc/lvmtab to verify that the physical volume and

any of its alternate paths do not belong to any other volume group and
are not mounted or in use by applications on the system. A common
error that can lead to this type of problem is when an alternate path
is added to a different volume group than the primary path. Consult
your disk device's manuals to determine if it is capable of using
pvlinks or alternate paths. The system will not allow an alternate
link to a different volume group unless pvcreate -f
is executed. pvcreate(1m) is not needed to add an alternate
path to a volume group and should only be run on the primary path.

b. If missing physical volume(s) and alternate path(s) are not in use then
use vgcfgrestore(1m) to restore the physical volume(s) to the
volume group.

Example: vgcfgrestore -n /dev/vg03 /dev/rdsk/c2t7d2

c. If needed, restore or recreate /etc/lvmtab.

If the /etc/lvmtab does not contain the physical volume(s) that were
vgcfgrestored to, then this file must be updated. If the lvmtab shows
the correct physical volumes then skip this step.

If the /etc/lvmtab does not show the correct physical volumes and you
were able to find an old /etc/lvmtab file in the previous step then save
the current version of /etc/lvmtab and copy the old lvmtab backup file
into place. Use the strings(1) command to ensure all volume
groups show the correct physical volumes before changing the lvmtab file.

If the /etc/lvmtab file is not correct and there is no correct old copy of

/etc/lvmtab, use vgexport(1m) and vgimport(1m)
to correct the lvmtab.

NOTE: For the root volume group, typically vg00, you must first
boot into LVM maintenance mode. See below for details.

1. vgchange -a n /dev/vg_name
2. vgexport -m /tmp/mapfile /dev/vg_name
3. mkdir /dev/vg_name
4. mknod /dev/vg_name/group c 64 0x0X0000

NOTE: The minor number (0x0X0000) must be unique for each volume
group. Substitute X with a number not in use on the system.
Use: ll /dev/*/group to see existing group files on the system.

Example: mknod /dev/vg01/group c 64 0x010000

5. vgimport -m /tmp/mapfile /dev/vg_name /dev/dsk/pv_name

NOTE: The above command requires each physical volume in the

volume group to be specified at the end of the command. This
allows the lvmtab to be correctly rebuilt with all physical
volumes belonging to the volume group.
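Steps 1 through 5 above, with hypothetical names filled in (vg01 with two disks and an unused minor number 0x010000), might look like:

```shell
vgchange -a n /dev/vg01
vgexport -m /tmp/vg01.map /dev/vg01
mkdir /dev/vg01
mknod /dev/vg01/group c 64 0x010000                  # minor number must be unique
vgimport -m /tmp/vg01.map /dev/vg01 /dev/dsk/c1t2d0 /dev/dsk/c2t2d0
```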
d. Activate the volume group.

Use: vgchange -a y /dev/vg_name to activate the volume group

after restoring any missing physical volumes. If everything completed
correctly, then vgdisplay /dev/vg_name should show that Cur PV and Act
PV now agree.

e. Get a backup of the volume group.

Use vgcfgbackup /dev/vg_name to ensure there is a good

volume group backup now that Cur PV and Act PV agree.

3. If you cannot locate the missing disk device or cannot restore that
device back into the volume group, then use vgreduce(1m) to forcibly
reduce out the missing physical volume.

NOTE: vgreduce -f should be used as a last resort. If

vgcfgrestore(1m) cannot be used to make the Cur PV and Act PV agree,
then vgreduce -f may be required. Here are the steps to successfully
use vgreduce -f:

a. Get a list of logical volumes belonging to the volume group.

Use: vgdisplay -v /dev/vg_name to get a list of logical volumes

for the volume group.

b. Find out which logical volume(s) reside on the disk device(s) to be

forcibly reduced.

Use lvdisplay -v /dev/vg_name/lv_name | more to see if any

of the logical volumes' extents show ??? in the PV section. Page
through every logical extent for each logical volume in the volume
group. ??? indicates that the extents shown reside on a physical
volume that the system is unable to query. Any logical volume with
??? will have to be removed using lvremove(1m) in order for
vgreduce -f to complete successfully.
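Paging through every logical volume by hand is tedious; a loop like the following (a sketch, assuming a hypothetical vg03) flags the affected logical volumes:

```shell
# List logical volumes in vg03 whose extent map shows "???" (missing PV).
for lv in $(vgdisplay -v /dev/vg03 | awk '/LV Name/ {print $3}'); do
    if lvdisplay -v "$lv" | grep -q '???'; then
        echo "$lv has extents on a missing physical volume"
    fi
done
```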

c. Remove logical volumes with ??? in their lvdisplay(1m) output.

Since logical volumes that show ??? have missing or unavailable data,
they will have to be removed. In order for vgreduce -f to succeed,
all logical volumes with extents on the physical volume to be reduced
must first be removed. Once the volume group is in the correct state,
Cur PV = Act PV, the logical volumes can be recreated and any lost data
restored from backup.

Use: lvremove /dev/vg_name/lvol_name

d. Forcibly reduce out the physical volume.

Use: vgreduce -f /dev/vg_name

NOTE: The above command does not require a physical volume argument. It
must be run on an active volume group.
e. If the vgreduce -f command does not work or does not give any
error and vgdisplay(1m) still shows that Cur PV and Act PV
disagree then use the following steps to vgexport and vgimport the
volume group prior to trying Step 3d again.

This procedure can be used when vgreduce fails to reduce a physical

volume that can no longer be queried by the system. If executing the
following procedure on the root volume group, usually vg00, you must
first boot into LVM maintenance mode (** For steps see below).

1. Get the /dev/vg_name/group minor number and physical volumes

belonging to the volume group.

Use: ll /dev/vg00/group to get 0x###### minor number.

vgdisplay -v /dev/vg_name to get physical volumes.

2. vgchange -a n /dev/vg_name
NOTE: Skip this step if booting maintenance mode for the root volume group.

3. vgexport -m /mapfile /dev/vg_name

4. mkdir /dev/vg_name

5. mknod /dev/vg_name/group c 64 0x0#0000

Re-use minor number obtained from step 1.

6. vgimport -m /mapfile /dev/vg_name pv_name [pv_name ...]

NOTE: Specify all the physical volumes obtained from step 1. Do not
include the physical volume that you are trying to remove or
that couldn't be queried.
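As a worked (hypothetical) example, suppose vg03 has minor number 0x030000 and originally contained c1t1d0, c2t2d0, and the now-missing c2t7d2:

```shell
ll /dev/vg03/group                   # confirm the minor number, here assumed 0x030000
vgchange -a n /dev/vg03
vgexport -m /mapfile /dev/vg03
mkdir /dev/vg03
mknod /dev/vg03/group c 64 0x030000
vgimport -m /mapfile /dev/vg03 /dev/dsk/c1t1d0 /dev/dsk/c2t2d0   # omit the missing c2t7d2
vgchange -a y /dev/vg03              # reactivate before retrying vgreduce -f
```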

** Steps to boot into maintenance mode and activate:

1. shutdown -hy now
2. interrupt boot sequence
3. boot from primary boot path and interact with ISL

NOTE: The procedure used for steps 2 and 3 may vary slightly depending
on the machine model.

4. enter the following at the IPL> prompt:

IPL> hpux -lm (;0)/stand/vmunix

f. Retry the vgreduce -f command specified in step 3d.

This time the vgreduce should succeed and give you a message
similar to: "PV with key # successfully deleted from vg /dev/vg_name".
It should also display:

Repair done, please do the following steps.....:

1. save /etc/lvmtab to another file
2. remove /etc/lvmtab
3. use vgscan(1m) -v to recreate /etc/lvmtab
4. NOW use vgcfgbackup(1m) to save the LVM setup

Follow the above 4 steps.
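Those four steps, sketched for a hypothetical vg03:

```shell
cp /etc/lvmtab /etc/lvmtab.save      # 1. save /etc/lvmtab to another file
rm /etc/lvmtab                       # 2. remove /etc/lvmtab
vgscan -v                            # 3. recreate /etc/lvmtab
vgcfgbackup /dev/vg03                # 4. save the now-consistent LVM setup
```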

vgdisplay /dev/vg_name should now show that Cur PV and Act PV agree.