In some cases, someone may accidentally overwrite the PV header configuration on a disk, or run pvcreate on a disk device that is only one path of a multipath device.

Is there a way to recover from this mistake?

Yes, there is. The LVM pvcreate command only writes the header configuration to the physical volume; it does not format the disk. The vgdisplay command will show no problem until the system is rebooted or the original volume group is reactivated.

Here is how

Assume that the pvcreate -f command was accidentally run on the /dev/sdaa physical volume, which is an alternate path of /dev/sdbb; both point to the multipath device /dev/mpath24.

At this point the vgdisplay command still shows two disks, both available, and the Cur PV and Act PV fields show the same number (2).

When the server is rebooted, LVM is no longer able to interpret the configuration and the problem reveals itself.
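
Before starting the recovery, it helps to confirm which path had its header overwritten. A minimal check, assuming the pvdisplay command can be run against the individual paths in this environment:

# pvdisplay /dev/sdaa
# pvdisplay /dev/sdbb

If one path reports a different volume group name than the other (or reports no volume group at all), that is the path whose header was rewritten by pvcreate.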

Recovery steps

The following procedure requires a valid prior LVM volume group configuration file and a map file.
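
If these files were not saved beforehand, they can usually be generated from a healthy copy of the configuration, ideally before the problem occurs (for example as part of routine backups). A sketch, assuming HP-UX-style LVM tooling consistent with the rest of this procedure and using the /tmp/<vgname>.map path referenced in step 7:

# vgcfgbackup /dev/<vgname>
# vgexport -p -v -m /tmp/<vgname>.map /dev/<vgname>

The vgcfgbackup command writes the configuration file to /etc/lvmconf by default, and the preview (-p) export only writes the map file without removing the volume group from the lvmtab.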

1. Make sure the volume group is deactivated (this may not be needed if the server has already been rebooted).

# vgchange -a n /dev/<vgname>
Volume group "<vgname>" has been successfully changed.

2. Verify that the PV list in the configuration file shows the previous configuration.

# vgcfgrestore -l -f /etc/lvmconf/<vgname>.conf.old
Volume Group Configuration information in "<vgname>.conf.old"
VG Name /dev/<vgname>
 ---- Physical volumes : 1 ----
    /dev/mapper/mpath24 (Non-bootable)
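
If it is not clear which backup file corresponds to the volume group, the default backup directory can simply be listed (assuming the standard /etc/lvmconf location used in the next step):

# ls -l /etc/lvmconf/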

3. Restore the VG header information from the previous configuration file. It should be restored to the original physical volume.

# vgcfgrestore -f /etc/lvmconf/<vgname>.conf.old /dev/mapper/mpath24
Volume Group configuration has been restored to /dev/mapper/mpath24

4. Remove the VG configuration from the lvmtab.

# vgexport /dev/<vgname>
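
To confirm the entry is really gone, the lvmtab can be inspected; it is a binary file, so the strings utility is a convenient way to read it (assuming the table is in its usual /etc/lvmtab location):

# strings /etc/lvmtab

The exported volume group and its physical volume paths should no longer appear in the output.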

5. Recreate the VG directory:

# mkdir /dev/<vgname>

6. Recreate the volume group "group" device node file. Remember that the minor number must be unique; a way to check the minor numbers already in use is shown after the command.

# mknod /dev/<vgname>/group c 64 0x0#0000
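
The # in the minor number above is a placeholder for the digit that makes this group file unique. One quick way to see which minor numbers are already taken, assuming existing volume groups follow the usual /dev/<vgname>/group layout:

# ls -l /dev/*/group

Pick a value that does not collide with any minor number shown in that listing.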

7. Reimport the volume group. The LV structures will be recovered from the map file.

# vgimport -m /tmp/<vgname>.map /dev/<vgname> /dev/mapper/mpath24
Warning: A backup of this volume group may not exist on this machine.
Please remember to take a backup using the vgcfgbackup command after activating the volume group.

8. Reactivate the volume group. Some warning messages may be received.

# vgchange -a y /dev/<vgname>
Volume group "<vgname>" has been successfully changed.
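
As the warning in step 7 suggests, take a fresh configuration backup once the volume group is active again (this writes to the same /etc/lvmconf location used earlier):

# vgcfgbackup /dev/<vgname>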

9. Verify the current LVM configuration. Some parameters to check are the current and active PV numbers (both should be the same), the Used PV parameter for each logical volume, and the status of each PV (it should be available):

 # vgdisplay -v /dev/<vgname>
 --- Volume groups ---
 VG Name                     /dev/<vgname>
 VG Write Access             read/write
 VG Status                   available
 Max LV                      255
 Cur LV                      1
 Open LV                     1
 Max PV                      16
 Cur PV                      1
 Act PV                      1
 Max PE per PV               1000
 VGDA                        2
 PE Size (Mbytes)            4
 Total PE                    1000
 Alloc PE                    100
 Free PE                     900
 Total PVG                   0
 Total Spare PVs             0
 Total Spare PVs in use      0
 
    --- Logical volumes ---
    LV Name                     /dev/<vgname>/lv01
    LV Status                   available/syncd
    LV Size (Mbytes)            400
    Current LE                  100
    Allocated PE                100
    Used PV                     1
 
 
    --- Physical volumes ---
    PV Name                     /dev/mpath24
    PV Status                   available
    Total PE                    1000
    Free PE                     900
    Autoswitch                  On
    Proactive Polling           On