Recovering Proxmox VMs from an Encrypted RAID Array

Initial Thoughts: Blog Reboot

It has been a few years since this blog has been active, or frankly seen the light of day, beyond some web caches I enabled a while back. The biggest reason for the blog phasing out was that the first server I ever built (back in 2010) developed a failing motherboard shortly after the birth of our first child. Now, shortly after the birth of our second child and with the abundance of free time this crisis has afforded us all, I’ve decided to give the blog and my good old Proxmox server a reboot.

Recovering the RAID Array

Since I didn’t have the original working hardware and hadn’t retained good VM backups, I had to recover the VMs from just the encrypted drives. That meant the first step in this journey of recovering Proxmox VMs was dealing with an encrypted RAID 1 storage array.

To begin the recovery process I simply put the two "high-end" 120 GB SSDs in the new server I built to run the latest version of Proxmox. Next we boot up the server and use mdadm to examine the drives and identify the RAID format.

lsblk # to list all the current block devices

mdadm -E /dev/sd[c-d] # use mdadm to examine the RAID superblocks

Then we can use mdadm to re-assemble the RAID. In this case, since the drives are mirrors, it likely doesn’t matter, but for other RAID levels this will be a required step to complete a full recovery.

mdadm --assemble /dev/md1 /dev/sd[c-d]1 # use mdadm to re-assemble
# the raid as md1

Decrypting the LVM Structure

Next we need to decrypt the RAID device so we can access the logical volumes and mount them to begin recovering Proxmox VMs from the encrypted file system.

cryptsetup luksOpen /dev/md1 encrypted_pve # open the LUKS encrypted device

Now that the encrypted device has been opened and mapped as encrypted_pve, we can access the volume group within the encrypted partition with standard LVM commands.

vgdisplay --short # for instance just viewing the volume group

Now if you’re somewhat lazy like me and doing all of this work on a new installation of Proxmox, you’re likely going to have a bad time, because by default Proxmox wants its primary volume group to be named ‘pve’ and now there are two different volume groups with the same name. In order to avoid confusion going forward and stop our new Proxmox server from crashing, we should rename the old ‘pve’ volume group to something else.

vgdisplay | egrep -i "uuid|name|VG size" # use vgdisplay to 
# find the UUID of the VG

vgrename QiOPy3-WhF8-RYns-44dY-FKKN-8orV-bEi3m3 pve-old # use vgrename
# to simply rename the VG using the appropriate UUID

vgscan  # lastly we can use vgscan to reload all the vgs as a 
# sanity check
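
Depending on your setup, the logical volumes inside the renamed volume group may not be active yet. If they don’t show up under /dev/mapper, activating the volume group (assuming it was renamed to ‘pve-old’ as above) should bring them online before we try to mount anything.

vgchange -ay pve-old # activate all logical volumes in the
# renamed volume group so they appear under /dev/mapper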

Recovering Proxmox VM Config Files

Now that we have access to the original LVM structure we can go about recovering VMs from the file system. To start this process we should mount the two logical volumes Proxmox creates by default: the root and data logical volumes on our old ‘pve’ volume group.

mkdir /mnt/old-data /mnt/old-root # create directories to mount
# logical volumes

mount /dev/mapper/pve--old-root /mnt/old-root # mount old pve root

mount /dev/mapper/pve--old-data /mnt/old-data # mount old pve data

Here is the point where things diverge a bit depending on what version of Proxmox you’re recovering from and how your VMs were originally set up. Regardless, there are really only two components to a given VM in the Proxmox world: a config file and a disk image.

To recover the VM config files in newer versions of Proxmox, you would just go into the /mnt/old-root/etc/pve/qemu-server/ directory. There you would likely see a set of VM configuration files named after the VM ID numbers. Generally speaking you should be able to copy these configs over to your new Proxmox server without too much of an issue.

cd /mnt/old-root/etc/pve/qemu-server/ # go to the mounted config 
# directory

cp 100.conf /etc/pve/qemu-server/

However, if you’re like me and the VMID.conf files are nowhere to be found on the old root filesystem, that’s because in older versions of Proxmox the VM configs were stored in a SQLite database. Instead, we can use the sqlite3 command on the pve-cluster config database to view all the VM config files and extract the ones we want directly into workable config files.

sqlite3 /mnt/old-root/var/lib/pve-cluster/config.db \
'SELECT * FROM tree;' # use sqlite3 to view all of the 
# config data in the cluster config database

sqlite3 /mnt/old-root/var/lib/pve-cluster/config.db \
'SELECT data FROM tree WHERE name = "100.conf";' \
> /etc/pve/qemu-server/100.conf 
# use a sql query to extract just the config file data 
# we need and write it to the appropriate file
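
If you have more than a handful of VMs, a loop along these lines can pull all of them out in one go. This is a rough sketch that assumes the same config.db path as above and that the numeric VMID.conf entries in the tree table are the only ones matching the pattern.

for vmconf in $(sqlite3 /mnt/old-root/var/lib/pve-cluster/config.db \
    "SELECT name FROM tree WHERE name GLOB '[0-9]*.conf';"); do
  sqlite3 /mnt/old-root/var/lib/pve-cluster/config.db \
    "SELECT data FROM tree WHERE name = '$vmconf';" \
    > /etc/pve/qemu-server/"$vmconf"
done # extract every VMID.conf found in the database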

Recovering Proxmox VM Raw Disks and Disk Images

When it comes to VM disks, they can either be raw, meaning they are logical volumes provisioned within a volume group using LVM, or disk image files like qcow2s. Regardless of which type you have, or if like me you had both, the path forward is basically the same: copy each disk over to the appropriate place on the new server.
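
Not sure which kind you have? Assuming the old volume group was renamed to ‘pve-old’ and the old data volume is mounted at /mnt/old-data as above, a quick look in both places should make it obvious.

lvs pve-old # raw disks show up as vm-<VMID>-disk-N logical volumes

ls -lh /mnt/old-data/images/ # file-based disk images live in
# per-VMID folders on the data volume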

In the case of raw disks, all we really need to do is copy the logical volume that was provisioned in the old volume group over to the new volume group. To do this we need to create a new logical volume on the new volume group with the same size and name, then use dd to copy the raw data from the old logical volume to the new one.

lsblk # look at all our physical and logical devices to
# make sure we use the right devices
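
If you don’t remember the exact size of the original disk, the old logical volume can tell you; the -L value in the lvcreate below should match it.

lvs --units g pve-old # list the old logical volumes with their
# exact sizes so the new one can be created to match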

lvcreate -n vm-100-disk-1 -L 10G pve # Create the new logical 
# volume with the same name and size

dd if=/dev/pve-old/vm-100-disk-1 of=/dev/pve/vm-100-disk-1 bs=4096
# use dd to make a bit-for-bit copy of the LV data, ! Caution !
# make sure if= and of= point at the right devices before running
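
If you want some reassurance that the copy completed cleanly, comparing the two devices is a slow but simple sanity check; it is only meaningful if the new logical volume is exactly the same size as the old one.

cmp /dev/pve-old/vm-100-disk-1 /dev/pve/vm-100-disk-1 # byte-for-byte
# comparison of the two volumes, no output means they match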

If you need to deal with VM disk image files, it’s pretty straightforward as well. Just go into the old-data directory we mounted and copy the disk image files over to the new storage location. These can be in various formats like qcow2 or vmdk, but the process is the same.

mkdir /var/lib/vz/images/100 # create the folder for your VM

cp /mnt/old-data/images/100/vm-100-disk-1.qcow2  \
/var/lib/vz/images/100/vm-100-disk-1.qcow2 
# copy over the disk image to appropriate local folder
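
If Proxmox doesn’t immediately notice the copied image, or the disk size recorded in the config looks off, a storage rescan (here assuming VMID 100) should bring the config back in line with what is actually on disk.

qm rescan --vmid 100 # rescan storage and update the disk
# entries in the VM config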

Getting VMs to Boot

At this point Proxmox should have picked up the VM configuration files added to the local node’s configuration directory, and the VMs should be visible in the web UI and/or via qm at the command line. The very last step in recovering Proxmox VMs is making sure each VM’s configuration is correct so it can boot up. I could probably do the research for a whole blog post on this topic alone, so it’s difficult to provide detailed examples of everything that could be wrong with a given configuration file. Since the configuration files came from working VMs, the most likely issues are either the device statements or the boot order.
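
A quick way to check that the recovered VMs are registered and to attempt a first boot from the command line:

qm list # the recovered VM IDs should show up here

qm config 100 # review the recovered configuration for VM 100

qm start 100 # attempt to boot the VM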

The boot order can be forced to a given device, such as ide0, with the bootdisk option. Disks are registered in the config file as device statements like 'ide0: local:vm-100-disk-1'. Make sure your device statements are correct and your CD-ROM drive is empty, such as 'ide2: none,media=cdrom', to avoid boot issues. For other errors, be sure to review the documentation or ask in the comments below.
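
For reference, here is a purely illustrative example of what a simple recovered config might look like once it boots cleanly; the VM ID, storage name, disk file, and MAC address are all made up for the example.

# /etc/pve/qemu-server/100.conf
bootdisk: ide0
cores: 1
ide0: local:100/vm-100-disk-1.qcow2,size=10G
ide2: none,media=cdrom
memory: 2048
name: recovered-vm
net0: e1000=DE:AD:BE:EF:00:01,bridge=vmbr0
ostype: l26
sockets: 1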
