Don't Modify Your LVM Root Logical Volume (and How to Fix it When You Do)
Lucas Teixeira
published Dec. 25, 2024, 8:47 p.m.
I am currently in the process of reincarnating an old secondary gaming PC from high school (that's a sentence that dates me) and giving it new life as the beginning of my homelab/home infrastructure/home server/whatever you want to call it setup. As you can imagine, my starting point here storage-wise is a few cobbled-together drives I had lying around:
- One 3.5" 1TB HDD with an old Windows 10 install on it
- One 2.5" 500GB HDD that was in an external exclosure for... a while
- One 2.5" 128GB SATA SSD (this is the OS drive)
While going through initial setup, I figured I'd leave all but the SSD unplugged. I had always done that when setting up Windows systems, since it made it easy to figure out which drive was which and left no room for error. At this point, I had never heard of today's main character, Logical Volume Manager (LVM), but I'm not sure I would have done anything differently yet even if I had. This will make sense later, when the problem becomes clear.
During installation, I was asked if I wanted to set up LVM (Logical Volume Manager). A bit of Googling (Perplexitizing just doesn't have the same ring...) later I was convinced - easier drive management, particularly when adding extra drives later, or wanting to move data around. Exactly the kind of thing I'm liable to do with a risen phoenix of a home server. I went about the rest of the install with no major issues, and before I knew it I had my own server that would automatically boot into Debian on startup and be securely available over SSH.
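For what it's worth, if you ever want to see what the installer actually built, LVM's reporting commands lay the whole thing out (a quick sketch - the names below assume a default Debian install like mine, so yours may differ):
sudo pvs   # physical volumes: the disks/partitions LVM manages
sudo vgs   # volume groups: the storage pools built from those physical volumes
sudo lvs   # logical volumes: the "partitions" carved out of each pool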
Well, it would if I had a keyboard plugged in. If I didn't, it would still boot to Windows, despite the Debian drive being first in the boot order and the Windows drive being disabled as a boot device in the motherboard's BIOS. I figured that's a mystery for another day, since I'm planning on completely wiping out the boot partition/filesystem on that drive anyway, but it's still odd. If you know what could have caused this, please let me know!
Adding Drives
Now that it was time to add drives to the system, I realized I'd set it up with LVM and had no clue how to do that. If you look around online, you may come across something like the following:
Unmount and Prepare Previously Used Disk
- Identify the disk with fdisk or lsblk
fdisk -l
lsblk [-S]
- Unmount existing partitions (X=drive letter, Y=partition)
umount /dev/sdXY
- Wipe the partition table (X=drive letter)
wipefs --all /dev/sdX
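As a sanity check (my addition, not part of the guides I found), lsblk -f should now show the wiped device with no filesystem signatures left:
lsblk -f /dev/sdX   # the FSTYPE column should be empty for the wiped drive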
Set up in LVM (courtesy of Stack Overflow, with minor edits for continuity)
- Convert your new disk to a physical volume (PV) (in this case, the new disk is 'sdb').
sudo pvcreate /dev/sdb
- Add the physical volume to the volume group (VG) via 'vgextend'.
sudo vgextend debian-vg /dev/sdb
- Allocate the physical volume to a logical volume (LV) (extend the volume size by your new disk size).
sudo lvextend -l +100%FREE /dev/debian-vg/root
- Resize the file system on the logical volume so it uses the additional space.
sudo resize2fs /dev/debian-vg/root
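If you run those commands, checking the volume group and the root filesystem will show both reporting the combined size - a quick verification sketch, using the same names as the example above:
sudo vgs debian-vg   # VSize now includes the new disk
df -h /              # the root filesystem shows the enlarged capacity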
The Problem
This is probably obvious to anyone familiar with LVM - but notice that this series of commands adds the new drive to the default volume group (debian-vg in the example) and extends the root logical volume to use that entire volume group! This means that your new drive and your old drive will essentially act like one big drive as far as storage space is concerned, without needing anything else mounted. As soon as I did this, I realized it was not what I wanted - I didn't want my system spreading data across an old HDD and an old SSD as if they were the same volume. To add insult to injury, I quickly found out that while you can resize the root filesystem online to enlarge it, reverting that is not so simple, and cannot be done while booted into the system. Fun.
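You can actually see the spread by asking lvs which devices back each logical volume (a diagnostic sketch - volume group and device names will vary by system):
sudo lvs -o +devices debian-vg   # root now lists segments on more than one device
sudo pvs -o +pv_used             # how much of each drive is actually allocated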
Earlier I said that I don't think I would have done anything differently about leaving drives unplugged during install, despite these issues. That's because, as far as I can tell, the default would have been to lump all drives into one volume group and one logical volume, and I would have found myself in the same spot.
Cleaning up the Mess
Like I said above, the deed cannot be undone while the system is live. Luckily, I still had the netinstall USB kicking around, and it comes with a live recovery environment. Any live bootable drive should work for this, but that one did the trick for me.
I should also add that these steps should be safe as long as the total used storage on the newly enlarged LV still fits on the original drive. If not, you will need to reduce storage usage until that is the case before performing these steps (a quick way to check is sketched below). However, you are still messing with your root filesystem, so the possibility for error remains. I was not worried about this since my system was freshly set up and I could lose everything on it without concern, but you might not be in the same position, so be careful.
Either way, let this serve as a warning to move carefully and go slowly, and to do so at your own risk.
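A quick way to check, under the same naming assumptions as the rest of this post, is to compare the used space on root against the original drive's capacity:
df -h /      # the "Used" column must fit comfortably on the original drive
sudo pvs     # PSize shows each physical volume's total capacity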
- Boot into a live USB or live CD
- Run the filesystem checker on the root logical volume
e2fsck -f /dev/mapper/debian--vg-root
- Shrink the root filesystem (100G here is arbitrary - just make sure it's big enough for your data and small enough to fit on the original drive)
resize2fs /dev/mapper/debian--vg-root 100G
- Shrink the logical volume to match
lvreduce -L 100G /dev/mapper/debian--vg-root
- Move any remaining extents off the new drive
pvmove /dev/sdX
- Remove the new physical volume from the volume group
vgreduce debian-vg /dev/sdX
- Resize the logical volume to use up the full volume group again (now that it only contains the physical volume we want)
lvextend -l +100%FREE /dev/mapper/debian--vg-root
- Resize the root filesystem to use the entire resized logical volume
resize2fs /dev/mapper/debian--vg-root
At this point, the new physical volume is removed from the root volume group, the volume group is resized to exclude the new drive, and the root logical volume once again spans the entirety of the corrected volume group.
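Before rebooting out of the live environment, it's worth confirming the cleanup with the same reporting commands (a verification sketch, using the example's names):
pvs                         # the new drive should now show an empty VG column
vgs debian-vg               # volume group size is back down to the original drive
lvs -o +devices debian-vg   # root is backed by a single device again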
This was very painful to figure out at 2am the same day I first learned what LVM was.
The Right Way
Now, how can we do this the right way and actually get what we want? We're going to create a new volume group and assign both HDDs to it. Then we're going to create a new logical volume spanning the entire capacity of our new volume group, and mount it to the system as available storage. This way we can ensure that no data on the primary volume ends up on a spinning disk (which would be the case if we only made a new logical volume but not a new volume group). As I add more drives in the future I may split this volume group into multiple logical volumes, but for now one will do.
Here's the full process for adding a new drive in this manner, for reference:
- Identify the disk with fdisk or lsblk
fdisk -l
lsblk [-S]
- Unmount existing partitions (X=drive letter, Y=partition)
umount /dev/sdXY
- Wipe the partition table (X=drive letter)
wipefs --all /dev/sdX
- Create the physical volume
sudo pvcreate /dev/sdX
- Create a new volume group
- "debian-vg2" is just what I'm calling the logical volume, it's not an important name, as long as it doesn't conflict with other directories in /dev
vgcreate debian-vg2 /dev/sdX
- "debian-vg2" is just what I'm calling the logical volume, it's not an important name, as long as it doesn't conflict with other directories in /dev
- [Optional] Add extra drives to volume group
- Repeat steps 1-4 for each drive, but instead of the command in step 5, run (X=drive letter)
vgextend debian-vg2 /dev/sdX
- Create a new Logical Volume
- "data" is just what I'm calling the logical volume, it's not an important name
lvcreate -l +100%FREE -n data debian-vg2
- "data" is just what I'm calling the logical volume, it's not an important name
- Create a filesystem on the new logical volume
sudo mkfs.ext4 /dev/mapper/debian--vg2-data
- Get the UUID of the LVM block device (find the line with the name of the LV)
blkid
- To mount the drive, use that UUID to create this entry at the bottom of /etc/fstab (replace /mnt/data with wherever you want to mount the new volume, creating that directory first)
UUID=c66e41a9-0697-43d9-b8ba-66c9b53743c3 /mnt/data ext4 defaults 0 2
- If you aren't sure what to edit the file with, I recommend either nano or vi. I have personally always found nano unintuitive (neovim btw), but many find it more beginner-friendly.
- Test the updated /etc/fstab and make sure it mounts without errors
mount -a
- Verify it mounted as expected (look for the LV name + expected capacity)
df -h
- My system recommended I reload the systemd manager configuration, so I did so
systemctl daemon-reload
And finally, we're done!
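And for one last bit of peace of mind, the final layout is easy to eyeball (names here match my example; yours will vary):
lsblk      # the data LV should appear under the new drives, mounted at /mnt/data
sudo lvs   # root and data logical volumes, each in their own volume group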
What Did We Learn Here Today?
Maybe that it's a bad idea to try to set up a server at 2am the night before leaving for Christmas vacation using a method of disk management you've never used before? Or maybe that learning things often comes with a little pain and that's part of the fun. I know I'll remember this better than I ever would have reading it in a textbook...
Either way, I learned a little bit about LVM. And I got my server up. We live to fight another day! Onwards and upwards.
Disclaimer
Setting up server infrastructure at home is something I've wanted to do for a while, but as a software engineer by training and day job, there's a lot of system administration and site reliability work that, frankly, I don't have to worry about very often. Even at a small company like mine - need a new mounted disk? More memory? A new configuration Salted? "Hey <devops guy>, can I get <thing>?"
That is to say, much of what I write in this realm comes from a technical but certainly non-expert perspective. The aim of writing this is a combination of documenting my journey, warning my future self about this mistake, preventing others from feeling the pain I felt, and improving my perspective through reflection and feedback. So take it all with a grain of salt. Or use it as a guide, at your own risk!