
A PRIMER ON LVM (WHEN SETTING UP PROXMOX)

I had done a new Proxmox setup on a beefier SSD, and by default Proxmox uses LVM to handle disk 'partitioning', which had always been a little cryptic to me. That looked like a great opportunity to learn something new that could be useful in the future, so I took some time to understand it and consolidate it here for my future self when the need arises.

2023 September 30

First off, what is LVM?

LVM stands for Logical Volume Management. It’s a method used for disk management on Linux-based systems that provides a higher level of abstraction over traditional partitioning.

Here’s how LVM works and some of its advantages:

  1. Components:

    • Physical Volume (PV): This is the actual disk or partition that you initialize for use within LVM.
    • Volume Group (VG): This is a pool of disk space that’s made up from one or multiple physical volumes.
    • Logical Volume (LV): Once you’ve got a volume group, you can carve out chunks of space from it to create logical volumes. These LVs can be thought of as virtual partitions (see the command sketch after this list for how the three layers fit together).
  2. Advantages:

    • Flexibility: You can easily resize logical volumes. If you run out of space on one LV, you can reduce the size of another and reallocate that space where it’s needed. This is much more difficult with traditional partitions.

    • Spanning Over Multiple Drives: With LVM, you can create a volume group that spans multiple physical drives. This allows you to present multiple disks as one logical volume to the system.

    • Snapshots: LVM provides the ability to take snapshots of any logical volume. This can be incredibly useful for backups, testing, or any scenario where you’d want to revert your system back to a previous state.

    • Dynamic Disk Resizing: If you add a new disk to your system, you can easily extend a volume group to include the new disk. Conversely, you can reduce the size of a volume group if a disk is removed (although you must ensure that the space is unallocated).

    • Encapsulation: With LVM, you can move logical volumes to different physical disks transparently to any applications using them.

    • Consistent Naming: Without LVM, if you add or remove drives, the device naming (e.g., /dev/sda, /dev/sdb) can change. With LVM, the logical volume names remain consistent, regardless of the underlying physical disk structure.

  3. Comparison to Traditional Partitioning:

Traditional partitioning involves carving up a disk into chunks and formatting those chunks for use. Once created, resizing, moving, or manipulating these partitions can be quite challenging and often requires third-party tools or risky operations. LVM abstracts the hard disk at a higher level, making these tasks straightforward and integrated into the Linux system.
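
To make those three layers concrete, here is a minimal command sketch of how they would be created from scratch. The disk /dev/sdb and the names vg_example and lv_data are placeholders for illustration only, not part of my Proxmox setup:

$ sudo pvcreate /dev/sdb                                  # initialize the disk as a Physical Volume
$ sudo vgcreate vg_example /dev/sdb                       # create a Volume Group backed by that PV
$ sudo lvcreate -L 50G -n lv_data vg_example              # carve a 50GB Logical Volume out of the VG
$ sudo mkfs.ext4 /dev/vg_example/lv_data                  # format it and use it like any other block device
$ sudo lvextend -L +10G -r /dev/vg_example/lv_data        # later on, grow it by 10GB (-r also resizes the filesystem)
$ sudo lvcreate -s -L 5G -n lv_data_snap /dev/vg_example/lv_data   # take a snapshot, e.g. before a risky change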

In conclusion, while LVM introduces a bit more complexity in terms of setup and understanding, the benefits it provides in terms of flexibility, scalability, and features make it a favored choice for many Linux administrators and users, especially in dynamic environments or where storage needs might change over time.

How to know all volume groups I have?

For that I can use the command vgs. Here is an example of its output:


$ vgs

  VG  #PV #LV #SN Attr   VSize  VFree
  pve   1   4   0 wz--n- <1.82t 45.00g

That means:

  • VG: The name of the volume group, which in this case is pve.

  • #PV: Number of Physical Volumes in this volume group. In that case, I have 1 physical volume.

  • #LV: Number of Logical Volumes in this volume group. In that case, I have 4 logical volumes.

  • #SN: Number of snapshots. I have 0, so no snapshots have been taken.

  • Attr: Attributes of the volume group. wz--n- has specific meanings: w: the volume group is writable; z: the volume group is resizable; n: the VG is not clustered. The other characters have specific meanings too, but in this case they’re shown as -, which means the default or no special attribute is set.

  • VSize: Total size of the volume group. This volume group has a total size of <1.82t (just under 1.82 terabytes).

  • VFree: Unallocated space within the volume group. It has 45.00g (45 gigabytes) of free space that hasn’t been allocated to any logical volume.
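
A couple of related commands can come in handy here: vgdisplay prints the same information for a volume group in a longer, more verbose form, and the --units flag makes vgs report exact sizes in a unit of my choice instead of the rounded values above:

$ vgdisplay pve        # verbose, multi-line view of the pve volume group
$ vgs --units g pve    # same one-line summary, but with sizes in gigabytes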

And how to get details on the logical volumes on a volume group?

For that I can use the command lvs. Here is an example of its output:


$ lvs

  LV   VG  Attr       LSize   Pool Origin Data%  Meta%  Move Log Cpy%Sync Convert
  data pve twi-a-tz-- 915.33g             0.00   0.21
  nfs  pve -wi-ao---- 755.00g
  root pve -wi-ao----  96.00g
  swap pve -wi-ao----  32.00g

  • LV: Name of the logical volume. I have four logical volumes: data, nfs, root, and swap.

  • VG: The volume group the logical volume belongs to. All of my LVs belong to the pve volume group.

  • Attr: Attributes of the logical volume. These are specific to each LV type and setup. Here’s a brief breakdown: the data LV has twi-a-tz--, indicating it’s a thin pool with some other attributes. The other LVs (nfs, root, and swap) have -wi-ao----, indicating they are standard writable LVs that are currently active and opened.

  • LSize: Size of the logical volume. My data LV is 915.33g, nfs is 755.00g, root is 96.00g, and swap is 32.00g.

  • Pool and Origin: These relate to thin provisioning and cloning/snapshotting: a thin LV would list the pool it lives in here, and a snapshot would list its origin. In my case both columns are empty, since data is the thin pool itself and the other LVs are plain volumes.

  • Data% and Meta%: Usage percentage for data and metadata for thin pools. Only the data LV has these attributes, showing 0.00% data usage and 0.21% metadata usage.

  • The other columns (Move, Log, Cpy%Sync, and Convert) are not used in my setup and can relate to other advanced LVM features like mirroring, moving, and converting LVs.
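
One side note on the data thin pool: plain lvs hides the internal volumes that back it. Adding the -a flag should also list those hidden volumes (their names appear in brackets, such as the pool’s data and metadata areas):

$ lvs -a pve    # include hidden/internal logical volumes in the listing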

So, what is the TL;DR of my current setup on LVM at the “new” Proxmox Server?

In summary, I have a single volume group “pve” with a total size of just under 1.82 TB and 45 GB free space (from the previous vgs command). Within that VG, I have 4 logical volumes:

  • a thin pool named data
  • a standard logical volume for NFS (nfs), which I created manually out of the free space (VFree) in the volume group
  • the system root partition (root)
  • swap space (swap).

And what about this “thin pool” mentioned?

It is a storage construct that allows over-allocating disk space, so logical volumes can appear larger than the physical space actually backing them (a short command sketch follows the lists below).

Advantages:

  • Storage Efficiency: Only allocates space based on actual usage.
  • Flexibility: Permits dynamic space allocation and resizing.
  • Space-efficient Snapshots: Thin provisioning optimizes snapshot storage consumption.

Disadvantages:

  • Monitoring Requirement: Requires constant oversight to prevent running out of actual space.
  • Potential Performance Issues: May have overhead in certain workloads due to management and block allocation processes.
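
For reference, this is roughly how a thin pool and a thin volume inside it would be created by hand. The names and sizes are illustrative only; in my case the Proxmox installer already created the data pool for me:

$ lvcreate -L 100G -T pve/thinpool_example                      # create a 100GB thin pool inside the pve VG
$ lvcreate -V 250G -T pve/thinpool_example -n thinvol_example   # a thin LV that appears as 250GB but only consumes space as data is written
$ lvs pve                                                       # the Data% column is what needs monitoring to avoid running out of real space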

And to finish for now… how can I create a new volume on unallocated (VFree) space on a Volume Group?

When I did the Proxmox setup, on the disk setup step I clicked on “Options” and changed the “minfree” option to 800.

That allowed the Proxmox installer to leave 800GB of unallocated space on the Volume Group “pve”, which is the sole default one. It used the remaining space for its installation and data storage (e.g., VMs, containers), but it didn’t touch the 800GB I’d reserved.

My plan was to use this 800GB of space to create a new logical volume named “nfs”, which I would format as EXT4 and mount via /etc/fstab so I could create NFS shares to expose on my local network.

Here are the steps I took to do that:

  1. Created a New Logical Volume:

I created a new logical volume of 750GB:

sudo lvcreate -L 750G -n nfs pve - this command created a logical volume named nfs of size 750GB in the pve volume group.

After finishing, I discovered it could have been at least 45GB larger, but that unallocated space may prove useful in the future, so I will keep it that way for now.

  2. Formatted the Logical Volume as EXT4: sudo mkfs.ext4 /dev/pve/nfs

  3. Created the Mount Point: sudo mkdir /nfs

  4. Mounted the Logical Volume: sudo mount /dev/pve/nfs /nfs

  5. Updated /etc/fstab to mount on boot:

To ensure that my logical volume mounted automatically at boot, I got the UUID of the new logical volume: sudo blkid /dev/pve/nfs

This gave me an output like: /dev/pve/nfs: UUID="some-long-number" TYPE="ext4"

Then I edited /etc/fstab and added a line at the end of the file: UUID=some-long-number /nfs ext4 defaults 0 2

  6. Tested the Setup, simulating a remount process:
sudo umount /nfs
sudo mount -a

Since there were no errors, the logical volume was correctly set up and would mount at /nfs on every boot.
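
One closing note: since logical volume names stay stable (the Consistent Naming advantage mentioned earlier), an fstab entry pointing at the device path directly should work just as well as the UUID form, for example:

/dev/pve/nfs /nfs ext4 defaults 0 2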