I've been working on building out my virtual host servers and figuring out how to configure them to support a Kubernetes (k8s) cluster, as well as LXD containers for some of the VMs (instead of Xen/KVM).
I'm running a cluster of host servers. I've always run RAID10 (mirrored pairs, then striped).
- 3x Dell PowerEdge R610
- 64GB RAM
- Six-drive RAID10 (hardware), 120GB SSDs
- Dual Intel Xeon X5650 @ 2.67GHz (24 cores)
- /dev/sda partitioned into 40G OS, 4G swap, the remainder (292GB) as LVM (/dev/sda5)
Configure Physical Volume
All three servers are configured/partitioned identically, with the LVM partition as /dev/sda5. Because I set up the LVM partition during installation of Ubuntu 16.04, I am skipping the fdisk/gpart steps to create an LVM partition.
The first thing we need to do is create a physical volume on the LVM partition.
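The walkthrough assumes this step has already been run; on my layout it's a single command (adjust the partition if yours differs):

```shell
# Initialize the LVM partition as a physical volume.
# /dev/sda5 matches the partition layout described above.
pvcreate /dev/sda5
```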
Create Volume Group
Next up, add the physical volume /dev/sda5 to a new volume group vg1.
Disclaimer: I may have some of the details wrong here, but I'm fairly confident in my overall understanding.
Using the vgcreate command, we'll create a new volume group named vg1 from the physical volume /dev/sda5, passing -s 128M as the physical extent size.

My understanding of the -s 128M is that it sets the physical extent size: the granularity at which space in the volume group is allocated to logical volumes, so every allocation or extension happens in multiples of this size. Considering I am creating a few LXD containers as servers, I am saying to allocate 128M at a time. If I had tons of small volumes I think I would probably keep the 4M default.

vgcreate -s 128M vg1 /dev/sda5
Now that we have a volume group vg1 created, we can create a thin pool, which will be used to create thin-provisioned logical volumes.
I am specifying a size of --size 292G because my volume group is 292.75G.
Honestly, I'm not sure exactly what --chunksize 1M does; I came across it in another guide I found via Google. From what I can tell, it sets the chunk size the thin pool uses when allocating blocks to thin volumes; the default is 64KiB.
lvcreate --type thin-pool --size 292G --chunksize 1M --thinpool tp1 vg1
Validating physical volume, volume group, and thin pool
Check the physical volume
root@s6:~# pvdisplay
  --- Physical volume ---
  PV Name               /dev/sda5
  VG Name               vg1
  PV Size               292.77 GiB / not usable 20.00 MiB
  Allocatable           yes (but full)
  PE Size               128.00 MiB
  Total PE              2342
  Free PE               0
  Allocated PE          2342
  PV UUID               FqSgcb-qqX0-TaMb-WDFp-FQDn-vm6V-lXtHTf
Check the volume groups, which should only include the newly created vg1.
root@s6:~# vgdisplay
  --- Volume group ---
  VG Name               vg1
  System ID
  Format                lvm2
  Metadata Areas        1
  Metadata Sequence No  4
  VG Access             read/write
  VG Status             resizable
  MAX LV                0
  Cur LV                1
  Open LV               0
  Max PV                0
  Cur PV                1
  Act PV                1
  VG Size               292.75 GiB
  PE Size               128.00 MiB
  Total PE              2342
  Alloc PE / Size       2342 / 292.75 GiB
  Free  PE / Size       0 / 0
  VG UUID               0Iqkkc-pTA6-pBkt-zGwB-zTYC-VWLW-napw0f
Check the logical volumes, which should only include the newly created thin pool tp1.
root@s6:~# lvdisplay
  --- Logical volume ---
  LV Name                tp1
  VG Name                vg1
  LV UUID                E69O7g-wzvP-6Nxx-49lp-IZK9-IbqK-wOINQJ
  LV Write Access        read/write
  LV Creation host, time s6, 2017-06-03 14:03:34 -0700
  LV Pool metadata       tp1_tmeta
  LV Pool data           tp1_tdata
  LV Status              available
  # open                 0
  LV Size                292.50 GiB
  Allocated pool data    0.00%
  Allocated metadata     0.09%
  Current LE             2340
  Segments               1
  Allocation             inherit
  Read ahead sectors     auto
  - currently set to     256
  Block device           252:2
Create thin-provisioned volume
lvcreate --thinpool vg1/tp1 --name volume1 --virtualsize 1G
Now check the logical volumes with lvs:
root@s6:~# lvs
  LV      VG   Attr       LSize   Pool Origin Data%  Meta%  Move Log Cpy%Sync Convert
  tp1     vg1  twi-aotz-- 292.00g             0.00   0.18
  volume1 vg1  Vwi-a-tz--   1.00g tp1         0.00
We now have a thin-provisioned logical volume volume1.
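As a quick sketch of putting volume1 to use, you could format and mount it; the ext4 filesystem and the mount point below are just my example choices, not part of the steps above:

```shell
# Format the new thin volume and mount it. The filesystem type
# and mount point here are illustrative choices.
mkfs.ext4 /dev/vg1/volume1
mkdir -p /mnt/volume1
mount /dev/vg1/volume1 /mnt/volume1
df -h /mnt/volume1
```

Because the volume is thin-provisioned, the pool's Data% in lvs only grows as blocks are actually written.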