[archy@hyv01 ~]$ sudo yum install qemu-kvm libvirt libvirt-python libguestfs-tools virt-install tuned
[archy@hyv01 ~]$ sudo systemctl enable libvirtd.service
[archy@hyv01 ~]$ sudo systemctl start libvirtd.service
[archy@hyv01 ~]$ sudo tuned-adm profile virtual-host
NOTE: tuned is optional, but it can squeeze out a little more performance by optimizing the host for virtualization workloads. More info can be found in 'man tuned-profiles'.
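To confirm that libvirtd is running and the profile took effect, you can check with the usual tools (not part of the original steps, just a quick sanity check):
[archy@hyv01 ~]$ sudo systemctl status libvirtd.service
[archy@hyv01 ~]$ sudo tuned-adm active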
Next, the network configuration. I'll use four interfaces for the bond, which in turn will be enslaved to the bridge that the VMs on this host connect to.
The network interfaces for the bond will be:
- ens2f0
- ens2f1
- ens2f2
- ens2f3
[archy@hyv01 ~]$ sudo nmcli connection add type bond con-name bond0 ifname bond0 mode 802.3ad
[archy@hyv01 ~]$ sudo nmcli connection add type ethernet con-name bond0-ens2f0 ifname ens2f0 master bond0
[archy@hyv01 ~]$ sudo nmcli connection add type ethernet con-name bond0-ens2f1 ifname ens2f1 master bond0
[archy@hyv01 ~]$ sudo nmcli connection add type ethernet con-name bond0-ens2f2 ifname ens2f2 master bond0
[archy@hyv01 ~]$ sudo nmcli connection add type ethernet con-name bond0-ens2f3 ifname ens2f3 master bond0
[archy@hyv01 ~]$ sudo nmcli connection add type bridge con-name br0 ifname br0
[archy@hyv01 ~]$ sudo nmcli connection mod bond0 connection.master br0 connection.slave-type bridge
[archy@hyv01 ~]$ sudo nmcli connection mod br0 ipv4.address 172.31.0.250/24
[archy@hyv01 ~]$ sudo nmcli connection mod br0 ipv4.dns 9.9.9.9
[archy@hyv01 ~]$ sudo nmcli connection mod br0 +ipv4.dns 1.1.1.1
[archy@hyv01 ~]$ sudo nmcli connection mod br0 ipv4.gateway 172.31.0.254
You can raise the interfaces yourself or restart the network.service; note that this might take a few seconds since the dependency chain across all the interfaces is quite long.
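For example, to bring everything up and check the bond state (a minimal sketch; the connection names match the ones created above, and /proc/net/bonding/bond0 is the standard place to check the 802.3ad status):
[archy@hyv01 ~]$ for con in bond0-ens2f0 bond0-ens2f1 bond0-ens2f2 bond0-ens2f3 bond0 br0; do sudo nmcli connection up "$con"; done
[archy@hyv01 ~]$ cat /proc/net/bonding/bond0
Next up, adding VLANs: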
[archy@hyv01 ~]$ sudo nmcli connection add type vlan con-name bond0.100 ifname bond0.100 dev bond0 id 100
[archy@hyv01 ~]$ sudo nmcli connection add type bridge ifname br0.100 con-name br0.100
[archy@hyv01 ~]$ sudo nmcli connection mod bond0.100 connection.master br0.100 connection.slave-type bridge
[archy@hyv01 ~]$ sudo nmcli connection mod br0.100 ipv4.method link-local
[archy@hyv01 ~]$ sudo nmcli connection mod br0.100 ipv6.method ignore
I won't give this VLAN interface an IP address since it's not supposed to be reachable over IP.
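To make sure the VLAN and its bridge came up as intended, something like this should do (standard nmcli/iproute2 commands, not from the original walkthrough):
[archy@hyv01 ~]$ sudo nmcli connection up bond0.100
[archy@hyv01 ~]$ sudo nmcli connection up br0.100
[archy@hyv01 ~]$ ip -d link show bond0.100
Repeat the same pattern for every additional VLAN ID you need.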
Next up: the storage pools. Before creating them, I'd recommend rebooting the server just to verify that everything comes back up as expected. If it does, let's continue with the storage pools and disk setup.
If you are using a second disk (which I highly encourage), first set up the physical volumes and volume groups.
I have already created a partition on my second disk (/dev/sdb) with the Linux LVM partition type: 8e if you are using fdisk, 8e00 if you are using gdisk.
Create the physical volume, volume groups and logical volumes:
[archy@hyv01 ~]$ sudo pvcreate /dev/sdb1
[archy@hyv01 ~]$ sudo vgcreate vg_kvmstore /dev/sdb1
[archy@hyv01 ~]$ sudo lvcreate -n lv_vm_iso -L 100G vg_kvmstore
[archy@hyv01 ~]$ sudo lvcreate -n lv_vm_images -L 1T vg_kvmstore
[archy@hyv01 ~]$ sudo mkfs.xfs /dev/vg_kvmstore/lv_vm_iso
[archy@hyv01 ~]$ sudo mkfs.xfs /dev/vg_kvmstore/lv_vm_images
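A quick check that the LVM layout looks right before writing the fstab entries (standard LVM reporting commands):
[archy@hyv01 ~]$ sudo pvs
[archy@hyv01 ~]$ sudo vgs vg_kvmstore
[archy@hyv01 ~]$ sudo lvs vg_kvmstore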
Now that the storage is set up, add the mounts to /etc/fstab. I'll use the following lines:
/dev/mapper/vg_kvmstore-lv_vm_iso /srv/kvm/vm-iso xfs defaults 0 0
/dev/mapper/vg_kvmstore-lv_vm_images /srv/kvm/vm-images xfs defaults 0 0
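The mount points have to exist before mounting; in case you haven't created them yet:
[archy@hyv01 ~]$ sudo mkdir -p /srv/kvm/vm-iso /srv/kvm/vm-images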
and mount them all:
[archy@hyv01 ~]$ sudo mount -a
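A quick check that both filesystems are mounted where they should be:
[archy@hyv01 ~]$ df -h /srv/kvm/vm-iso /srv/kvm/vm-images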
If you are using SELinux, you'll also have to set the context for each path:
[archy@hyv01 ~]$ sudo semanage fcontext -a -t virt_content_t '/srv/kvm/vm-iso(/.*)?'
[archy@hyv01 ~]$ sudo semanage fcontext -a -t virt_image_t '/srv/kvm/vm-images(/.*)?'
[archy@hyv01 ~]$ sudo restorecon -Rv /srv/kvm
[archy@hyv01 ~]$ sudo chown -R qemu:qemu /srv/kvm
[archy@hyv01 ~]$ sudo chmod -R 1755 /srv/kvm
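You can verify the labels and ownership afterwards (just a sanity check, not from the original post):
[archy@hyv01 ~]$ ls -lZd /srv/kvm/vm-iso /srv/kvm/vm-images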
You can now go ahead and sync any ISOs to that location. I'll use rsync for this task:
[archy@hyv01 ~]$ sudo rsync -vrlptgoDxh --progress -e ssh archy@strgnas.archyslife.lan:/storage/data/iso-files/ /srv/kvm/vm-iso/
If you have a Linux client that runs virt-manager and want to manage your hypervisor that way, you'll have to copy your SSH public key to a user permitted to manage libvirt. I'll use the root user for simplicity.
[archy@stealth-falcon ~]$ ssh-copy-id root@hyv01.archyslife.lan
Password:
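Before opening the GUI, you can verify that the remote connection works from the client's shell (assuming virsh is installed there):
[archy@stealth-falcon ~]$ virsh -c qemu+ssh://root@hyv01.archyslife.lan/system list --all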
Now open the Virt-Manager application and click:
File --> Add Connection
--> Set Hypervisor to 'QEMU/KVM'
--> Tick the box that says 'connect to remote host over SSH'
--> Set the Username to 'root'
--> Set the Hostname to your hypervisor's hostname, in my case 'hyv01.archyslife.lan'
--> Tick the box that says 'autoconnect'
--> Click on 'Connect' and you're done.
From the Virt-Manager GUI you can define your storage pools. I'll use the GUI for that since it's easier and faster than writing the XML file and running virsh pool-define, virsh pool-create and virsh pool-autostart, but that's just my preference. If you prefer the CLI, see the rough sketch below.
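This is roughly what the CLI route would look like for the images pool (the pool name 'vm-images' is my own choice; I'm using pool-define-as here so there's no XML to hand-write, which differs slightly from the plain pool-define mentioned above):
[archy@hyv01 ~]$ sudo virsh pool-define-as vm-images dir --target /srv/kvm/vm-images
[archy@hyv01 ~]$ sudo virsh pool-build vm-images
[archy@hyv01 ~]$ sudo virsh pool-start vm-images
[archy@hyv01 ~]$ sudo virsh pool-autostart vm-images
Anyway, that completes my setup for a headless KVM hypervisor with bridging, bonding and LVM.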
Feel free to comment and / or suggest any topics.