
KVM - Headless Server setup with Bonding, Bridging and LVM on AlmaLinux 8


Since stable releases have been available for Rocky and Alma for quite some time, I've decided to write a quick step-by-step guide to getting a KVM hypervisor set up on EL8.

I'll start with a minimal install of AlmaLinux 8 with the latest updates applied.

First, install the required packages to make the host a hypervisor:

 [archy@hyv01 ~]$ sudo dnf -d 2 -y --refresh module enable virt
 [archy@hyv01 ~]$ sudo dnf -d 2 -y --refresh install qemu-kvm libvirt libguestfs-tools virt-install tuned swtpm cockpit cockpit-machines 
 [archy@hyv01 ~]$ sudo systemctl enable --now libvirtd.service tuned.service 
 [archy@hyv01 ~]$ sudo tuned-adm profile virtual-host  

NOTE: Tuned is optional but might give you just a little bit more optimization for your workload.
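
If you want to double-check that the host is actually ready to run hardware-accelerated guests, libvirt ships a small validation tool. It lives in libvirt-client, which should already be on the system after the install above since virsh comes from the same package:

 [archy@hyv01 ~]$ sudo virt-host-validate qemu  

All checks should report PASS, or at worst a WARN you can live with.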

Next up, network configuration. I'll create a bond with 4 NICs, which can then be used for VLANs and bridges.

 [archy@hyv01 ~]$ sudo nmcli connection add type bond con-name bond0 ifname bond0 mode 802.3ad   
 [archy@hyv01 ~]$ sudo nmcli connection add type ethernet con-name bond0-ens2f0 ifname ens2f0 master bond0  
 [archy@hyv01 ~]$ sudo nmcli connection add type ethernet con-name bond0-ens2f1 ifname ens2f1 master bond0  
 [archy@hyv01 ~]$ sudo nmcli connection add type ethernet con-name bond0-ens2f2 ifname ens2f2 master bond0  
 [archy@hyv01 ~]$ sudo nmcli connection add type ethernet con-name bond0-ens2f3 ifname ens2f3 master bond0  
 [archy@hyv01 ~]$ sudo nmcli connection add type bridge con-name br0 ifname br0   
 [archy@hyv01 ~]$ sudo nmcli connection mod bond0 connection.master br0 connection.slave-type bridge  
 [archy@hyv01 ~]$ sudo nmcli connection mod br0 ipv4.method manual ipv4.addresses 172.31.0.250/24  
 [archy@hyv01 ~]$ sudo nmcli connection mod br0 ipv4.dns 9.9.9.9  
 [archy@hyv01 ~]$ sudo nmcli connection mod br0 +ipv4.dns 1.1.1.1  
 [archy@hyv01 ~]$ sudo nmcli connection mod br0 ipv4.gateway 172.31.0.254  
 [archy@hyv01 ~]$ sudo nmcli connection add type vlan con-name bond0.100 ifname bond0.100 dev bond0 id 100  
 [archy@hyv01 ~]$ sudo nmcli connection add type bridge con-name br0.100 ifname br0.100  
 [archy@hyv01 ~]$ sudo nmcli connection mod bond0.100 connection.master br0.100 connection.slave-type bridge  
 [archy@hyv01 ~]$ sudo nmcli connection mod br0.100 ipv4.method link-local  
 [archy@hyv01 ~]$ sudo nmcli connection mod br0.100 ipv6.method ignore  

I'm not going to give this bridge an IP address since it's not supposed to be reachable from the network.
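
Once the connections are up, it doesn't hurt to verify that the bond negotiated LACP correctly and that the slaves and bridges are attached the way they should be:

 [archy@hyv01 ~]$ nmcli device status  
 [archy@hyv01 ~]$ cat /proc/net/bonding/bond0  

The second command should list all four member interfaces and show 'IEEE 802.3ad Dynamic link aggregation' as the bonding mode.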

Now, let's create storage pools. Start by creating the mountpoints and logical volumes:

 [archy@hyv01 ~]$ sudo mkdir -p /var/kvm/vm-images  
 [archy@hyv01 ~]$ sudo mkdir /var/kvm/vm-iso  
 [archy@hyv01 ~]$ sudo pvcreate /dev/sdb1  
 [archy@hyv01 ~]$ sudo vgcreate vg_data /dev/sdb1  
 [archy@hyv01 ~]$ sudo lvcreate -n lv_vm_images -L 2T vg_data  
 [archy@hyv01 ~]$ sudo lvcreate -n lv_vm_iso -L 100G vg_data  
 [archy@hyv01 ~]$ sudo mkfs.xfs /dev/vg_data/lv_vm_images  
 [archy@hyv01 ~]$ sudo mkfs.xfs /dev/vg_data/lv_vm_iso  
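
A quick sanity check on the LVM layout before writing the fstab entries:

 [archy@hyv01 ~]$ sudo vgs vg_data  
 [archy@hyv01 ~]$ sudo lvs vg_data  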

With the volumes and mountpoints ready to go, persist them in fstab. I'll use these entries:

 /dev/mapper/vg_data-lv_vm_images /var/kvm/vm-images xfs defaults 0 0  
 /dev/mapper/vg_data-lv_vm_iso /var/kvm/vm-iso xfs defaults 0 0  

With the fstab finished, everything should be mountable:

 [archy@hyv01 ~]$ sudo mount -a  
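
If everything went well, both filesystems should now show up mounted:

 [archy@hyv01 ~]$ df -h /var/kvm/vm-images /var/kvm/vm-iso  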

If you are using SELinux, which I highly recommend, set the appropriate context for each path:

 [archy@hyv01 ~]$ sudo semanage fcontext -a -t virt_content_t '/var/kvm/vm-iso(/.*)?'  
 [archy@hyv01 ~]$ sudo semanage fcontext -a -t virt_image_t '/var/kvm/vm-images(/.*)?'  
 [archy@hyv01 ~]$ sudo restorecon -Rv /var/kvm  
 [archy@hyv01 ~]$ sudo chown -R qemu:qemu /var/kvm  
 [archy@hyv01 ~]$ sudo chmod -R 1755 /var/kvm  
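
To confirm the labels actually got applied:

 [archy@hyv01 ~]$ ls -Zd /var/kvm/vm-images /var/kvm/vm-iso  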

Now let's create the actual storage pools using virsh:

 [archy@hyv01 ~]$ sudo virsh pool-define-as --name 'vm-images' --type dir --target '/var/kvm/vm-images'  
 [archy@hyv01 ~]$ sudo virsh pool-define-as --name 'vm-iso' --type dir --target '/var/kvm/vm-iso'  
 [archy@hyv01 ~]$ sudo virsh pool-autostart --pool 'vm-images'  
 [archy@hyv01 ~]$ sudo virsh pool-autostart --pool 'vm-iso'  
 [archy@hyv01 ~]$ sudo virsh pool-start --pool 'vm-images'  
 [archy@hyv01 ~]$ sudo virsh pool-start --pool 'vm-iso'  
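
Both pools should now report as active and set to autostart:

 [archy@hyv01 ~]$ sudo virsh pool-list --all --details  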

Now for the finishing touch, copy your SSH key to the server to enable password-less SSH authentication:

 [archy@stealth-falcon ~]$ ssh-copy-id root@hyv01.archyslife.lan  

You should now be able to connect using virt-manager and virsh from your local workstation to the server without being asked for a password.
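
A quick way to test the remote connection from the workstation (adjust the hostname to your environment):

 [archy@stealth-falcon ~]$ virsh -c qemu+ssh://root@hyv01.archyslife.lan/system list --all  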

As a GUI, I'd recommend either virt-manager or Cockpit, which can be enabled with this command:

 [archy@hyv01 ~]$ sudo systemctl enable --now cockpit.socket  

This way you have a decent web UI running on your server on port 9090 that can be used to manage VMs.
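
Depending on your firewall setup, port 9090 might still have to be opened. On a default EL8 firewalld configuration the cockpit service is usually allowed out of the box, but in case it isn't:

 [archy@hyv01 ~]$ sudo firewall-cmd --permanent --add-service=cockpit  
 [archy@hyv01 ~]$ sudo firewall-cmd --reload  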

Feel free to comment and / or suggest a topic.
