
KVM - Fast ways to spin up VMs - Virt-Sysprep

I'm using plain KVM + Libvirt as my hypervisor of choice in my homelab since it gives me a lot of flexibility, reliability and performance. Installing VMs using traditional installers allows for customization during install, but if all you're doing is quickly spinning up a VM to test something, pre-built cloud images are probably a better choice.
These cloud images can still be customized before import using tools like virt-sysprep or cloud-init.

In this article, I'll be covering my workflow using virt-sysprep with an AlmaLinux cloud image, although any other cloud image should work.
 [root@hyv02 ~]# curl -4 -f -L -o '/var/kvm/nfs-vm-templates/almalinux-9-2025-05-22-x86_64.qcow2' 'https://raw.repo.almalinux.org/almalinux/9/cloud/x86_64/images/AlmaLinux-9-GenericCloud-9.6-20250522.x86_64.qcow2'  
 [root@hyv02 ~]# chown root:root /var/kvm/nfs-vm-templates/almalinux-9-2025-05-22-x86_64.qcow2; chmod 600 /var/kvm/nfs-vm-templates/almalinux-9-2025-05-22-x86_64.qcow2  
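Before building on top of the template, it's worth a quick sanity check that the download is actually a valid qcow2 image. A minimal sketch, assuming the path from the download step above and that qemu-img is installed (the check is skipped otherwise):

```shell
# Path of the downloaded template image (from the curl step above).
IMG='/var/kvm/nfs-vm-templates/almalinux-9-2025-05-22-x86_64.qcow2'

# Print format, virtual size and allocation; a truncated or HTML error-page
# download would fail here. Guarded so this is a no-op without qemu-img.
if command -v qemu-img >/dev/null 2>&1; then
    qemu-img info "$IMG"
fi
```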
You can now create a new qcow2 image for the VM that uses the cloud image as its backing file:
 [root@hyv02 ~]# qemu-img create -b /var/kvm/nfs-vm-templates/almalinux-9-2025-05-22-x86_64.qcow2 -f qcow2 -F qcow2 /var/kvm/nfs-vm-data/almalinux-virt-sysprep-disk1.qcow2 10G  
This creates a new overlay image that uses the cloud image as its backing file; only the changes made by the ephemeral testing VM are written to the new disk.
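Since the overlay only stores deltas, the template must stay in place for the lifetime of the VM. You can verify the relationship with qemu-img (paths assumed from the steps above; skipped if qemu-img isn't available):

```shell
# Overlay disk created for the ephemeral test VM.
OVERLAY='/var/kvm/nfs-vm-data/almalinux-virt-sysprep-disk1.qcow2'

# Walk the backing chain; the output should list the AlmaLinux cloud
# image as the backing file of the overlay.
if command -v qemu-img >/dev/null 2>&1; then
    qemu-img info --backing-chain "$OVERLAY"
fi
```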

Now, let's run virt-sysprep to do the following tasks:
  • set the hostname
  • set the timezone
  • set a root password
  • create a user and group with custom uid/gid
  • modify sudo to allow passwordless sudo for the newly created user
  • since we're using an SELinux-enabled system, set SELinux to permissive temporarily so the firstboot scripts can run
  • set a firstboot command to re-enable SELinux, relabel the whole filesystem, and reboot
  • inject a few SSH keys into my user's authorized_keys
 [root@hyv02 ~]# virt-sysprep -c 'qemu:///system' --format 'qcow2' --add '/var/kvm/nfs-vm-data/almalinux-virt-sysprep-disk1.qcow2' \  
     --hostname 'almalinux-virt-sysprep' --timezone 'Europe/Prague' --root-password 'password:provisioned-by-virt-sysprep' --password-crypto 'sha512' \  
     --run-command 'groupadd -g "10117" archy' \  
     --run-command 'useradd -m -c "provisioned by virt-sysprep" -d "/home/archy" -g "archy" -s "/bin/bash" -u "10117" archy' \  
     --run-command 'echo -e "archy ALL=(ALL:ALL) NOPASSWD:ALL" > /etc/sudoers.d/archy' \  
     --run-command 'sed -i /etc/selinux/config -E -e "s/^SELINUX=.*/SELINUX=permissive/g"' \  
     --firstboot-command 'sed -i /etc/selinux/config -E -e "s/^SELINUX=.*/SELINUX=enforcing/g"; touch /.autorelabel; systemctl reboot' \  
     --ssh-inject 'archy:string:ssh-ed25519 AAAAC3Nza1...' \  
     --ssh-inject 'archy:string:ssh-ed25519 AAAAC3Nza2...' \  
     --ssh-inject 'archy:string:ssh-ed25519 AAAAC3Nza3...' \  
     --selinux-relabel --colors --no-network  
The flags '--network' and '--no-network' tell virt-sysprep whether to allow network connections during sysprepping.
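Before booting the image, you can spot-check the result without starting the VM. A sketch using virt-cat from libguestfs (path assumed from the steps above; skipped if libguestfs isn't installed):

```shell
# The disk image that virt-sysprep just customized.
DISK='/var/kvm/nfs-vm-data/almalinux-virt-sysprep-disk1.qcow2'

if command -v virt-cat >/dev/null 2>&1; then
    # Should read SELINUX=permissive until the firstboot command flips
    # it back to enforcing and triggers the relabel.
    virt-cat -a "$DISK" /etc/selinux/config | grep '^SELINUX='
    # The sudoers drop-in created for the new user.
    virt-cat -a "$DISK" /etc/sudoers.d/archy
fi
```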

After the command finishes, the image is ready to be imported using tools like virt-install, virt-manager or cockpit-machines.

An example of installing the VM using virt-install could look like this:
 [root@hyv02 ~]# virt-install --import --hvm --noautoconsole \  
   --connect 'qemu:///system' \  
   --name 'almalinux-virt-sysprep' \  
   --memory '1024' \  
   --vcpus '2' \  
   --cpu 'host-model' \  
   --controller 'type=virtio-serial' \  
   --disk '/var/kvm/nfs-vm-data/almalinux-virt-sysprep-disk1.qcow2,bus=virtio' \  
   --network 'bridge=br-internal,model=virtio' \  
   --arch 'x86_64' \  
   --machine 'q35' \  
   --os-variant 'almalinux9' \  
   --rng '/dev/urandom'  
This will create a VM named 'almalinux-virt-sysprep' using the disk we just prepared, with 1024 MB of RAM and 2 vCPUs. The disk will be attached as a VirtIO device.
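Once imported, a quick check with virsh confirms the domain is running and has picked up an address (VM name assumed from the virt-install command above; skipped if virsh isn't available):

```shell
# Name of the freshly imported domain.
VM='almalinux-virt-sysprep'

if command -v virsh >/dev/null 2>&1; then
    # Should report 'running' shortly after the import.
    virsh -c qemu:///system domstate "$VM"
    # ARP-based address lookup; may be empty until the guest network is up.
    virsh -c qemu:///system domifaddr "$VM" --source arp
fi
```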

Feel free to comment and / or suggest a topic.
