
Posts

Showing posts from May, 2021

Ansible - timesync made easy

Keeping the time in sync for all your hosts across multiple major versions of one operating system can be challenging. Luckily, there's the linux-system-roles.timesync role, which will take care of pretty much everything using Ansible.

First, let's add the roles/requirements.yml file and install the role:

[archy@ansible01 ~]$ vim roles/requirements.yml
---
- src: https://github.com/linux-system-roles/timesync.git
  scm: git
  name: linux-system-roles.timesync
...
[archy@ansible01 ~]$ ansible-galaxy role install -r roles/requirements.yml -p ./roles/ --force

Now that the role is present, create a playbook that uses it and runs on every host in your inventory. I'll just extend my base playbook, which does the basic configuration on all systems:

[archy@ansible01 ~]$ vim deploy_base.yml
- hosts: all:!ipa[0-9][0-9].archyslife.lan
  user: ansible-executor
  become: true
  gather_facts: true
  vars:
    timesync_ntp_servers: ...
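The excerpt cuts off at the vars section. For reference, the role's README documents timesync_ntp_servers as a list of server entries; a minimal sketch could look like this (the hostnames are placeholders, not from the original post):

    timesync_ntp_servers:
      - hostname: 0.pool.ntp.org   # placeholder NTP server
        iburst: yes                # documented role option: faster initial sync
      - hostname: 1.pool.ntp.org
        iburst: yes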

RedHat Satellite - RedHat Insights is not reporting back due to 'Connection Refused'

After upgrading to RedHat Satellite 6.9.1, the insights-client was no longer uploading the inventory to cloud.redhat.com as expected. Since this was in a corporate network, outside connections had to go through a proxy.

Check the proxy settings in Satellite:

[archy@satellite ~]$ hammer settings show --name http_proxy
[archy@satellite ~]$ hammer settings show --name http_proxy_except_list

In my case, they were not set, which is odd since they were set before the upgrade. Anyway, setting them again is quick:

[archy@satellite ~]$ hammer settings set --name http_proxy --value 'http://proxy.lnx.domain.tld:3128'
[archy@satellite ~]$ hammer settings set --name http_proxy_except_list --value "['10.0.0.0/8', 'domain.tld']"

Check again on one of the hosts whether the connection works as expected now:

[archy@server ~]$ sudo insights-client --test-connection
[archy@server ~]$ sudo systemctl restart insights-client.service

If the host h...
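If the test connection still fails, it can help to rule out the proxy itself before digging deeper. A quick sketch of such a check from the affected host, reusing the proxy URL from above (the target URL is just an example endpoint):

[archy@server ~]$ curl -I --proxy http://proxy.lnx.domain.tld:3128 https://cloud.redhat.com

A 2xx or 3xx response suggests the proxy path works and the problem is in the Satellite or client configuration instead.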

Foreman - ERF42-9911 [Foreman::Exception]: Host is pending for Build

Although this error is probably rather rare, I didn't really find anything useful on the internet regarding it, so here's my shot at it, I guess.

The ERF42-9911 error appears when provisioning hosts and is most likely caused by the Kickstart finish template when it calls home to the Satellite / Foreman host to report that the host is built. Foreman / Satellite will then cancel the build token for the host and let it boot into the OS residing on the hard drive. If that call-home is unsuccessful, the host will stay in build mode, the Satellite / Foreman server will not cancel the build token, and the host ends up in a provisioning loop.

Possible cause Nr. 1: Proxy. This can happen if 'set_proxy: true' is specified in the parameters without an appropriate no_proxy parameter in place. As a result, the provisioned host will not be able to reach the Foreman / Satellite host to cancel its build mode.

Possible cause Nr. 2: DHCP Fact conflicts...
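A quick way to check cause Nr. 1 from a host stuck in build mode is to verify that the Satellite / Foreman FQDN bypasses the proxy and is reachable at all. A rough sketch, with satellite.archyslife.lan standing in for your actual Foreman / Satellite FQDN:

[root@newhost ~]# env | grep -i proxy    # no_proxy should include the Satellite / Foreman FQDN
[root@newhost ~]# curl -k -s -o /dev/null -w '%{http_code}\n' https://satellite.archyslife.lan

Anything other than an HTTP status code back from the Satellite / Foreman host points at the network path rather than the template.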

LVM - Swap hard drives for a volume group

Using LVM has multiple advantages, especially since hard drives can be added and pooled. But as hard drives age, they need to be swapped out in order to guarantee availability. Since LVM abstracts away the individual drives, this process is very easy.

First, create the physical volume:

[root@server ~]# pvcreate /dev/sdc1

Now, extend the volume group:

[root@server ~]# vgextend /dev/vg_backup /dev/sdc1

With the volume group extended, the physical extents can be moved from the old drive to the new one:

[root@server ~]# pvmove /dev/sdb1 /dev/sdc1

This process can take a while depending on drive speeds and sizes. Once the physical extents are moved, remove the old drive from the volume group:

[root@server ~]# vgreduce /dev/vg_backup /dev/sdb1

The drives are now switched and the old drive can be removed from the system. Feel free to comment and / or suggest a topic.
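As an optional follow-up not covered in the original steps, the LVM label can be wiped from the now-unused partition and the result verified before physically pulling the drive:

[root@server ~]# pvremove /dev/sdb1    # remove the LVM metadata from the old partition
[root@server ~]# pvs                   # /dev/sdb1 should no longer be listed
[root@server ~]# vgs                   # vg_backup should show the same size as before the swap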