Posts

Showing posts from August, 2023

OCP4 - configure chrony

When installing OpenShift or OKD4, all nodes use a default chrony config that doesn't necessarily work in every environment, such as firewalled environments. Here's a quick how-to on creating a custom /etc/chrony.conf for all nodes in your OpenShift cluster. There are some prerequisites, however: Butane is required, as well as access to the OpenShift cluster with administrative permissions. Start by downloading the 'butane' binary from GitHub:

[root@helper01 ~]# BUTANE_VERSION='v0.18.0'
[root@helper01 ~]# curl -4kLo '/usr/bin/butane' "https://github.com/coreos/butane/releases/download/${BUTANE_VERSION}/butane-x86_64-unknown-linux-gnu"
[root@helper01 ~]# chown root:root /usr/bin/butane
[root@helper01 ~]# chmod 755 /usr/bin/butane

Create the two Butane configs, first for the master nodes:

[root@helper01 ~]# cat 99-master-chrony.bu
variant: openshift
version: 4.13.0
metadata:
  name: 99-master-chr
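The excerpt cuts off above, so here is a minimal sketch of what the finished Butane config and rollout could look like. The NTP server ntp.example.com and the remainder of the config past 'name:' are assumptions for illustration, not the post's actual values:

[root@helper01 ~]# cat 99-master-chrony.bu
variant: openshift
version: 4.13.0
metadata:
  name: 99-master-chrony
  labels:
    machineconfiguration.openshift.io/role: master
storage:
  files:
    - path: /etc/chrony.conf
      mode: 0644
      overwrite: true
      contents:
        inline: |
          # assumed NTP server reachable from behind the firewall
          server ntp.example.com iburst
          driftfile /var/lib/chrony/drift
          makestep 1.0 3
          rtcsync
[root@helper01 ~]# butane --pretty --strict 99-master-chrony.bu -o 99-master-chrony.yaml
[root@helper01 ~]# oc apply -f 99-master-chrony.yaml

A matching 99-worker-chrony.bu with the role label set to 'worker' would presumably cover the worker nodes.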

Ansible Automation Platform - include directories on filesystem to execution environments

Ansible provides the ability to source additional files containing variables using the 'vars_files' option in a play. This is not available by default when using execution environments, but there's an easy fix to that problem hiding in the settings. The option is named 'Paths to expose to isolated jobs' and takes podman-style mount paths, which lets you mount directories and files into the execution environment. I would recommend configuring or modifying any setting in Ansible Automation Platform / AWX using the Ansible collections. Here's a quick playbook to deploy the setting:

---
- hosts: localhost
  become: false
  gather_facts: false
  vars:
    aap_protocol: https
    aap_hostname: "{{ groups['ansible_tower'][0] }}"
    aap_admin_user: admin
    aap_admin_pass: "{{ vault['aap']['admin_password'] }}"
  tasks:
    - name: create ~/.tower_cli.cfg
      ansible.builtin.copy:
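Since the excerpt cuts off before the actual task, here is a minimal sketch of setting the option through the awx.awx collection. The setting behind 'Paths to expose to isolated jobs' is AWX_ISOLATION_SHOW_PATHS; the /data/vars path is a hypothetical example, not the post's value:

    - name: expose /data/vars to execution environments  # /data/vars is a hypothetical example path
      awx.awx.settings:
        name: AWX_ISOLATION_SHOW_PATHS
        value:
          - "/data/vars:/data/vars:ro"

The module reads its credentials from ~/.tower_cli.cfg, which the playbook above creates in its first task.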

Ansible Automation Platform - tower-processes:awx-uwsgi cannot start

So I'm currently in the process of migrating the AWX deployment to AAP 2.4 on VMs @${dayjob}. While testing a single-node deployment, I found that the services that make up AAP don't start properly, resulting in a WebUI that displays '502 Bad Gateway'. So here's a quick fix on how to get it working again. Check the status using:

[root@controller ~]# automation-controller-service status

If every service is up and running, continue by checking the processes using 'supervisorctl':

[root@controller ~]# supervisorctl status

Using that command, I found 'tower-processes:awx-uwsgi' to be stuck in a constant 'STARTING' state and hammering the CPU. Here's a fix that I found to work, which requires re-deploying the machine using the 'setup.sh' script. Rerunning the setup will not delete or destroy any configuration or data, so it can be a possible solution to a lot of issues. Start by stopping the ansible controller:

[root
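The excerpt ends here; a sketch of the remaining steps, assuming the installer bundle was unpacked to /opt/aap-setup (the actual path depends on where you extracted it):

[root@controller ~]# automation-controller-service stop
[root@controller ~]# cd /opt/aap-setup   # assumed extraction directory of the AAP installer bundle
[root@controller aap-setup]# ./setup.sh
[root@controller aap-setup]# automation-controller-service status
[root@controller aap-setup]# supervisorctl status tower-processes:awx-uwsgi

After the rerun, awx-uwsgi should report RUNNING instead of looping in STARTING.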