Posts

Showing posts from April, 2020

Satellite6 - One way to fix a broken installation

Sometimes things don't go as planned, and so it went with a monthly patching job where I work. Long story short: the Satellite installation was broken, with packages from 6.7 (Foreman 1.24) left behind even though Satellite was upgraded the way it's supposed to be done, so it was partially broken. I found that these few commands fix a lot of problems with Satellite (or Foreman, for that matter), so I consider them quite helpful. First, run the installer with the respective scenario and tell it to upgrade. I'll run all of this in a tmux session in case I lose the connection or anything else happens to my workstation.
[root@satellite ~]# tmux new -s fix_satellite
[root@satellite ~]# foreman-maintain service restart
[root@satellite ~]# foreman-installer --scenario satellite --upgrade
This might take a while. Once the installer has (hopefully) finished successfully, continue by cleaning up old tasks that might cause problems.
[root@satellite ~]# foreman-rake foreman_
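As a follow-up step (my own addition, not part of the original post), foreman-maintain ships a health check that is useful for confirming the repair actually worked:

```shell
# Run foreman-maintain's built-in health checks after the installer
# has finished, to verify services and configuration are consistent.
foreman-maintain health check
```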

AWX - Deploy to dedicated network on a docker host.

By default, the installer for AWX will create the awx_default network (bridge) in Docker and assign a subnet to that bridge (such as 172.18.0.0/16). Under certain circumstances these networks can overlap with existing ones, and you might need to adjust the subnet. One solution is to create the network with your dedicated container subnet before starting the installer. A note on AWX: versions are not inter-upgradeable, so it's best practice to keep your job_templates, projects, inventories, credentials and so on in a dedicated git repository from which you can easily deploy them using Ansible or any other configuration management tool. I've also had bad luck with backing up the config and restoring it. If you have already provisioned your AWX, you'll need to start by stopping and removing the containers as well as the networks.
[root@awx ~]# docker container stop awx_task awx_web awx_redis awx_memcached awx_postgresql
[root@awx ~]# docker containe
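Creating the network ahead of the installer might look like the following sketch. The subnet and gateway here are placeholders you'd replace with a free range in your environment; the network name must match the one the installer expects (awx_default):

```shell
# Pre-create the bridge network with a dedicated, non-overlapping subnet.
# 192.168.77.0/24 is only an example; pick a range that is free for you.
docker network create \
  --driver bridge \
  --subnet 192.168.77.0/24 \
  --gateway 192.168.77.1 \
  awx_default
```

Since the network already exists when the installer runs, Docker will reuse it instead of allocating its own overlapping range.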

Satellite6 - Upgrade to 6.7 fails while starting foreman-proxy.service

If you are using external DHCP with Satellite 6, the upgrade might fail when starting the foreman-proxy service again, with this error:
Apr 16 14:44:55 satellite.example.com[11733]: /usr/share/foreman-proxy/Gemfile.in:12:in `instance_eval': You cannot specify the same gem twice with different version requirements. (Bundler::GemfileError)
Apr 16 14:44:55 satellite.example.com[11733]: You specified: rsec (< 1) and rsec (>= 0)
There are two workarounds for this problem. The first is to edit the file /usr/share/foreman-proxy/bundler.d/dhcp_remote_isc.rb and add the '<1' to the gem 'rsec' line:
[root@satellite ~]# vim /usr/share/foreman-proxy/bundler.d/dhcp_remote_isc.rb
group :dhcp_remote_isc do
  gem 'rsec', '<1'
end
gem 'smart_proxy_dhcp_remote_isc'
The second is to remove the '<1' from each gem 'rsec' line:
[root@satellite ~]# vim /usr/share/foreman-proxy/bun
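If you'd rather not open an editor, the first workaround can be scripted. This is a sketch of my own, not from the post; double-check the exact line in your file before running it in place:

```shell
# Pin rsec to '<1' in the dhcp_remote_isc bundler snippet so both
# declarations agree. Only lines ending in exactly "gem 'rsec'" match.
sed -i "s/gem 'rsec'$/gem 'rsec', '<1'/" \
  /usr/share/foreman-proxy/bundler.d/dhcp_remote_isc.rb
```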

Satellite6 - Upgrade to 6.7 fails with 'foreman-docker-remove-foreman-docker'

RedHat Satellite 6.7 was released on April 14th, so naturally it's time to upgrade. Under certain circumstances the upgrade fails at the 'foreman-docker-remove-foreman-docker' step, which is caused by the foreman-docker package. Docker is no longer supported with Foreman version 1.24, and Satellite 6.7 corresponds to Foreman version 1.24.1. A list of releases, as well as their Foreman versions, can be found here.
[root@satellite ~]# yum -y remove tfm-rubygem-foreman-docker
Start the upgrade to Satellite 6.7:
[root@satellite ~]# foreman-maintain upgrade run --target-version 6.7
The upgrade process should now run without any errors. Feel free to comment and / or suggest a topic.
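As an optional sanity check before starting the upgrade (my addition, not part of the original post), you can confirm the package is really gone:

```shell
# rpm -q reports "package ... is not installed" once the removal worked.
rpm -q tfm-rubygem-foreman-docker
```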

Command Line Fu - Send and receive files using netcat

This is more of a fun thing I've done recently: sending and receiving files between Windows, macOS or Linux directly, without the need for a Samba or NFS share. I'll be using Linux on both ends, but neither host is running sshd (or rsyncd for that matter), so scp or rsync won't work. Another option is to use netcat (nc) to listen on a port and send the file from your client. On the receiving side, set up netcat to listen on a specific port and redirect the output to a file.
$ nc -l -p 8443 > archive.tar.xz
On the sender side, open a connection to the receiver and read the input from a file.
$ nc stealth-falcon.archyslife.lan 8443 < archive.tar.xz
Keep in mind that this communication is not encrypted by default, but it can be encrypted using the options '--ssl', '--ssl-cert' and '--ssl-key' with appropriate path arguments. Feel free to comment and / or suggest a topic.
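The encrypted variant could look like this sketch. Note that the '--ssl' family of options comes from ncat (the nmap project's netcat implementation), not from every nc build; the certificate and key paths below are examples:

```shell
# Receiver: listen with TLS, using an existing certificate and key.
nc --ssl --ssl-cert /etc/pki/tls/certs/nc.crt \
   --ssl-key /etc/pki/tls/private/nc.key -l -p 8443 > archive.tar.xz

# Sender: connect with TLS and stream the file.
nc --ssl stealth-falcon.archyslife.lan 8443 < archive.tar.xz
```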

RHEL / CentOS - UUIDs differ between VMware and OS

Recently, some unexplained behavior with VMware and CentOS / RHEL caught my attention, because the virt-who entitlement kept failing due to different UUIDs being reported by vCenter and 'dmidecode' / 'subscription-manager facts' (VMware KB Article). To give you an example:
Hardware Version: 13 (ESXi 6.5)
vCenter UUID: 4230a396-6b9c-8a0b-7502-f63969069208
Host Facts UUID:
- dmidecode UUID: 96a33042-9c6b-0b8a-7502-f63969069208
- dmi.system.uuid: 96A33042-9C6B-0B8A-7502-F63969069208
- virt.uuid: 96A33042-9C6B-0B8A-7502-F63969069208
As you can see, the UUIDs from vCenter and dmidecode / subscription-manager facts do not match, due to a change in endianness. The hardware profiles that showed this behavior are:
Hardware Version 13 (ESXi 6.5 (vmx-13))
Hardware Version 14 (ESXi 6.7 (vmx-14))
Hardware Version 15 (ESXi 6.7u2 (vmx-15))
However, with 'Hardware Version 11 (vmx-11)' everything seems to work fine. Keep this in mind for a po
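The mismatch is a byte swap in the first three fields of the UUID (the remaining two fields are unaffected). A small bash sketch, using the UUIDs from the example above, makes the transformation visible:

```shell
# Reproduce the endianness swap: reverse the bytes (hex pairs) of the
# first three UUID fields and leave the rest untouched.
vc='4230a396-6b9c-8a0b-7502-f63969069208'
f1=${vc:0:8}; f2=${vc:9:4}; f3=${vc:14:4}; rest=${vc:19}
swap() { local s=$1 out='' i; for ((i=${#s}-2; i>=0; i-=2)); do out+=${s:i:2}; done; echo "$out"; }
echo "$(swap "$f1")-$(swap "$f2")-$(swap "$f3")-$rest"
# → 96a33042-9c6b-0b8a-7502-f63969069208
```

The output matches the dmidecode UUID exactly, confirming that both sides describe the same VM in different byte orders.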

CentOS - Create a mirror

If you happen to have a sufficient connection (requirements) and enough resources to spare, I'd encourage you to think about hosting a mirror server for one of your favorite projects. In my case, I will demonstrate the creation of a public mirror for CentOS in my local LAN environment. Software you will need:
tmux
rsync
A webserver of your choice (I will use nginx)
First up, let's create the basic directory structure for serving the files. My webroot will be /srv/mirror, and the synced content will reside in subdirectories, so the structure looks as follows:
/srv/
└── mirror
    ├── centos
    ├── epel
    └── whatever
Create the webroot; from here on out, rsync will do the rest.
[archy@repo01 ~]$ mkdir /srv/mirror
[archy@repo01 ~]$ mkdir -p /etc/nginx/{sites-available,sites-enabled}
I will be using rsync to sync the content every 4 hours, which leads to this line in your crontab:
0 */4 * * * /usr/bin/rsync -v
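A full sync invocation might look like the following sketch. The upstream host and rsync module here are placeholders; official mirror projects publish the exact rsync URL and recommended flags you should use instead:

```shell
# Mirror the upstream tree into the webroot; -H preserves hardlinks,
# --delete drops files removed upstream, --delay-updates keeps the
# tree consistent for clients while the sync is in flight.
/usr/bin/rsync -avzH --delete --delay-updates \
  rsync://mirror.example.com/centos/ /srv/mirror/centos/
```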