

Showing posts from March, 2024

OCP4 - operator cache requires rebuild

After upgrading my OKD cluster to 4.15.0-0.okd-2024-03-10-010116, I noticed that the certified-operators pod in the 'openshift-marketplace' namespace was constantly crashing. Digging a little deeper, I found the root cause in the pod logs:

[archy@helper01 ~]$ oc -n openshift-marketplace logs pod/certified-operators-ckpbk -c registry-server
time="2024-03-22T07:54:45Z" level=info msg="starting pprof endpoint" address="localhost:6060"
time="2024-03-22T07:54:45Z" level=fatal msg="cache requires rebuild: cache reports digest as \"e1eec9cd4a58db98\", but computed digest is \"8fcd2934467d9214\""

I fixed this by disabling and re-enabling the operator source (certified-operators in my case):

[archy@helper01 ~]$ oc edit operatorhubs.config.openshift.io/cluster
...
spec:
  disableAllDefaultSources: true
  sources:
  - disabled: false
    name: community-operators
  - disabled: true
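The same disable/re-enable toggle can also be scripted instead of going through oc edit. A hedged sketch using oc patch follows; the source names in the patch are taken from the snippet above, and note that a JSON merge patch replaces the whole spec.sources list, so every source you want to keep must appear in the patch:

```shell
# Sketch: toggle the certified-operators catalog source off and back on.
# CAUTION: a merge patch replaces the entire spec.sources array - list all
# sources you want to keep, not just the one you are toggling.
oc patch operatorhubs.config.openshift.io/cluster --type merge \
  -p '{"spec":{"sources":[{"name":"community-operators","disabled":false},{"name":"certified-operators","disabled":true}]}}'

# Wait for the old catalog pod to terminate, then re-enable the source:
oc patch operatorhubs.config.openshift.io/cluster --type merge \
  -p '{"spec":{"sources":[{"name":"community-operators","disabled":false},{"name":"certified-operators","disabled":false}]}}'

# A fresh certified-operators pod should come up with a rebuilt cache:
oc -n openshift-marketplace get pods -l olm.catalogSource=certified-operators
```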

Podman - Change container image storage location

Depending on the number of images and the size of each image, the backing storage requirements can grow quite large. I'll reconfigure podman to use a new mountpoint as backing storage for container images. First, some disk configuration is required. I'll be using logical volumes for easy volume management:

[root@podman02 ~]# pvcreate /dev/vdb
[root@podman02 ~]# vgcreate vg_data /dev/vdb
[root@podman02 ~]# lvcreate -n lv_var_podman -L 20G vg_data
[root@podman02 ~]# mkfs -t xfs /dev/vg_data/lv_var_podman
[root@podman02 ~]# mkdir -p /var/podman
[root@podman02 ~]# cat << EOF >> /etc/fstab
/dev/vg_data/lv_var_podman /var/podman xfs defaults 0 0
EOF
[root@podman02 ~]# systemctl daemon-reload
[root@podman02 ~]# mount -a

Now, let's reconfigure the storage setting for a user. I'll be using my user as an example.

[root@podman02 ~]# mkdir -p -m 750 /var/podman/archy
[root@podman02 ~]# chown -R archy:archy /var/podman/archy
[root@podman02 ~]# sudo -Hiu archy
[archy@podman02
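The transcript breaks off here. As a hedged sketch of the likely remaining step: rootless podman reads its per-user storage configuration from ~/.config/containers/storage.conf, so pointing graphroot at the new mountpoint could look roughly like this (the storage subdirectory name and the overlay driver are assumptions, not taken from the post):

```shell
# Hypothetical continuation, run as the target user (archy).
# Paths match the /var/podman/archy directory created above.
mkdir -p ~/.config/containers
cat << EOF > ~/.config/containers/storage.conf
[storage]
driver = "overlay"
graphroot = "/var/podman/archy/storage"
EOF

# Existing images are NOT migrated to the new location; re-pull them as needed.
# Verify that podman picked up the new storage root:
podman info --format '{{.Store.GraphRoot}}'
```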