
Foreman - Upgrade fails on step foreman-rake db:migrate with 'PG::InsufficientPrivilege: ERROR: must be owner of extension evr'

While upgrading my Foreman + Katello server to the latest version, I encountered the following issue:

 [archy@katello01 ~]$ sudo foreman-installer --scenario katello  
 2025-04-20 11:25:08 [NOTICE] [root] Loading installer configuration. This will take some time.  
 2025-04-20 11:25:12 [NOTICE] [root] Running installer with log based terminal output at level NOTICE.  
 2025-04-20 11:25:12 [NOTICE] [root] Use -l to set the terminal output log level to ERROR, WARN, NOTICE, INFO, or DEBUG. See --full-help for definitions.  
 2025-04-20 11:25:14 [NOTICE] [checks] System checks passed  
 2025-04-20 11:25:21 [NOTICE] [pre] The Foreman database foreman does not exist.  
 2025-04-20 11:25:21 [NOTICE] [configure] Starting system configuration.  
 2025-04-20 11:25:31 [NOTICE] [configure] 250 configuration steps out of 1939 steps complete.  
 2025-04-20 11:25:34 [NOTICE] [configure] 500 configuration steps out of 1940 steps complete.  
 2025-04-20 11:25:42 [NOTICE] [configure] 1000 configuration steps out of 1950 steps complete.  
 2025-04-20 11:25:42 [NOTICE] [configure] 1250 configuration steps out of 1952 steps complete.  
 2025-04-20 11:25:43 [NOTICE] [configure] 1500 configuration steps out of 2111 steps complete.  
 2025-04-20 11:25:43 [NOTICE] [configure] 1750 configuration steps out of 2156 steps complete.  
 2025-04-20 11:26:09 [ERROR ] [configure] '/usr/sbin/foreman-rake db:migrate' returned 1 instead of one of [0]  
 2025-04-20 11:26:09 [ERROR ] [configure] /Stage[main]/Foreman::Database/Foreman::Rake[db:migrate]/Exec[foreman-rake-db:migrate]/returns: change from 'notrun' to ['0'] failed: '/usr/sbin/foreman-rake db:migrate' returned 1 instead of one of  
 [0]  
 2025-04-20 11:26:31 [ERROR ] [configure] /Stage[main]/Foreman::Database/Foreman::Rake[db:migrate]/Exec[foreman-rake-db:migrate]: Failed to call refresh: '/usr/sbin/foreman-rake db:migrate' returned 1 instead of one of [0]  
 2025-04-20 11:26:31 [ERROR ] [configure] /Stage[main]/Foreman::Database/Foreman::Rake[db:migrate]/Exec[foreman-rake-db:migrate]: '/usr/sbin/foreman-rake db:migrate' returned 1 instead of one of [0]  
 2025-04-20 11:27:08 [NOTICE] [configure] 2000 configuration steps out of 2156 steps complete.  
 2025-04-20 11:27:27 [NOTICE] [configure] System configuration has finished.  
 Error 1: Puppet Exec resource 'foreman-rake-db:migrate' failed. Logs:  
  /Stage[main]/Foreman::Database/Foreman::Rake[db:migrate]/Exec[foreman-rake-db:migrate]  
   Adding autorequire relationship with User[foreman]  
   Starting to evaluate the resource (1872 of 2156)  
   Failed to call refresh: '/usr/sbin/foreman-rake db:migrate' returned 1 instead of one of [0]  
   '/usr/sbin/foreman-rake db:migrate' returned 1 instead of one of [0]  
   Evaluated in 45.77 seconds  
  Exec[foreman-rake-db:migrate](provider=posix)  
   Executing check '/usr/sbin/foreman-rake db:abort_if_pending_migrations'  
   Executing '/usr/sbin/foreman-rake db:migrate'  
   Executing check '/usr/sbin/foreman-rake db:abort_if_pending_migrations'  
   Executing '/usr/sbin/foreman-rake db:migrate'  
  /Stage[main]/Foreman::Database/Foreman::Rake[db:migrate]/Exec[foreman-rake-db:migrate]/unless  
   Run `bin/rails db:migrate` to update your database then try again.  
   You have 10 pending migrations:  
    20240312133027 ExtendTemplateInvocationEvents  
    20240924161240 KatelloRecreateEvrConstructs  
    20241022121706 AddSyncDependenciesOption  
    20241101144625 RemoveSystemPurposeAddons  
    20241107002541 AddRegistryURLToKatelloFlatpakRemotes  
    20241112145802 AddManifestEntityToContentFacets  
    20241120213713 AddAllowOtherTypesToContentViewErratumFilterRules  
    20241126150849 RemoveRemoteExecutionWorkersPoolSize  
    20241206183052 AddContentTypeToContainerManifestsAndLists  
    20250309121956 RenameAnsibleTowerFqdnToApiURL  
   Run `bin/rails db:migrate` to update your database then try again.  
   You have 9 pending migrations:  
    20240924161240 KatelloRecreateEvrConstructs  
    20241022121706 AddSyncDependenciesOption  
    20241101144625 RemoveSystemPurposeAddons  
    20241107002541 AddRegistryURLToKatelloFlatpakRemotes  
    20241112145802 AddManifestEntityToContentFacets  
    20241120213713 AddAllowOtherTypesToContentViewErratumFilterRules  
    20241126150849 RemoveRemoteExecutionWorkersPoolSize  
    20241206183052 AddContentTypeToContainerManifestsAndLists  
    20250309121956 RenameAnsibleTowerFqdnToApiURL  
  /Stage[main]/Foreman::Database/Foreman::Rake[db:migrate]/Exec[foreman-rake-db:migrate]/returns  
   rake aborted!  
   StandardError: An error has occurred, this and all later migrations canceled:  
     PG::InsufficientPrivilege: ERROR: must be owner of extension evr  
Running 'foreman-rake db:migrate' manually shows the exact step it's failing on:
 [archy@katello01 ~]$ sudo foreman-rake db:migrate  
 load average: 0.26 0.84 0.74  
 == 20240924161240 KatelloRecreateEvrConstructs: migrating =====================  
 -- extension_enabled?("evr")  
   -> 0.0034s  
 -- execute("DROP EXTENSION evr CASCADE;\n")  
 rake aborted!  
 StandardError: An error has occurred, this and all later migrations canceled:  
 PG::InsufficientPrivilege: ERROR: must be owner of extension evr  
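The migration runs as the 'foreman' database user, while the 'evr' extension is typically owned by the 'postgres' superuser, which is why the DROP inside the migration is rejected. You can confirm the extension's owner with a quick query (assuming the default role names):
 [archy@katello01 ~]$ sudo runuser -l postgres -c "psql -d foreman -c \"SELECT extname, extowner::regrole AS owner FROM pg_extension WHERE extname = 'evr';\""  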
Now with that figured out, we can start fixing the issue. This is usually a one-off problem and can be resolved by dropping the 'evr' extension in PostgreSQL. Switch to the 'postgres' user and connect to the 'foreman' database:
 [archy@katello01 ~]$ sudo -Hiu postgres  
 [postgres@katello01 ~]$ psql  
 postgres=# \c foreman  
Now remove the extension:
 foreman=# DROP EXTENSION evr CASCADE;  
Output:
 NOTICE: drop cascades to 6 other objects  
 DETAIL: drop cascades to trigger evr_insert_trigger_katello_installed_packages on table katello_installed_packages  
 drop cascades to trigger evr_insert_trigger_katello_rpms on table katello_rpms  
 drop cascades to trigger evr_update_trigger_katello_installed_packages on table katello_installed_packages  
 drop cascades to trigger evr_update_trigger_katello_rpms on table katello_rpms  
 drop cascades to column evr of table katello_rpms  
 drop cascades to column evr of table katello_installed_packages  
 DROP EXTENSION  
Stop the currently running services and rerun the installer to ensure consistency:
 [archy@katello01 ~]$ sudo foreman-maintain service stop  
 [archy@katello01 ~]$ sudo foreman-installer --scenario katello  
The installation should now run through, and Foreman should be up and running as expected by the end of it. However, errata generation will not be functional yet.
To get errata generation working again, we need to recreate the 'evr' extension:
 [archy@katello01 ~]$ sudo -Hiu postgres   
 [postgres@katello01 ~]$ psql   
 postgres=# \c foreman   
 foreman=# CREATE EXTENSION IF NOT EXISTS "evr" CASCADE;  
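While still connected, a quick check with psql's extension listing confirms the extension exists again:
 foreman=# \dx evr  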
Set the extension's owner to the 'foreman' database user:
 [archy@katello01 ~]$ sudo runuser -l postgres -c "psql -d foreman -c \"UPDATE pg_extension SET extowner = (SELECT oid FROM pg_authid WHERE rolname='foreman') WHERE extname='evr';\""  
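A quick query against pg_extension should now report 'foreman' as the extension's owner:
 [archy@katello01 ~]$ sudo runuser -l postgres -c "psql -d foreman -c \"SELECT extname, extowner::regrole AS owner FROM pg_extension WHERE extname = 'evr';\""  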
The extension should now be available and the db migration can be run again:
 [archy@katello01 ~]$ sudo foreman-rake db:migrate --trace  
After the foreman-rake task has completed successfully, locate the file containing the db migration that was shipped with the upgrade and verify it exists:
 [archy@katello01 ~]$ sudo file /usr/share/gems/gems/katello-4.16.*/db/migrate/20240924161240_katello_recreate_evr_constructs.rb  
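If the glob does not match because your Katello version differs, a find across the gem directory (path assumed from the standard packaging) locates the migration as well:
 [archy@katello01 ~]$ sudo find /usr/share/gems/gems -name '20240924161240_katello_recreate_evr_constructs.rb'  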
Run the migration manually once more to make sure it executes correctly:
 [archy@katello01 ~]$ sudo foreman-rake console  
 irb(main):001:0> require '/usr/share/gems/gems/katello-4.16.1/db/migrate/20240924161240_katello_recreate_evr_constructs.rb'  
 irb(main):002:0> krec = KatelloRecreateEvrConstructs.new  
 irb(main):003:0> krec.extension_enabled?("evr")  
 irb(main):004:0> krec.up  
 irb(main):005:0> exit  
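As an optional sanity check, the 'evr' columns dropped by the cascade earlier should exist again on the Katello tables (table names taken from the DROP output above):
 [archy@katello01 ~]$ sudo runuser -l postgres -c "psql -d foreman -c \"SELECT table_name, column_name FROM information_schema.columns WHERE column_name = 'evr' AND table_name IN ('katello_rpms', 'katello_installed_packages');\""  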
All migration tasks have now run, and we can run the installer one final time:
 [archy@katello01 ~]$ sudo foreman-maintain service stop  
 [archy@katello01 ~]$ sudo foreman-installer --scenario katello  
Foreman will be available again after the successful foreman-installer run, including working errata generation.
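Optionally, a final health check confirms all services are back up and healthy (standard foreman-maintain subcommand):
 [archy@katello01 ~]$ sudo foreman-maintain health check  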

Feel free to comment and / or suggest a topic.
