
OpenShift - Sync LDAP Groups

OpenShift can use a Directory Service as an authentication source for user lookups. However, it lacks native functionality for automatic group synchronization into its identity management. As a workaround, we will deploy a CronJob that synchronizes groups from the source LDAP directory (FreeIPA in this instance) into OpenShift.

Start by creating a dedicated namespace to encapsulate all the resources required for the automated synchronization of LDAP groups with OpenShift.

 $ cat << EOF > 0-namespace.yml  
 ---  
 apiVersion: v1  
 kind: Namespace  
 metadata:  
  name: ldap-group-sync  
 ...  
 EOF  
 $ oc apply -f 0-namespace.yml
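
As an optional sanity check, confirm the namespace exists before creating resources in it:

```shell
# The namespace must be present for all following objects
oc get namespace ldap-group-sync
```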

Given the use of LDAPS with certificates signed by a custom Certificate Authority (CA), it is necessary to create:

  • A ConfigMap to store the CA certificate.
  • A Secret to securely store the password for the service account used for LDAP lookups.

 $ oc -n ldap-group-sync create configmap ldap-cacert --from-file ca.crt=/etc/ipa/ca.crt  
 $ oc -n ldap-group-sync create secret generic ldap-bindpw --from-literal bindPassword='super-secure-password-for-ldap-bind-user'  
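
A quick check that both objects landed in the namespace (the key names `ca.crt` and `bindPassword` must match what the later manifests reference):

```shell
# 'describe' shows key names and sizes without printing secret values
oc -n ldap-group-sync describe configmap ldap-cacert
oc -n ldap-group-sync describe secret ldap-bindpw
```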

The jobs executed by the CronJob will require elevated privileges to manage group configurations and memberships. To address this requirement, we will create:

  • A ServiceAccount
  • A ClusterRole to define the necessary permissions
  • A ClusterRoleBinding to associate the ClusterRole with the ServiceAccount.

Here's the manifest for the ServiceAccount:

 $ cat << EOF > 1-serviceaccount.yml  
 ---  
 apiVersion: v1  
 kind: ServiceAccount  
 metadata:  
   name: ldap-group-syncer  
   namespace: ldap-group-sync  
 ...  
 EOF  
 $ oc apply -f 1-serviceaccount.yml

The ClusterRole grants the privileges needed to manage Group objects:

 $ cat << EOF > 2-clusterrole.yml  
 ---  
 apiVersion: rbac.authorization.k8s.io/v1  
 kind: ClusterRole  
 metadata:  
  name: ldap-group-syncer  
 rules:  
   - apiGroups:  
       - user.openshift.io  
     resources:  
       - groups  
     verbs:  
       - get  
       - list  
       - create  
       - update  
 ...  
 EOF  
 $ oc apply -f 2-clusterrole.yml  

Definition for the ClusterRoleBinding tying it all together:

 $ cat << EOF > 3-clusterrolebinding.yml  
 ---  
 apiVersion: rbac.authorization.k8s.io/v1  
 kind: ClusterRoleBinding  
 metadata:  
   name: ldap-group-syncer  
 subjects:  
   - kind: ServiceAccount  
     name: ldap-group-syncer  
     namespace: ldap-group-sync  
 roleRef:  
   apiGroup: rbac.authorization.k8s.io  
   kind: ClusterRole  
   name: ldap-group-syncer  
 ...  
 EOF  
 $ oc apply -f 3-clusterrolebinding.yml  
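
With the RBAC pieces applied, the ServiceAccount's effective permissions can be verified via impersonation; this should print `yes` for each verb granted by the ClusterRole:

```shell
# Impersonate the ServiceAccount and check one of the granted verbs
oc auth can-i create groups.user.openshift.io \
  --as=system:serviceaccount:ldap-group-sync:ldap-group-syncer
```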

The next step is a ConfigMap that holds the LDAP sync configuration: attribute mappings, base search parameters, filters, and the references to the CA certificate and bind password:

 $ cat << EOF > 4-configmap.yml  
 ---  
 apiVersion: v1  
 kind: ConfigMap  
 metadata:  
   namespace: ldap-group-sync  
   name: ldap-group-syncer  
 data:  
   sync.yaml: |  
     ---  
     apiVersion: v1  
     kind: LDAPSyncConfig  
     url: ldaps://ipa03.archyslife.lan:636  
     insecure: false  
     ca: /etc/ldap-ca/ca.crt  
     bindDN: uid=openshift-bind,cn=users,cn=accounts,dc=archyslife,dc=lan  
     bindPassword:  
       file: /etc/secrets/bindPassword  
     rfc2307:  
       groupsQuery:  
         baseDN: 'cn=groups,cn=accounts,dc=archyslife,dc=lan'  
         scope: sub  
         derefAliases: never  
         pageSize: 0  
         filter: (cn=ocpadmins)  
       groupUIDAttribute: dn  
       groupNameAttributes: [ cn ]  
       groupMembershipAttributes: [ member ]  
       usersQuery:  
         baseDN: 'cn=users,cn=accounts,dc=archyslife,dc=lan'  
         scope: sub  
         derefAliases: never  
         pageSize: 0  
       userUIDAttribute: dn  
       userNameAttributes: [ uid ]  
       tolerateMemberNotFoundErrors: false  
       tolerateMemberOutOfScopeErrors: false  
     ...  
 ...  
 EOF  
 $ oc apply -f 4-configmap.yml  
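
Before wiring the configuration into a CronJob, the sync can be exercised as a dry run: without `--confirm`, `oc adm groups sync` only prints the groups it would create or update. Note that the `ca` and `bindPassword` paths in sync.yaml refer to in-pod mount points, so running this from a workstation requires a local copy of the file with those paths adjusted:

```shell
# Dry run: prints the resulting Group objects, makes no changes
oc adm groups sync --sync-config=sync.yaml
```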

The last piece of the puzzle is the CronJob configuration:

 $ cat << EOF > 5-cronjob.yml  
 ---  
 apiVersion: batch/v1  
 kind: CronJob  
 metadata:  
   name: ldap-group-syncer  
   namespace: ldap-group-sync  
 spec:  
   timeZone: "Europe/Prague"  
   schedule: "*/30 * * * *"  
   concurrencyPolicy: Forbid  
   jobTemplate:  
     spec:  
       backoffLimit: 0  
       ttlSecondsAfterFinished: 1800  
       template:  
         spec:  
           restartPolicy: Never  
           terminationGracePeriodSeconds: 30  
           activeDeadlineSeconds: 500  
           dnsPolicy: ClusterFirst  
           serviceAccountName: ldap-group-syncer  
           containers:  
             - name: ldap-group-sync  
               image: registry.redhat.io/openshift4/ose-cli@sha256:a0fc3e52059468c0176d917e026310d91230f276913086ba678635179373a11a  
               command:  
                 - "/bin/bash"  
                 - "-c"  
                 - "oc adm groups sync --sync-config=/etc/config/sync.yaml --confirm"  
               volumeMounts:  
                 - mountPath: /etc/config  
                   name: ldap-sync-volume  
                 - mountPath: /etc/secrets  
                   name: ldap-bind-password  
                 - mountPath: /etc/ldap-ca  
                   name: ldap-ca-cert  
           volumes:  
             - name: ldap-sync-volume  
               configMap:  
                 name: ldap-group-syncer  
             - name: ldap-bind-password  
               secret:  
                 secretName: ldap-bindpw  
             - name: ldap-ca-cert  
               configMap:  
                 name: ldap-cacert  
 ...  
 EOF  
 $ oc apply -f 5-cronjob.yml  

This CronJob periodically launches Jobs whose pods synchronize the groups defined in the ConfigMap from the source LDAP directory into OpenShift.
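
Rather than waiting for the next scheduled run, a one-off Job can be spawned from the CronJob's template to test the setup (the job name 'manual-sync-1' is arbitrary):

```shell
# Create a single Job from the CronJob's jobTemplate and follow its logs
oc -n ldap-group-sync create job --from=cronjob/ldap-group-syncer manual-sync-1
oc -n ldap-group-sync logs -f job/manual-sync-1
```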

While technically optional, this next step is highly recommended: binding ClusterRoles to the groups that are automatically synchronized by the CronJob.

 $ cat << EOF > 6-clusterrolebinding.yml  
 ---  
 apiVersion: rbac.authorization.k8s.io/v1  
 kind: ClusterRoleBinding  
 metadata:  
   name: ocpadmins-cluster-admin  
 subjects:  
   - apiGroup: rbac.authorization.k8s.io  
     kind: Group  
     name: ocpadmins  
 roleRef:  
   apiGroup: rbac.authorization.k8s.io  
   kind: ClusterRole  
   name: cluster-admin  
 ...  
 EOF  
 $ oc apply -f 6-clusterrolebinding.yml  

This ClusterRoleBinding automatically grants all members of the 'ocpadmins' group the 'cluster-admin' ClusterRole.
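
To verify the result, list the synced group and check effective permissions by impersonating a group member (the username 'test-user' is a placeholder; the group impersonation is what matters here):

```shell
# The group should list the members synced from LDAP
oc get group ocpadmins
# Should print 'yes' since the group is bound to cluster-admin
oc auth can-i '*' '*' --as=test-user --as-group=ocpadmins
```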

Upon completion of this final step, the scheduled synchronization jobs will dynamically adjust group configurations within the OpenShift environment to accurately reflect the current state of group memberships as defined in the underlying LDAP Directory Service.

Feel free to comment and / or suggest a topic.

