We'll be working on the servers that are surrounded by the continuous lines in this drawing:
Most of the setup is already done; from here on out, the heavy lifting will be done by the installer. But first, there are still a few small things left to do, starting with getting the installation artifacts.
Specifically, these artifacts still need to be downloaded:
- Fedora CoreOS image (live ISO plus the metal raw.xz and its signature)
- openshift-install
- openshift-client
- helm
- butane
Since we've set up shared storage for the webservers that will host these files, they only need to be downloaded once and can then be served from the internal share. I'll download all the artifacts on one of the helper nodes:
[archy@helper01 ~]$ sudo -Hiu root
[root@helper01 ~]# curl -4kLo '/var/www/html/okd4/fcos-39.iso' -X 'GET' -H 'Accept: application/octet-stream' -H 'User-Agent: curl/7.76.1' 'https://builds.coreos.fedoraproject.org/prod/streams/stable/builds/39.20240210.3.0/x86_64/fedora-coreos-39.20240210.3.0-live.x86_64.iso'
[root@helper01 ~]# curl -4kLo '/var/www/html/okd4/fcos-39.raw.xz' -X 'GET' -H 'Accept: application/octet-stream' -H 'User-Agent: curl/7.76.1' 'https://builds.coreos.fedoraproject.org/prod/streams/stable/builds/39.20240210.3.0/x86_64/fedora-coreos-39.20240210.3.0-metal.x86_64.raw.xz'
[root@helper01 ~]# curl -4kLo '/var/www/html/okd4/fcos-39.raw.xz.sig' -X 'GET' -H 'Accept: application/octet-stream' -H 'User-Agent: curl/7.76.1' 'https://builds.coreos.fedoraproject.org/prod/streams/stable/builds/39.20240210.3.0/x86_64/fedora-coreos-39.20240210.3.0-metal.x86_64.raw.xz.sig'
[root@helper01 ~]# curl -4kLo '/var/www/html/okd4/openshift-install-linux.tar.gz' -X 'GET' -H 'Accept: application/octet-stream' -H 'User-Agent: curl/7.76.1' 'https://github.com/okd-project/okd/releases/download/4.15.0-0.okd-2024-03-10-010116/openshift-install-linux-4.15.0-0.okd-2024-03-10-010116.tar.gz'
[root@helper01 ~]# curl -4kLo '/var/www/html/okd4/openshift-client-linux.tar.gz' -X 'GET' -H 'Accept: application/octet-stream' -H 'User-Agent: curl/7.76.1' 'https://github.com/okd-project/okd/releases/download/4.15.0-0.okd-2024-03-10-010116/openshift-client-linux-4.15.0-0.okd-2024-03-10-010116.tar.gz'
[root@helper01 ~]# curl -4kLo '/var/www/html/okd4/helm.tar.gz' -X 'GET' -H 'Accept: application/octet-stream' -H 'User-Agent: curl/7.76.1' 'https://get.helm.sh/helm-v3.14.4-linux-amd64.tar.gz'
[root@helper01 ~]# curl -4kLo '/var/www/html/okd4/butane' -X 'GET' -H 'Accept: application/octet-stream' -H 'User-Agent: curl/7.76.1' 'https://github.com/coreos/butane/releases/download/v0.20.0/butane-x86_64-unknown-linux-gnu'
[root@helper01 ~]# chown -R apache:apache /var/www/html/okd4
[root@helper01 ~]# chmod -R 755 /var/www/html/okd4
[root@helper01 ~]# restorecon -rv /var/www/html
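Since the signature for the metal image was downloaded as well, the image can optionally be verified. A minimal sketch, assuming GnuPG is installed on the helper and using the Fedora key bundle from fedoraproject.org:
[root@helper01 ~]# curl -4kLo '/var/tmp/fedora.gpg' 'https://fedoraproject.org/fedora.gpg'
[root@helper01 ~]# gpg --import /var/tmp/fedora.gpg
[root@helper01 ~]# gpg --verify /var/www/html/okd4/fcos-39.raw.xz.sig /var/www/html/okd4/fcos-39.raw.xz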
All artifacts required for the installation are downloaded. Next step: installing the required binaries and preparing the environment.
The only two strictly required artifacts are openshift-install and openshift-client. I'll also install helm and butane now since they'll be useful later on when customizing the cluster or installing applications. Installing all of these is just a matter of unarchiving and moving the binaries to /usr/bin since they're all statically linked Go binaries.
NOTE: this step will have to be repeated on all helper nodes
[root@helper01 ~]# mkdir -p -m 755 /var/tmp/install
[root@helper01 ~]# tar -xvpzf /var/www/html/okd4/openshift-install-linux.tar.gz -C /var/tmp/install/
[root@helper01 ~]# tar -xvpzf /var/www/html/okd4/openshift-client-linux.tar.gz -C /var/tmp/install/
[root@helper01 ~]# tar -xvpzf /var/www/html/okd4/helm.tar.gz -C /var/tmp/install/
[root@helper01 ~]# install -o 'root' -g 'root' -m '755' /var/tmp/install/{openshift-install,oc} /usr/bin/
[root@helper01 ~]# install -o 'root' -g 'root' -m '755' /var/tmp/install/linux-amd64/helm /usr/bin/
[root@helper01 ~]# install -o 'root' -g 'root' -m '755' /var/www/html/okd4/butane /usr/bin/
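A quick sanity check confirms all four binaries are on the PATH and executable:
[root@helper01 ~]# openshift-install version
[root@helper01 ~]# oc version --client
[root@helper01 ~]# helm version
[root@helper01 ~]# butane --version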
With all the binaries installed, I'll create a dedicated install directory. This will make it easier to sync files and manifests between nodes.
[root@helper01 ~]# mkdir -p -m 750 openshift-installer
Additionally, we'll have to create an SSH key to be able to log in as the 'core' user in case kubelet fails. This will also prove useful when checking the bootstrap process. I'll also start an ssh-agent to attach the key to, writing its environment to a file so the agent can be re-attached in future sessions.
[root@helper01 ~]# ssh-keygen -a 128 -t ed25519 -C "okd $(date +%F)" -f "${HOME}/.ssh/$(date +%F)-okd-ed25519" -N '' -Z 'chacha20-poly1305@openssh.com'
[root@helper01 ~]# ssh-agent -s | tee ~/.ssh/environment-$(hostname -s) && source ~/.ssh/environment-$(hostname -s)
[root@helper01 ~]# ssh-add ~/.ssh/$(date +%F)-okd-ed25519
[root@helper02 ~]# ssh-keygen -a 128 -t ed25519 -C "okd $(date +%F)" -f "${HOME}/.ssh/$(date +%F)-okd-ed25519" -N '' -Z 'chacha20-poly1305@openssh.com'
[root@helper02 ~]# ssh-agent -s | tee ~/.ssh/environment-$(hostname -s) && source ~/.ssh/environment-$(hostname -s)
[root@helper02 ~]# ssh-add ~/.ssh/$(date +%F)-okd-ed25519
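In any later session, the agent can be re-attached by sourcing the environment file written above and listing the loaded keys:
[root@helper01 ~]# source ~/.ssh/environment-$(hostname -s)
[root@helper01 ~]# ssh-add -l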
I'll write the install-config.yaml on my first helper node and sync it over to the second node later on once all the manifests are generated.
[root@helper01 ~]# ssh-add -L >> ~/openshift-installer/install-config.yaml
With that command, the first building block of the install-config.yaml is in place: the public keys are now in the file and will be formatted into the sshKey block shortly.
We'll also need a pull secret, which can be obtained from the Red Hat Hybrid Cloud Console, or you can paste a bogus one like '{"auths":{"fake":{"auth": "foo"}}}' since OKD does not require a valid one:
[root@helper01 ~]# cat pull-secret.txt >> ~/openshift-installer/install-config.yaml
Two blocks of the install-config are already present; now we have to create the rest of it:
[root@helper01 ~]# vim ~/openshift-installer/install-config.yaml
Here's the content of my install-config as an example:
apiVersion: v1
baseDomain: archyslife.lan
compute:
- hyperthreading: Enabled
  name: worker
  replicas: 0
controlPlane:
  hyperthreading: Enabled
  name: master
  replicas: 3
metadata:
  name: okd
networking:
  clusterNetwork:
  - cidr: 10.128.0.0/14
    hostPrefix: 23
  networkType: OVNKubernetes
  serviceNetwork:
  - 172.30.0.0/16
platform:
  none: {}
fips: false
pullSecret: '{"auths": ...}' # content from the real or the fake pull secret
sshKey: |
  ssh-ed25519 ... # public key from helper node 1
  ssh-ed25519 ... # public key from helper node 2
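Before moving on, it's worth making sure the file is at least syntactically valid YAML. A minimal sketch, assuming python3 with PyYAML is available on the helper:
[root@helper01 ~]# python3 -c 'import yaml, sys; yaml.safe_load(open(sys.argv[1]))' ~/openshift-installer/install-config.yaml && echo 'syntax ok'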
Once the install-config.yaml is done, back it up just in case: the installer consumes and deletes the file when generating the manifests.
[root@helper01 ~]# if [ ! -d /var/backup/openshift-installer ]; then mkdir -p -m 700 /var/backup/openshift-installer; fi
[root@helper01 ~]# cp -v ~/openshift-installer/install-config.yaml /var/backup/openshift-installer/install-config.yaml
Now, we can continue to create the manifests. I'll also create a backup after each step so I have an easy way to get the files back and review them if needed:
[root@helper01 ~]# openshift-install create manifests --dir ~/openshift-installer
[root@helper01 ~]# tar -cvpf /var/backup/openshift-installer/openshift-installer-0.tar ~/openshift-installer
Because we specified 0 worker replicas above, the installer marks the masters as schedulable. Since this cluster will have dedicated worker nodes, this can be reverted:
[root@helper01 ~]# sed -i 's/mastersSchedulable: true/mastersSchedulable: false/' ~/openshift-installer/manifests/cluster-scheduler-02-config.yml
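A quick grep verifies the change took effect:
[root@helper01 ~]# grep 'mastersSchedulable' ~/openshift-installer/manifests/cluster-scheduler-02-config.yml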
[root@helper01 ~]# tar -cvpf /var/backup/openshift-installer/openshift-installer-1.tar ~/openshift-installer
With the masters tainted again, we can proceed with creating the Ignition configs used by Fedora CoreOS:
[root@helper01 ~]# openshift-install create ignition-configs --dir ~/openshift-installer
[root@helper01 ~]# tar -cvpf /var/backup/openshift-installer/openshift-installer-2.tar ~/openshift-installer
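At this point the install directory contains the three ignition configs plus an auth/ directory holding the kubeconfig and the kubeadmin password:
[root@helper01 ~]# ls -l ~/openshift-installer/*.ign ~/openshift-installer/auth/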
The installation files are now prepared and can be synced to the web directory so they're available to all nodes:
[root@helper01 ~]# rsync -vrlptgoDxhP -- ~/openshift-installer/ /var/www/html/okd4/
[root@helper01 ~]# chown -R apache:apache /var/www/html/okd4
[root@helper01 ~]# chmod -R 755 /var/www/html/okd4
[root@helper01 ~]# restorecon -rv /var/www/html/okd4
Before continuing, I'd recommend testing the availability of the files. A simple curl will do:
[root@helper01 ~]# KEEPALIVEDVIP='172.31.10.240'
[root@helper01 ~]# curl -4kLX 'GET' -H 'Referer: openshift-installer' -H 'User-Agent: curl/8.2.1' -H 'Accept: application/json' "http://${KEEPALIVEDVIP}:8080/okd4/metadata.json" | jq -C
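The same check can be run against the ignition configs, assuming they're served under the okd4 path as set up above:
[root@helper01 ~]# for IGN in bootstrap master worker; do curl -4ksfo /dev/null "http://${KEEPALIVEDVIP}:8080/okd4/${IGN}.ign" && echo "${IGN}.ign ok"; done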
Here's one of the downsides of using UPI: the installation commands have to be entered manually on each node. So boot the nodes using the CoreOS ISO downloaded previously and enter these commands according to each node's type:
Bootstrap Node:
[core@coreos-installer ~]$ export KEEPALIVEDVIP='172.31.10.240'
[core@coreos-installer ~]$ sudo -E coreos-installer install /dev/vda -a x86_64 -u http://${KEEPALIVEDVIP}:8080/okd4/fcos-39.raw.xz -I http://${KEEPALIVEDVIP}:8080/okd4/bootstrap.ign --insecure --insecure-ignition
Master Nodes:
[core@coreos-installer ~]$ export KEEPALIVEDVIP='172.31.10.240'
[core@coreos-installer ~]$ sudo -E coreos-installer install /dev/vda -a x86_64 -u http://${KEEPALIVEDVIP}:8080/okd4/fcos-39.raw.xz -I http://${KEEPALIVEDVIP}:8080/okd4/master.ign --insecure --insecure-ignition
Worker Nodes:
[core@coreos-installer ~]$ export KEEPALIVEDVIP='172.31.10.240'
[core@coreos-installer ~]$ sudo -E coreos-installer install /dev/vda -a x86_64 -u http://${KEEPALIVEDVIP}:8080/okd4/fcos-39.raw.xz -I http://${KEEPALIVEDVIP}:8080/okd4/worker.ign --insecure --insecure-ignition
Once the install commands have been entered and the nodes have rebooted, monitor the bootstrap process:
[root@helper01 ~]# openshift-install wait-for bootstrap-complete --dir ~/openshift-installer --log-level=info
The initial bootstrap might take up to 30 minutes. You can follow the bootstrap process by ssh-ing into the bootstrap node and running 'sudo journalctl -b -f -u bootkube.service -u kubelet.service'.
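For example, with the key loaded in the agent, and assuming the bootstrap node resolves as bootstrap.okd.archyslife.lan (adjust to your DNS):
[root@helper01 ~]# ssh core@bootstrap.okd.archyslife.lan 'sudo journalctl -b -f -u bootkube.service -u kubelet.service'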
Since the preparations are all done, now is a great time to sync the files between the nodes.
Here's a nice trick using netcat listeners:
[root@helper02 ~]# nc -lvnp 1337 | tar -xvpf - -C /
[root@helper01 ~]# tar -cvpf - /root/openshift-installer /var/backup/openshift-installer | nc -v helper02.okd.archyslife.lan 1337
Creating an archive is required in this case since the netcat listener will only accept one connection.
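Alternatively, assuming key-based root SSH between the helpers is set up, a plain rsync over ssh achieves the same without a listener; -R preserves the full source paths on the destination:
[root@helper01 ~]# rsync -vrlptgoDxhPR -e ssh -- /root/openshift-installer /var/backup/openshift-installer helper02.okd.archyslife.lan:/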
Feel free to comment and / or suggest a topic.