With a recent announcement from the Kubernetes project, Docker will no longer be supported as a container runtime as of Kubernetes version 1.22 (end of 2021). In practice, this means you'll have to move to another container runtime such as CRI-O or containerd. Since containerd (the containerd.io rpm) is already shipped in the Docker repos, which I already have synced in Katello, I'll use that.
First up: this is a very simple 3-node setup; I will not cover high availability with etcd or shared storage.
Here are the nodes I will use:
- kube-master01.archyslife.lan
  - role: master
  - ip: 172.31.10.123
  - vcpus: 2
  - ram: 4g
- kube-worker01.archyslife.lan
  - role: worker
  - ip: 172.31.10.125
  - vcpus: 2
  - ram: 4g
- kube-worker02.archyslife.lan
  - role: worker
  - ip: 172.31.10.126
  - vcpus: 2
  - ram: 4g
Each of them will be running CentOS 7; I have not yet migrated to CentOS 8 Stream since I want to wait for further announcements from Red Hat and the release of Rocky Linux.
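A quick note on package sources: kubelet, kubeadm and kubectl come from the Kubernetes repo, while containerd.io comes from the Docker CE repo; I have both synced in Katello. If you don't, a rough sketch for adding the upstream repos directly on each node could look like this (double-check the URLs and GPG keys against the upstream docs):
[archy@kube-master01 ~]$ sudo curl -o /etc/yum.repos.d/docker-ce.repo https://download.docker.com/linux/centos/docker-ce.repo
[archy@kube-master01 ~]$ sudo vim /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://packages.cloud.google.com/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=1
gpgkey=https://packages.cloud.google.com/yum/doc/yum-key.gpg https://packages.cloud.google.com/yum/doc/rpm-package-key.gpg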
Start by installing the required packages, repeat on all nodes:
[archy@kube-master01 ~]$ sudo yum -y install yum-utils device-mapper-persistent-data lvm2 containerd.io kubelet kubeadm kubectl
Enable the kubelet service, repeat on all nodes:
[archy@kube-master01 ~]$ sudo systemctl enable kubelet.service
Create the containerd config directory and dump the config, repeat on each node:
[archy@kube-master01 ~]$ sudo mkdir --mode 755 /etc/containerd
[archy@kube-master01 ~]$ sudo containerd config default | sudo tee /etc/containerd/config.toml
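Depending on which cgroup driver your kubelet ends up using, you may also want containerd's runc runtime to use the systemd cgroup driver. That's not strictly required for this setup, but if you go that route, the relevant snippet in /etc/containerd/config.toml looks roughly like this (the exact section can differ between containerd versions). Either way, make sure containerd is enabled and running:
[archy@kube-master01 ~]$ sudo vim /etc/containerd/config.toml
[plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
  SystemdCgroup = true
[archy@kube-master01 ~]$ sudo systemctl enable --now containerd.service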
Load the necessary kernel module and configure some system-settings for kubernetes, repeat on all nodes:
[archy@kube-master01 ~]$ sudo modprobe br_netfilter
[archy@kube-master01 ~]$ sudo vim /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-iptables = 1
net.bridge.bridge-nf-call-ip6tables = 1
net.ipv4.ip_forward = 1
Apply the configured settings, repeat on all nodes:
[archy@kube-master01 ~]$ sudo sysctl --system
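Keep in mind that modprobe only loads br_netfilter for the current boot. To have it loaded automatically after a reboot, you can drop it into a modules-load.d file (any file name under /etc/modules-load.d/ works), repeat on all nodes:
[archy@kube-master01 ~]$ sudo vim /etc/modules-load.d/k8s.conf
br_netfilter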
Turn off all swap volumes, partitions and files and remove them from /etc/fstab, repeat on all nodes:
[archy@kube-master01 ~]$ sudo swapoff -a
[archy@kube-master01 ~]$ sudo vim /etc/fstab
/dev/mapper/vg_base-lv_swap none swap defaults,x-systemd.device-timeout=0 0 0
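If you'd rather not edit the file by hand, a one-liner along these lines should comment out the swap entry shown above (a sketch, verify against your own fstab before running it):
[archy@kube-master01 ~]$ sudo sed -i '/ swap /s/^/#/' /etc/fstab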
Now initiate the cluster on the master. I will use flannel as my pod network, so I will have to pass the --pod-network-cidr=10.244.0.0/16 option to kubeadm:
[archy@kube-master01 ~]$ sudo kubeadm init --pod-network-cidr=10.244.0.0/16
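Once the init has finished, kubeadm also prints instructions for setting up kubectl access for your regular user, which boil down to copying the admin kubeconfig into place:
[archy@kube-master01 ~]$ mkdir -p $HOME/.kube
[archy@kube-master01 ~]$ sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
[archy@kube-master01 ~]$ sudo chown $(id -u):$(id -g) $HOME/.kube/config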
As mentioned, I will use flannel for networking, so here's how to apply it:
[archy@kube-master01 ~]$ kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
Now that everything has been initially set up, you can go ahead and join the worker nodes. The required tokens, hashes and commands should have been output after the initialization:
[archy@kube-worker01 ~]$ sudo kubeadm join 172.31.10.123:6443 --token some_random_token --discovery-token-ca-cert-hash sha256:some_random_sha256_hash
[archy@kube-worker02 ~]$ sudo kubeadm join 172.31.10.123:6443 --token some_random_token --discovery-token-ca-cert-hash sha256:some_random_sha256_hash
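If you no longer have the join command from the init output, it can be regenerated on the master:
[archy@kube-master01 ~]$ sudo kubeadm token create --print-join-command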
This is an optional step but I prefer to label my nodes accordingly:
[archy@kube-master01 ~]$ kubectl label node kube-worker01.archyslife.lan node-role.kubernetes.io/worker=worker
[archy@kube-master01 ~]$ kubectl label node kube-worker02.archyslife.lan node-role.kubernetes.io/worker=worker
Let's check if the labels have been applied:
[archy@kube-master01 ~]$ kubectl get nodes
NAME                           STATUS   ROLES                  AGE   VERSION
kube-master01.archyslife.lan   Ready    control-plane,master   42h   v1.20.1
kube-worker01.archyslife.lan   Ready    worker                 42h   v1.20.1
kube-worker02.archyslife.lan   Ready    worker                 42h   v1.20.1
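To double-check that the control-plane components and the flannel pods came up cleanly, it doesn't hurt to look at the pods in all namespaces:
[archy@kube-master01 ~]$ kubectl get pods --all-namespaces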
The basic setup is now done and your cluster should be usable.
Feel free to comment and / or suggest a topic.