Kubernetes can automatically scale workloads to match the current load, but there are a few prerequisites. On a user-provisioned Kubernetes cluster, a metrics-server is most likely not deployed yet:
[archy@kube01 ~]$ kubectl -n kube-system get deployments -o wide
NAME READY UP-TO-DATE AVAILABLE AGE CONTAINERS IMAGES SELECTOR
coredns 2/2 2 2 154d coredns k8s.gcr.io/coredns/coredns:v1.8.0 k8s-app=kube-dns
To deploy the metrics-server, go ahead and download the manifest from GitHub:
[archy@kube01 ~]$ curl -k -L -o kubernetes/metrics-server.yml -X GET 'https://github.com/kubernetes-sigs/metrics-server/releases/latest/download/components.yaml'
To enable the metrics-server to talk to the worker nodes, whose kubelets typically serve self-signed certificates, add the 'kubelet-insecure-tls' option to the container args in the manifest and then apply it:
[archy@kube01 ~]$ vim kubernetes/metrics-server.yml
spec:
  containers:
  - args:
    - --cert-dir=/tmp
    - --secure-port=443
    - --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname
    - --kubelet-use-node-status-port
    - --metric-resolution=15s
++  - --kubelet-insecure-tls
[archy@kube01 ~]$ kubectl apply -f kubernetes/metrics-server.yml
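Before moving on, it may be worth confirming that the metrics pipeline actually came up. A quick check (once the deployment has finished rolling out) could look like this:

```shell
# Confirm the metrics-server deployment is available
kubectl -n kube-system get deployment metrics-server

# Node metrics should appear after a minute or so; an error here
# usually means the kubelet TLS flag above is missing or the
# metrics-server pod cannot reach the kubelets
kubectl top nodes
```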
Now that the metrics-server is deployed, you can create the autoscaler:
[archy@kube01 ~]$ kubectl autoscale deployments/speedtest --min 2 --max 4 --cpu-percent 80
This creates a HorizontalPodAutoscaler that keeps at least 2 and at most 4 replicas, scaling based on the pods' average CPU utilization. Note that the CPU percentage is measured relative to the CPU requests defined on the deployment's pods, so the target deployment must set resource requests for the autoscaler to work.
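Since the HPA compares observed CPU usage against the CPU requests of the target pods, the speedtest deployment's pod template needs a requests block. A hypothetical sketch (the image name and values here are assumptions, not taken from the actual deployment):

```yaml
# Hypothetical snippet from the speedtest deployment's pod template;
# the HPA's --cpu-percent target is measured against these requests.
spec:
  containers:
  - name: speedtest
    image: speedtest:latest   # assumed image name
    resources:
      requests:
        cpu: 100m             # with a 80% target, scaling kicks in above ~80m average usage
```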
Here's the autoscaling in yaml format:
---
apiVersion: autoscaling/v1
kind: HorizontalPodAutoscaler
metadata:
  name: speedtest
  namespace: default
spec:
  maxReplicas: 4
  minReplicas: 2
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: speedtest
  targetCPUUtilizationPercentage: 80
Check that the autoscaler is present as expected:
[archy@kube01 ~]$ kubectl get hpa -o wide
NAME REFERENCE TARGETS MINPODS MAXPODS REPLICAS AGE
speedtest Deployment/speedtest 0%/80% 2 4 2 7h49m
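Under the hood, the HPA controller picks the replica count as desiredReplicas = ceil(currentReplicas * currentMetricValue / targetMetricValue), then clamps it to the min/max bounds. A quick sketch of that arithmetic in shell, using assumed example numbers:

```shell
# HPA scaling formula: desiredReplicas = ceil(current * usage / target)
current_replicas=2
current_cpu=160   # hypothetical observed average CPU utilization (%)
target_cpu=80     # target from --cpu-percent
# integer ceiling division
desired=$(( (current_replicas * current_cpu + target_cpu - 1) / target_cpu ))
echo "$desired"
```

With these numbers the controller would scale to 4 replicas, which still falls within the configured bounds of 2 and 4.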
This concludes the autoscaler setup for Kubernetes.
Feel free to comment and / or suggest a topic.