Development Kubernetes cluster with Kind
In this series of articles, we will install a multi-node Kubernetes cluster on a single machine (for example, our personal computer) using Kind. Kind creates a full Kubernetes cluster on top of our existing Docker installation: each Kubernetes node runs as a Docker container, and the Kubernetes objects (pods, services, etc.) run inside those containers.
The main idea of this series is to create a cluster that allows us to test all the scenarios needed to pass the CKAD certification.
We will configure the basics of the Kubernetes cluster (the cluster itself, the CNI, the dashboard, the metrics server, etc.), plus some additional features such as an nginx ingress controller and Prometheus.
Common tools
First of all, we need Docker installed; then we download the needed utilities (kubectl, helm, kind, k9s, and hey) into ~/bin (the snippet below also assumes curl and jq are available):
mkdir -p ~/bin   # target directory; it must also be on the PATH

# kubectl: latest stable release
KUBECTL_VERSION=$(curl -L -s https://dl.k8s.io/release/stable.txt)
curl -sSfL -o ~/bin/kubectl "https://dl.k8s.io/release/${KUBECTL_VERSION}/bin/linux/amd64/kubectl"
chmod +x ~/bin/kubectl

# helm: latest release, extracted from the official tarball
HELM_VERSION=$(curl -sSfL https://api.github.com/repos/helm/helm/releases/latest | jq -r '.tag_name')
curl -sSfL https://get.helm.sh/helm-${HELM_VERSION}-linux-amd64.tar.gz | tar --strip-components=1 -C ~/bin -zxf - linux-amd64/helm

# kind: latest release
KIND_VERSION=$(curl -sSfL https://api.github.com/repos/kubernetes-sigs/kind/releases/latest | jq -r '.tag_name')
curl -sSfL -o ~/bin/kind https://kind.sigs.k8s.io/dl/${KIND_VERSION}/kind-linux-amd64
chmod +x ~/bin/kind

# k9s: terminal UI for Kubernetes
K9S_VERSION=$(curl -sSfL https://api.github.com/repos/derailed/k9s/releases/latest | jq -r '.tag_name')
curl -sSfL https://github.com/derailed/k9s/releases/download/${K9S_VERSION}/k9s_Linux_x86_64.tar.gz | tar -C ~/bin -zxf - k9s

# hey: simple HTTP load generator
curl -sSfL -o ~/bin/hey https://hey-release.s3.us-east-2.amazonaws.com/hey_linux_amd64
chmod +x ~/bin/hey
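Before continuing, it is worth a quick sanity check that the binaries are reachable (this assumes ~/bin is on your PATH; the reported versions will vary):
export PATH="$HOME/bin:$PATH"
kubectl version --client
helm version
kind version
k9s version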
Kind cluster
Now, we create the Kind cluster, disabling the default CNI plugin (kind includes kindnet as its default CNI), as we will install Calico so we can use NetworkPolicies, which are not supported by kindnet:
cat <<'EOF' | kind create cluster --config=-
kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
networking:
  disableDefaultCNI: true
  podSubnet: 192.168.0.0/16
containerdConfigPatches:
  - |-
    [plugins."io.containerd.grpc.v1.cri".registry.mirrors."docker.io"]
      endpoint = ["https://mirror.gcr.io"]
nodes:
  - role: control-plane
    image: kindest/node:v1.24.4
  - role: worker
    image: kindest/node:v1.24.4
    kubeadmConfigPatches:
      - |
        kind: JoinConfiguration
        nodeRegistration:
          kubeletExtraArgs:
            node-labels: "ingress-ready=true"
    extraPortMappings:
      - containerPort: 32080
        hostPort: 32080
        protocol: TCP
      - containerPort: 32443
        hostPort: 32443
        protocol: TCP
  - role: worker
    image: kindest/node:v1.24.4
  - role: worker
    image: kindest/node:v1.24.4
EOF
This configuration contains:
- A single control plane node.
- 3 worker nodes.
- The first worker node has the label ingress-ready=true, which will later allow us to configure the nginx ingress controller (as this is a test cluster on a single physical machine, we can install and publish the nginx ports used by the ingress controller on just one of the cluster nodes).
- We will use the mirror provided by Google, https://mirror.gcr.io, to avoid the rate limiting imposed by Docker Hub.
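At this point we can verify that each Kubernetes node is indeed running as a Docker container, and that the nodes report NotReady (expected, since no CNI is installed yet). The container names below assume the default cluster name, kind:
docker ps --format '{{.Names}}'           # kind-control-plane, kind-worker, kind-worker2, kind-worker3
kubectl get nodes                         # STATUS is NotReady until a CNI is installed
kubectl get nodes -l ingress-ready=true   # the worker that will host the ingress controller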
Finally, install the Calico CNI plugin and wait for the pods to become ready:
kubectl apply -f https://projectcalico.docs.tigera.io/archive/v3.23/manifests/calico.yaml
kubectl -n kube-system wait --for condition=ready --timeout=600s pod --selector=k8s-app=calico-kube-controllers
kubectl -n kube-system wait --for condition=ready --timeout=600s pod --selector=k8s-app=kube-dns
kubectl -n kube-system wait --for condition=ready --timeout=600s pod --selector=k8s-app=calico-node
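Once the Calico pods are running, the nodes should transition to Ready; we can wait for that explicitly:
kubectl wait --for condition=ready --timeout=600s node --all
kubectl get nodes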
To test our cluster, we can run this sample configuration:
cat <<'EOF' | kubectl apply -f -
apiVersion: apps/v1
kind: Deployment
metadata:
  name: sample-nginx-deployment
  labels:
    app.kubernetes.io/name: sample-nginx-app
spec:
  selector:
    matchLabels:
      app.kubernetes.io/name: sample-nginx-app
  replicas: 3
  template:
    metadata:
      labels:
        app.kubernetes.io/name: sample-nginx-app
    spec:
      containers:
        - name: nginx
          image: nginx:latest
          ports:
            - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: sample-nginx-svc
  labels:
    app.kubernetes.io/name: sample-nginx-app
spec:
  selector:
    app.kubernetes.io/name: sample-nginx-app
  ports:
    - protocol: TCP
      port: 80
EOF
Verify the created objects:
kubectl get pod,deployment,service -l app.kubernetes.io/name=sample-nginx-app
NAME                                        READY   STATUS    RESTARTS   AGE
pod/sample-nginx-deployment-88b4f44-66pt4   1/1     Running   0          84s
pod/sample-nginx-deployment-88b4f44-wxkhf   1/1     Running   0          84s
pod/sample-nginx-deployment-88b4f44-xwvt5   1/1     Running   0          84s

NAME                                      READY   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/sample-nginx-deployment   3/3     3            3           85s

NAME                       TYPE        CLUSTER-IP    EXTERNAL-IP   PORT(S)   AGE
service/sample-nginx-svc   ClusterIP   10.96.12.92   <none>        80/TCP    7s
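As an additional check, the Service should expose one endpoint per running pod (the IPs will differ in your cluster):
kubectl get endpoints sample-nginx-svc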
To access the application, forward a local port to the service:
kubectl port-forward service/sample-nginx-svc 5000:80
Then browse to http://127.0.0.1:5000; you should see the Welcome to nginx! message.
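Since we installed Calico precisely to get NetworkPolicy support, we can sketch a quick in-cluster test. Note that kubectl port-forward bypasses NetworkPolicies (it connects inside the pod's network namespace), so we check from a throwaway pod instead; the policy and pod names below are illustrative, not part of the original setup:
# From a scratch pod, the service answers
kubectl run np-test --rm -it --restart=Never --image=busybox -- \
  wget -qO- -T 5 http://sample-nginx-svc

# Apply a default-deny ingress policy to the sample pods
cat <<'EOF' | kubectl apply -f -
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: sample-nginx-deny-all
spec:
  podSelector:
    matchLabels:
      app.kubernetes.io/name: sample-nginx-app
  policyTypes:
    - Ingress
EOF

# The same request now times out; delete the policy to restore access
kubectl run np-test --rm -it --restart=Never --image=busybox -- \
  wget -qO- -T 5 http://sample-nginx-svc
kubectl delete networkpolicy sample-nginx-deny-all
When finished, the sample objects can be removed with kubectl delete deployment/sample-nginx-deployment service/sample-nginx-svc.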