Setting up a Kubernetes cluster using kubeadm on vSphere virtual machines

Giridharaprasad
Jan 27, 2020

In this article, let me walk through my experience of setting up a Kubernetes cluster on virtual machines created on VMware vSphere.

Create Virtual Machines

Let us create the required number of virtual machines for the cluster, using the preferred operating system. Here, I am going with Ubuntu 18.04.3. I plan to set up a cluster with a single control-plane (master) node and three worker nodes.

Each node should be equipped with at least 2 GB of memory, 20 GB of disk space and 2 vCPUs. To keep disk usage in VMware optimal, enable thin provisioning while creating the virtual disks.
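If you would rather script the VM creation than click through the vSphere UI, the govc CLI can do it. Below is a rough sketch for one node; the vCenter URL, credentials, guest ID, network name and ISO path are all placeholders for your environment.

# Hypothetical example: create one node with govc instead of the vSphere UI.
# The GOVC_* values, guest ID, network and ISO path must be adapted to your setup.
export GOVC_URL='https://vcenter.example.com' GOVC_USERNAME='administrator@vsphere.local'
export GOVC_PASSWORD='<password>' GOVC_INSECURE=1

# 2 GB RAM, 2 vCPUs, 20 GB disk; verify in vSphere that the disk is thin-provisioned
govc vm.create -m 2048 -c 2 -disk 20GB -g ubuntu64Guest \
  -net "VM Network" -iso "iso/ubuntu-18.04.3-live-server-amd64.iso" \
  k8s-host1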

Customise the virtual machines with the preferred configuration and boot them from the OS ISO. Once the virtual machines are created successfully, go ahead with the steps below to configure a Kubernetes cluster.

Setup Networking

Based on your networking solution, configure the network settings on the virtual machines and ensure that all the machines can reach one another.
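How you do this depends on your environment. On Ubuntu 18.04, one common approach is a static address via netplan; a minimal sketch, assuming an interface named ens192 (typical for VMXNET3 adapters) and an example 10.0.0.0/24 network:

# Hypothetical static-IP config; adjust the interface name and addresses to your network
cat <<EOF | sudo tee /etc/netplan/01-k8s.yaml
network:
  version: 2
  ethernets:
    ens192:
      addresses: [10.0.0.11/24]
      gateway4: 10.0.0.1
      nameservers:
        addresses: [10.0.0.1]
EOF
sudo netplan apply
ping -c 3 10.0.0.10   # verify another node is reachable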

Set up hostname (optional)

Set a meaningful hostname on each node if necessary.

sudo hostnamectl set-hostname <hostname>

Reboot the machine to make the change effective.
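If you are not running DNS for these VMs, it also helps to add every node to /etc/hosts on each machine so they can resolve one another by hostname. The addresses below are placeholders matching the earlier example network:

cat <<EOF | sudo tee -a /etc/hosts
10.0.0.10 master
10.0.0.11 host1
10.0.0.12 host2
10.0.0.13 host3
EOF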

Enable ssh on the machines

If ssh is not already configured, install openssh-server on the virtual machines and verify connectivity between them.

sudo apt-get install openssh-server -y
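Once installed, make sure the service is enabled, then test connectivity from another node; the user and address below are examples.

sudo systemctl enable --now ssh
ssh k8s@10.0.0.11 hostname   # should print the remote machine's hostname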

Disable swap on the virtual machines

As a superuser, disable swap on all the machines by executing the below command.

swapoff -a

To disable swap permanently, comment out the swap entry in the /etc/fstab file.
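Instead of editing the file by hand, a one-liner like the following can comment the swap line out. This is a sketch: it keeps a backup at /etc/fstab.bak, and you should review the result before rebooting.

sudo sed -i.bak '/\sswap\s/ s/^/#/' /etc/fstab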

This can be verified using the following command.

root@host1:~# free -h
              total        used        free      shared  buff/cache   available
Mem:           7.8G        990M        6.0G         13M        797M        6.6G
Swap:          2.0G          0B        2.0G

Note: This has to be done on all the machines.

Install Necessary Packages

Let us install `curl` and `apt-transport-https` on all the machines.

sudo apt-get update && sudo apt-get install -y apt-transport-https curl

Obtain the key for the Kubernetes repository and add it to apt's keyring by executing the below command.

root@host1:~# curl -s https://packages.cloud.google.com/apt/doc/apt-key.gpg | sudo apt-key add -
OK

After adding the key, execute the below command to add the Kubernetes repository to your system.

cat <<EOF | sudo tee /etc/apt/sources.list.d/kubernetes.list
deb https://apt.kubernetes.io/ kubernetes-xenial main
EOF

kubeadm, kubectl and kubelet installation

After adding the repository, install kubeadm, kubelet and kubectl on all the machines.

sudo apt-get update
sudo apt-get install -y kubelet kubeadm kubectl

After installing the above packages, hold them at their current versions by executing the following command, so that automatic upgrades cannot change them unexpectedly.

root@host1:~# sudo apt-mark hold kubelet kubeadm kubectl
kubelet set on hold.
kubeadm set on hold.
kubectl set on hold.

Install Container Runtime

Each node needs a container runtime for the kubelet to manage containers (the kubelet talks to it through the Container Runtime Interface, CRI). In this setup, I will install Docker by executing the below command.

sudo apt-get install docker.io -y
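After installing Docker, enable the service and set the bridge networking kernel parameters that kubeadm's preflight checks expect; these come straight from the standard kubeadm prerequisites.

sudo systemctl enable --now docker

# Let iptables see bridged traffic (needed by most CNI plugins)
sudo modprobe br_netfilter
cat <<EOF | sudo tee /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-iptables  = 1
net.bridge.bridge-nf-call-ip6tables = 1
EOF
sudo sysctl --system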

Install Control plane

On the master node, execute the kubeadm init command to deploy the control-plane components. The --pod-network-cidr value should match the pod CIDR expected by the CNI plugin you intend to install; Calico's manifest defaults to 192.168.0.0/16.

sudo kubeadm init --pod-network-cidr=192.168.0.0/16

When the command completes successfully, it prints a kubeadm join command to be executed on all the worker nodes to join them to the master.

Worker Nodes

After configuring the master node successfully, configure the worker nodes by executing the join command displayed on the master node.

kubeadm join x.x.x.x:6443 --token <token> \
    --discovery-token-ca-cert-hash <hash>
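If the join command scrolls away, or its token expires (bootstrap tokens are valid for 24 hours by default), a fresh one can be printed on the master at any time:

sudo kubeadm token create --print-join-command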

Accessing the Cluster

You can communicate with the cluster using the kubectl command-line interface. kubectl reads the cluster config (kubeconfig) file from the home directory of the user running it. Once the cluster is created, a file named admin.conf is generated in the /etc/kubernetes directory; this file has to be copied to $HOME/.kube/config for the target user.

Execute the below commands as the non-root user who should have access to the cluster.

mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
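Alternatively, for the root user it is enough to point KUBECONFIG at the admin config, as kubeadm itself suggests in its output:

export KUBECONFIG=/etc/kubernetes/admin.conf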

After setting up the kubeconfig file, check the node status. All the machines will be in the NotReady state.

k8s@master:~$ kubectl get nodes
NAME     STATUS     ROLES    AGE     VERSION
master   NotReady   master   5m41s   v1.17.2
host1    NotReady   <none>   3m2s    v1.17.2
host2    NotReady   <none>   2m58s   v1.17.2
host3    NotReady   <none>   2m54s   v1.17.2

You can also observe that the coredns pods have not started; they remain Pending until a pod network is installed.

NAMESPACE     NAME                             READY   STATUS    RESTARTS   AGE
kube-system   coredns-6955765f44-9nlw5         0/1     Pending   0          4m33s
kube-system   coredns-6955765f44-wjxj2         0/1     Pending   0          4m33s
kube-system   etcd-master                      1/1     Running   0          4m45s
kube-system   kube-apiserver-master            1/1     Running   0          4m45s
kube-system   kube-controller-manager-master   1/1     Running   0          4m45s
kube-system   kube-proxy-bzcbw                 1/1     Running   0          2m6s
kube-system   kube-proxy-clmpz                 1/1     Running   0          2m14s
kube-system   kube-proxy-crx5v                 1/1     Running   0          4m32s
kube-system   kube-proxy-xcmlv                 1/1     Running   0          2m10s
kube-system   kube-scheduler-master            1/1     Running   0          4m45s

This is resolved by deploying a CNI network plugin in the cluster. Here, I will deploy Calico by executing the following command on the master node.

kubectl apply -f https://docs.projectcalico.org/v3.8/manifests/calico.yaml
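The Calico pods take a minute or two to come up; they can be watched until everything is Running:

kubectl get pods -n kube-system --watch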

Within the next few minutes, the nodes will become ready. Check the node status to confirm.

k8s@master:~$ kubectl get nodes
NAME     STATUS   ROLES    AGE   VERSION
master   Ready    master   50m   v1.17.2
host1    Ready    <none>   47m   v1.17.2
host2    Ready    <none>   47m   v1.17.2
host3    Ready    <none>   47m   v1.17.2

You can check the cluster state by executing the following command.

k8s@master:~$ kubectl get pods --all-namespaces
NAMESPACE     NAME                                       READY   STATUS    RESTARTS   AGE
default       abc1-b95b76d84-2qmhw                       1/1     Running   0          2m41s
kube-system   calico-kube-controllers-5c45f5bd9f-r9rxj   1/1     Running   0          4m59s
kube-system   calico-node-bd4tx                          1/1     Running   0          5m
kube-system   calico-node-lxk75                          1/1     Running   0          5m
kube-system   calico-node-zmnn4                          1/1     Running   0          5m
kube-system   calico-node-zzvhk                          1/1     Running   0          5m
kube-system   coredns-6955765f44-9nlw5                   1/1     Running   0          10m
kube-system   coredns-6955765f44-wjxj2                   1/1     Running   0          10m
kube-system   etcd-master                                1/1     Running   0          10m
kube-system   kube-apiserver-master                      1/1     Running   0          10m
kube-system   kube-controller-manager-master             1/1     Running   0          10m
kube-system   kube-proxy-bzcbw                           1/1     Running   0          8m19s
kube-system   kube-proxy-clmpz                           1/1     Running   0          8m27s
kube-system   kube-proxy-crx5v                           1/1     Running   0          10m
kube-system   kube-proxy-xcmlv                           1/1     Running   0          8m23s
kube-system   kube-scheduler-master                      1/1     Running   0          10m

Now the Kubernetes cluster has been created successfully. You can verify this by creating a deployment.

k8s@master:~$ kubectl create deploy nginx --image=nginx
deployment.apps/nginx created

You can check the pod status by executing the below command.

k8s@master:~$ kubectl get pods
NAME                     READY   STATUS    RESTARTS   AGE
nginx-86c57db685-rpzm2   1/1     Running   0          70s
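To exercise the cluster network end to end, the deployment can also be exposed as a NodePort service and curled from any node. The port number below is just an example of what Kubernetes might assign.

kubectl expose deploy nginx --port=80 --type=NodePort
kubectl get svc nginx           # note the assigned NodePort, e.g. 80:31234/TCP
curl http://<node-ip>:31234     # should return the nginx welcome page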

Deleting the Cluster

The Kubernetes cluster can be torn down by executing the below command on each node.

sudo kubeadm reset
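kubeadm reset does not clean up everything; as its own output points out, CNI configuration, iptables rules and the copied kubeconfig have to be removed manually, for example:

sudo rm -rf /etc/cni/net.d
sudo iptables -F && sudo iptables -t nat -F && sudo iptables -t mangle -F && sudo iptables -X
rm -rf $HOME/.kube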

With that, the cluster is fully removed.

Happy learning…
