First, we need to create two EC2 instances: a t2.medium for the master node and a t2.micro for the worker node. We also need to open the required ports in their security group so the nodes can communicate (the API server listens on 6443, and the NodePort range 30000-32767 is used later to expose Nginx).
Run all of the commands below on both the master and worker nodes:
sudo apt update -y
sudo apt install docker.io -y
sudo systemctl start docker
sudo systemctl enable docker
sudo curl -fsSLo /usr/share/keyrings/kubernetes-archive-keyring.gpg https://packages.cloud.google.com/apt/doc/apt-key.gpg
echo "deb [signed-by=/usr/share/keyrings/kubernetes-archive-keyring.gpg] https://apt.kubernetes.io/ kubernetes-xenial main" | sudo tee /etc/apt/sources.list.d/kubernetes.list
sudo apt update -y
sudo apt install kubeadm=1.20.0-00 kubectl=1.20.0-00 kubelet=1.20.0-00 -y
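Optionally, you can hold these packages at the pinned version so a routine apt upgrade does not silently move the cluster off 1.20.0 later (apt-mark hold is a standard apt feature; this step is an optional addition):
sudo apt-mark hold kubeadm kubectl kubelet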
Run these commands on the master node.
sudo su
kubeadm init
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
kubectl apply -f https://github.com/weaveworks/weave/releases/download/v2.8.1/weave-daemonset-k8s.yaml
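Before continuing, it's worth confirming that the control plane is healthy and the Weave pods have started:
kubectl get nodes
kubectl get pods -n kube-system
The master node should report Ready once the Weave CNI pods are running.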
sudo apt-get update
sudo apt-get -y install containerd
kubeadm token create --print-join-command
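This prints the join command for the cluster. The output follows this general shape (the IP, token, and hash below are placeholders, not real values):
kubeadm join <master-private-ip>:6443 --token abcdef.0123456789abcdef --discovery-token-ca-cert-hash sha256:<hash>
Copy it somewhere safe; we will paste it on the worker node shortly.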
Run these commands on the worker node.
sudo su
sudo apt-get update
sudo apt-get -y install containerd
sudo kubeadm reset
Running kubeadm reset clears any leftover cluster state so the join command's pre-flight checks pass.
Now paste the join command printed on the master onto the worker node, appending --v=5 for verbose output.
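For example, using the placeholder values from the master's output above, the worker command would look like this (sudo is required because joining modifies system state):
sudo kubeadm join <master-private-ip>:6443 --token abcdef.0123456789abcdef --discovery-token-ca-cert-hash sha256:<hash> --v=5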
Note: Containerd is a container runtime that provides a set of high-level APIs to manage the lifecycle of container images and containers. Kubernetes is a container orchestration platform that automates the deployment, scaling, and management of containerized applications.
Yeah! We have successfully created the Kubernetes cluster through kubeadm.
Now it's time to deploy Nginx on the Kubernetes cluster, which requires two files: deployment.yaml and service.yaml. The deployment.yaml file defines the desired state of the Nginx deployment, including the number of replicas and the container image to use. The service.yaml file specifies how the deployment is exposed to the rest of the cluster by defining the network endpoints and ports. Together, these files give Kubernetes everything it needs to run and expose Nginx as a reliable, scalable web server.
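The post doesn't reproduce the files themselves, so here is a minimal sketch of what they might contain, assuming the stock nginx image, two replicas, and NodePort 30007 (chosen to match the curl check further down). The names nginx-deployment and nginx-service are illustrative.

deployment.yaml:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
spec:
  replicas: 2                   # two identical Nginx pods
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
        - name: nginx
          image: nginx:latest   # stock Nginx image from Docker Hub
          ports:
            - containerPort: 80

service.yaml:

apiVersion: v1
kind: Service
metadata:
  name: nginx-service
spec:
  type: NodePort                # expose on a static port on every node
  selector:
    app: nginx                  # route traffic to pods with this label
  ports:
    - port: 80                  # in-cluster service port
      targetPort: 80            # container port on the Nginx pods
      nodePort: 30007           # external port; matches the curl check below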
kubectl apply -f deployment.yaml
kubectl apply -f service.yaml
kubectl get svc
kubectl get pods
kubectl cluster-info
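If everything applied cleanly, kubectl get svc should show the NodePort mapping, along these lines (names, IPs, and ages will differ; this sample assumes the sketch manifests above):

NAME            TYPE       CLUSTER-IP     EXTERNAL-IP   PORT(S)        AGE
nginx-service   NodePort   10.104.12.34   <none>        80:30007/TCP   1m

The 80:30007/TCP entry confirms that port 30007 on the node forwards to the Nginx pods.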
Now check the deployment locally, using the node's public IP and the NodePort:
curl 52.66.204.133:30007
After verifying Nginx's functionality locally, we can now test it globally using ngrok. Ngrok securely exposes a locally running web server to the internet, making it possible to test and debug web applications from anywhere. With it, we can verify Nginx's accessibility and responsiveness from outside AWS, confirming that it performs well in a real-world scenario.
Commands to install ngrok (on the master node):
sudo snap install ngrok
ngrok config add-authtoken <your-ngrok-authtoken>
ngrok http 52.66.204.133:30007
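Once started, ngrok prints a forwarding line similar to the one below (the subdomain is random, and the exact domain depends on your ngrok version); opening that URL from any machine should show the Nginx welcome page:

Forwarding  https://<random-id>.ngrok.io -> http://52.66.204.133:30007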
In conclusion, we have connected the master and worker nodes using kubeadm, deployed Nginx on our Kubernetes cluster, and verified the deployment both locally and globally using ngrok. This showcases the power and flexibility of Kubernetes as a container orchestration platform and highlights the importance of efficient, scalable web serving in modern software development.