
#ThreeTierAppChallenge

Let's start today's project. It's simple.


Three Tier Project. The challenge involves deploying a Three-Tier Web Application using ReactJS, NodeJS, and MongoDB, with deployment on AWS EKS.
Three tier means your application is divided into multiple tiers. One is the Web/Presentation or Frontend tier. Second is the Logic tier, where all the business logic, APIs, and backend servers live. And the third one is the Database tier.
So in short, we have ReactJS for the frontend, NodeJS for the backend, and MongoDB for the database. Can anyone tell me why we are dockerising these tools? For ease of deployment: all your packages are bundled in one container, and you can run it wherever you want.
Now for ReactJS and NodeJS we will write a Dockerfile > image > container. For MongoDB an image is already available online, no need to create one. The yellow logo after Docker in the diagram represents AWS ECR, which is used to store the images. Next we need to deploy it all on K8s, so we need a Kubernetes cluster, and there are many ways to create one: Minikube, EKS, kubeadm, AKS, GKE, etc. Inside the cluster there will be three sets of Kubernetes deployments, named Frontend, Backend, and Database, and all three will communicate internally through Services. Now suppose someone from outside wants to access this: for that I would need an Ingress Controller, and for routing I would use an ALB (Application Load Balancer). We will also look at HPA (Horizontal Pod Autoscaling) and rolling updates. This is just the flow of how the code will come from GitHub and be processed further.
Just fork the repo:
https://github.com/LondheShubham153/TWSThreeTierAppChallenge
Now we need a workstation to deploy all of this. We will use an EC2 instance for the purpose. So create an EC2 server with the t2.micro configuration, connect to it over SSH, and git clone the repo.
Our first aim: create a Dockerfile, build an image from it, and push that image to ECR. For that we must know how to write a Dockerfile. To write one from scratch, we will remove the existing Dockerfile in the frontend folder and create a new one with 'vim Dockerfile':
# Official Node.js 14 base image (note: lowercase 'node', not 'Node')
FROM node:14
WORKDIR /app
# Copy dependency manifests first so 'npm install' is cached across builds
COPY package*.json ./
RUN npm install
COPY . .
# Start the React dev server
CMD ["npm", "start"]
We also need to install Docker on the instance:
sudo apt install -y docker.io
Running docker ps now gives a 'permission denied' error, because our user cannot access the Docker socket. A quick fix:
sudo chown $USER /var/run/docker.sock
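The chown trick works for now, but it can get reset when Docker restarts. A more durable alternative (my suggestion, not part of the original walkthrough) is to add your user to the docker group:
sudo usermod -aG docker $USER
newgrp docker   # open a shell where the new group membership is active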

With the Dockerfile in place, build the image:
docker build -t three-tier-frontend .
docker images
Now run the image as a container:
docker run -d -p 3000:3000 three-tier-frontend:latest

Also open port 3000 in the security group's inbound rules. Then check in the browser with the instance's public IPv4 address and port 3000. If the application is running, your configuration is correct.
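You can also verify from the terminal; the address below is a placeholder for your instance's public IPv4:
curl http://<ec2-public-ip>:3000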
Now, you must install AWS CLI V2
curl "https://awscli.amazonaws.com/awscli-exe-linux-x86_64.zip" -o "awscliv2.zip"
sudo apt install unzip
unzip awscliv2.zip
sudo ./aws/install -i /usr/local/aws-cli -b /usr/local/bin --update
aws configure
For this, also create an IAM configuration:
Create a user eks-admin with the AdministratorAccess policy.
Generate security credentials: an Access Key and a Secret Access Key, and enter them in aws configure.

Now I need to create an ECR repository from the AWS console.
Inside the created repository (three-tier-frontend), click on the 'View push commands' button and follow the instructions exactly as written, command by command.
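For reference, ECR push commands typically look like the following; the region and account ID here are placeholders, so copy the real ones from the console:
aws ecr get-login-password --region <region> | docker login --username AWS --password-stdin <account-id>.dkr.ecr.<region>.amazonaws.com
docker build -t three-tier-frontend .
docker tag three-tier-frontend:latest <account-id>.dkr.ecr.<region>.amazonaws.com/three-tier-frontend:latest
docker push <account-id>.dkr.ecr.<region>.amazonaws.com/three-tier-frontend:latest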

Now, remove the Dockerfile in the backend folder so we can write it from scratch.


vim Dockerfile
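The backend Dockerfile isn't reproduced in these notes; a minimal sketch, assuming a Node.js API that listens on port 8080 (the port we map below):
# Same Node.js base as the frontend
FROM node:14
WORKDIR /app
# Install dependencies first to use the layer cache
COPY package*.json ./
RUN npm install
COPY . .
# Assumption: the API serves on 8080
EXPOSE 8080
CMD ["npm", "start"]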
After saving that file, create a new repo in ECR with the name 'three-tier-backend' and repeat the same commands from its 'View push commands' page.

docker build -t three-tier-backend .


This one's port number is 8080:
docker run -d -p 8080:8080 three-tier-backend:latest
Running docker ps, I can see that my container is up and running.

To confirm, check the logs:
docker logs 71f0422a0990
An error appears: 'could not connect to Database'. That's to be expected, since we haven't created the database anywhere yet. Let's push this container image to ECR anyway.
Now let's move ahead and create a Kubernetes cluster. We will create it with Minikube.
Minikube Installation - Step by step
Create a new instance on EC2:
EC2 >> Launch Instance >> MinikubeServer >> Ubuntu >> instance type - t2.medium >> keypair - linuxfordevops >> Launch instance
t2.medium's 2 vCPUs and 4 GB of RAM are sufficient.
Open a terminal >> cd Downloads >> paste the SSH command to connect to the t2.medium instance.

Pre-requisites:
Ubuntu OS
sudo privileges
Internet access
Virtualization support enabled (check with egrep -c '(vmx|svm)' /proc/cpuinfo; 0 = disabled, non-zero = enabled)

Step 1 - Update System Packages


First of all you need to update your instance.
>> sudo apt update

Step 2 - Install Required Packages


Minikube pulls a lot of things over the network, so install the required network packages with the command below.
>> sudo apt install -y curl wget apt-transport-https
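Step 3 - Install Docker
Step 6 below starts Minikube with the Docker driver, so Docker must be installed on this new instance too; assuming the same Ubuntu package as before:
>> sudo apt install -y docker.io
>> sudo usermod -aG docker $USER && newgrp docker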
Step 4: Install Minikube
First, download the Minikube binary using curl (it is hosted on Google's release storage):
>> curl -Lo minikube https://storage.googleapis.com/minikube/releases/latest/minikube-linux-amd64
To run minikube commands, make the binary executable and move it into a directory on your PATH:
>> chmod +x minikube
>> sudo mv minikube /usr/local/bin/

Step 5: Install kubectl


Download kubectl, the Kubernetes command-line tool:
>> curl -LO "https://dl.k8s.io/release/$(curl -L -s https://dl.k8s.io/release/stable.txt)/bin/linux/amd64/kubectl"
Now, make it executable and move it into your PATH to use kubectl:
>> chmod +x kubectl
>> sudo mv kubectl /usr/local/bin/
Step 6: Start Minikube
Now you can start Minikube with the following command, but to run it Minikube needs a driver, e.g. VirtualBox or Docker. We already have Docker installed.
>> minikube start --driver=docker
This command will start a single-node Kubernetes cluster inside a Docker container.
Step 7: Check Cluster Status
To check the cluster status, use the command below:
>> minikube status

You can also use kubectl to interact with your cluster:


>> kubectl get po -A

And you will get all the Kubernetes components listed out. The cluster is ready.
Now observe the Database folder. After carefully looking at all three YAML files we can say that they are interdependent and all need to be deployed. deployment.yaml runs the MongoDB containers, secrets.yaml provides the username and password MongoDB needs, and service.yaml exposes MongoDB so the other applications can reach it. Let's deploy the Database tier. First create a namespace with kubectl create namespace three-tier, then apply all three files inside the Database folder.
kubectl apply -f deployment.yaml
kubectl apply -f secrets.yaml
kubectl apply -f service.yaml
mongodb-svc created
If no type is assigned to a Service, it defaults to 'ClusterIP', which is reachable only from inside the cluster.
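For reference, a minimal sketch of what such a Secret and ClusterIP Service can look like; the names, keys, and credential values here are illustrative, not copied from the repo:
apiVersion: v1
kind: Secret
metadata:
  name: mongo-sec
  namespace: three-tier
type: Opaque
stringData:
  username: admin        # placeholder credentials
  password: password123
---
apiVersion: v1
kind: Service
metadata:
  name: mongodb-svc
  namespace: three-tier
spec:
  selector:
    app: mongodb         # assumed pod label
  ports:
  - port: 27017          # MongoDB's default port
    targetPort: 27017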

Now go to Kubernetes-Manifest-file/Backend and open the deployment.yaml file. In it, under spec, the strategy type is RollingUpdate. A rolling update is a method of updating a deployed application or service in a way that ensures continuous availability and minimal disruption to users; in containerized environments, rolling updates are commonly used to update applications running in Pods. Further down, under the container spec, we need to change image: to the path of our backend image in ECR.
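A minimal sketch of such a Deployment; the replica count, surge settings, and image URI are placeholders, not the repo's exact values:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: api
  namespace: three-tier
spec:
  replicas: 2
  selector:
    matchLabels:
      app: api
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 1   # keep at least one Pod serving during an update
      maxSurge: 1         # allow one extra Pod while the new version rolls out
  template:
    metadata:
      labels:
        app: api
    spec:
      containers:
      - name: api
        image: <account-id>.dkr.ecr.<region>.amazonaws.com/three-tier-backend:latest
        ports:
        - containerPort: 8080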
Now just apply the deployment file of the backend and check the logs:
kubectl apply -f backend/deployment.yaml
kubectl logs api-64fb8488f8-xq4mk -n three-tier

Similarly, walk through the Frontend deployment.yaml and service.yaml files and deploy them:
kubectl apply -f Frontend/deployment.yaml
kubectl apply -f Frontend/service.yaml

Now, running kubectl get pods with the namespace gives us all the running pods:
kubectl get pods -n three-tier

To expose these three running tiers to the outside world we need to apply ingress.yaml, and for that you need an ingress controller. That's a tricky part in itself: it depends on manifest files that nobody writes by hand, so a package manager named Helm is used to install the ingress controller. On EKS that controller is the AWS Load Balancer Controller, and it needs some IAM setup first. Step 9 has all the commands.
curl -O https://raw.githubusercontent.com/kubernetes-sigs/aws-load-balancer-controller/v2.5.4/docs/install/iam_policy.json
aws iam create-policy --policy-name AWSLoadBalancerControllerIAMPolicyForEKS --policy-document file://iam_policy.json
Creating this policy is what gives the cluster and the load balancer connectivity.
eksctl utils associate-iam-oidc-provider --region=us-west-2 --cluster=three-tier-cluster --approve
This OIDC provider is a kind of gate, the lock and key that approves the load balancer controller.
eksctl create iamserviceaccount --cluster=three-tier-cluster --namespace=kube-system --name=aws-load-balancer-controller --role-name AmazonEKSLoadBalancerControllerRoleForEKS --attach-policy-arn=arn:aws:iam::626072240565:policy/AWSLoadBalancerControllerIAMPolicyForEKS --approve --region=us-west-2
With this service account, the controller gets the cluster access it needs.

Now to install helm follow step 10.


sudo snap install helm --classic
helm repo add eks https://aws.github.io/eks-charts
helm repo update eks
helm install aws-load-balancer-controller eks/aws-load-balancer-controller -n kube-system --set clusterName=three-tier-cluster --set serviceAccount.create=false --set serviceAccount.name=aws-load-balancer-controller
kubectl get deployment -n kube-system aws-load-balancer-controller
kubectl apply -f ingress.yaml
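The repo's ingress.yaml isn't reproduced here; a minimal sketch of an ALB ingress for this host, where the Service name and port are placeholders:
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: mainlb
  namespace: three-tier
  annotations:
    alb.ingress.kubernetes.io/scheme: internet-facing   # public ALB
    alb.ingress.kubernetes.io/target-type: ip
spec:
  ingressClassName: alb
  rules:
  - host: challenge.trainwithshubham.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: frontend   # placeholder Service name
            port:
              number: 3000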

After applying the ingress we got both the host, challenge.trainwithshubham.com, and its address.


Now for this I will go to the GoDaddy website and create a subdomain with the name challenge.trainwithshubham.com, pointing it at the load balancer's address.

Demonstrate the database by adding some data through the application and showing it appear in MongoDB.

Now let's talk about HPA (Horizontal Pod Autoscaler).


What is a Horizontal Pod Autoscaler in Kubernetes?
Take, for example, a Big Billion Day offer in an application. On a normal day, fewer users access the application; on Big Billion Days, because there is an offer, many more users do. When that many users are hitting our application, one pod is not sufficient to handle the incoming traffic. Instead of increasing and decreasing the pods manually, we can configure a Horizontal Pod Autoscaler that will raise and lower the pod count based on demand.
So how does the Horizontal Pod Autoscaler work? Picture a single pod before HPA scaling and three pods after it. First we need to install a metrics server in our K8s cluster. The metrics server monitors resource usage against the requests and limits we have configured, and feeds CPU and memory utilization figures to the Horizontal Pod Autoscaler. If the configured threshold gets crossed, pods are automatically scaled up, and vice versa.
In the repository there are two folders, named k8s_metrics_server-main and kubernetes_manifest_yml_files-main.

Deploy all the metrics server files.
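Assuming the manifests sit at the top level of that folder, this is one way to apply them all at once:
kubectl apply -f k8s_metrics_server-main/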

kubectl get nodes will show the two nodes. Also run kubectl top node; it shows the same two nodes along with their CPU and memory usage.
Now go to kubernetes_manifest_yml_files-main folder > 05-HPA

cat 01_Deployment.yml

cat 02_Service.yml
cat 03_HPA.yml
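The file contents aren't reproduced in these notes; a minimal HPA manifest for a demo like this could look as follows (the thresholds and replica bounds are assumptions; only the Deployment name hpa-demo-deployment is taken from the load test below):
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: hpa-demo
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: hpa-demo-deployment
  minReplicas: 1
  maxReplicas: 10
  metrics:
  - type: Resource
    resource:
      name: cpu
      target:
        type: Utilization
        averageUtilization: 50   # scale out once average CPU crosses 50%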

Now deploy all the files 01_Deployment.yml, 02_Service.yml, 03_HPA.yml


kubectl apply -f 01_Deployment.yml ; kubectl get deploy
kubectl apply -f 02_Service.yml ; kubectl get svc
kubectl apply -f 03_HPA.yml ; kubectl get hpa
To check that the deployment is successful and to generate load against it, we will use the 'BusyBox' image with this command:
kubectl run -i --tty load-generator --rm --image=busybox --restart=Never -- /bin/sh -c "while sleep 0.01; do wget -q -O- http://hpa-demo-deployment; done"

This increases the load by simulating traffic. In a duplicate terminal, try kubectl get pods and kubectl get hpa to watch the scaling. Now stop the load with Ctrl-C: the load percentage will decrease gradually and the corresponding pods will also scale back down.
