
06september2020

Launch two EC2 instances – Ubuntu 16.04 – t2.xlarge – security group (SG) allowing all traffic


Run the following commands for both instances:

sudo nano /etc/hostname


Delete the existing entry (the ip-… style hostname) & enter ‘kmaster’ for the master instance & ‘knode’ for the node instance

sudo nano /etc/hosts


Go to the end of the file & add the following lines on both instances:
<private IP of master> kmaster
<private IP of node> knode
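For example, if the master's private IP is 172.31.3.14 (the address used in the kubeadm join command later) and the node's is 172.31.5.27 (a made-up address, substitute your own), the end of /etc/hosts would read:

172.31.3.14 kmaster
172.31.5.27 knode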

sudo nano /etc/ssh/sshd_config


search for ‘PasswordAuthentication’ & change it to ‘yes’
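After the edit, the relevant line in sshd_config should read as follows (uncomment it if it is commented out):

PasswordAuthentication yes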

sudo reboot

Reconnect both PuTTY sessions


ping knode (in kmaster)
ping kmaster (in knode)

Install Docker on both ‘kmaster’ & ‘knode’ following the Docker documentation
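A minimal install sketch: the Docker documentation describes the full apt-repository method, while the distro package used here is an assumed shortcut that is usually enough for this lab:

sudo apt-get update
sudo apt-get install -y docker.io
sudo systemctl enable --now docker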


sudo swapoff -a (the notes run this on kmaster; kubelet expects swap to be disabled, so it is safest to run it on both instances)
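To keep swap disabled across reboots, the usual approach is to comment out any swap entry in /etc/fstab (a sketch; stock EC2 Ubuntu images often have no swap entry, in which case this is a no-op):

sudo sed -i '/ swap / s/^/#/' /etc/fstab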

Run the following commands (as root, e.g. after sudo -i) on both ‘knode’ & ‘kmaster’:

apt-get update && apt-get install -y apt-transport-https curl


curl -s https://packages.cloud.google.com/apt/doc/apt-key.gpg | apt-key add -
cat <<EOF >/etc/apt/sources.list.d/kubernetes.list
deb http://apt.kubernetes.io/ kubernetes-xenial main
EOF

apt-get update
apt-get install -y kubelet kubeadm kubectl
nano /etc/systemd/system/kubelet.service.d/10-kubeadm.conf
reboot

Reconnect the PuTTY sessions & run the following commands in ‘kmaster’:

sudo -i
kubeadm init --apiserver-advertise-address=<private-ip-address-of-kmaster-vm> --ignore-preflight-errors=NumCPU

Copy the output from ‘Your Kubernetes control-plane..’ to the end (it contains the kubeadm join command for the worker) & paste it in Notepad.

mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config

Now run the following commands in ‘knode’

sudo -i
kubeadm join 172.31.3.14:6443 --token nl74qm.2nl29cyfq3molb2m \
    --discovery-token-ca-cert-hash sha256:f94d41afd66b0df9b5d5198911bcc43b809f22478751246d94a85e73a5c41890
Now run the following commands in ‘kmaster’
kubectl apply -f https://docs.projectcalico.org/v3.11/manifests/calico.yaml
kubectl get nodes -o wide
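Before moving on, it is worth confirming that the Calico and CoreDNS pods reach Running (a quick check, not in the original notes):

kubectl get pods -n kube-system -o wide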

12september2020

We create a Service so that whenever we interact with a container, whatever app is running inside it, the Service holds the corresponding details, e.g. the port the app listens on, the name of the container, etc.
If we want to interact with an application we have to go through its Service; every app has its own corresponding Service.
What's the difference between a Service and a Deployment in Kubernetes? A Deployment is responsible for keeping a set of pods running. A Service is responsible for enabling network access to a set of pods.
Inside the cluster, pods reach one another through a Service's cluster IP.
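As a minimal sketch of how a Service fronts pods: the Service below (the name ‘web-clusterip’ is made up for illustration) is a default ClusterIP Service selecting the pods labelled app: web that are created later in these notes:

apiVersion: v1
kind: Service
metadata:
  name: web-clusterip
spec:
  selector:
    app: web
  ports:
  - protocol: TCP
    port: 80
    targetPort: 80

Other pods in the cluster can then reach the app through this Service's cluster IP (or its DNS name) instead of addressing individual pod IPs.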

There can be a maximum of about 75 containers inside a pod (the figure quoted in class; in practice the limit comes from the node's resources rather than a fixed number).


But in a production environment, there is usually only one container inside one pod.

In Kubernetes, nobody can access your container directly. Clients interact with the pod (through its Service), which in turn interacts with the container & responds. In this way Kubernetes helps secure your cluster.

In Kubernetes, kmaster does not create containers; containers are created only on knode (unlike standalone Docker, where one machine does everything).

In ‘kmaster’:

kubectl cluster-info
kubectl get node
kubectl get nodes -o wide
kubectl get pods -o wide --all-namespaces

https://kubernetes.io/docs/concepts/workloads/pods
In ‘kmaster’:

sudo -i
cd /
mkdir kube
cd kube
nano pod1.yml
apiVersion: v1
kind: Pod
metadata:
  name: pod2
  labels:
    app: web
    type: prod
spec:
  containers:
  - name: nginx1
    image: nginx
    ports:
    - containerPort: 80

kubectl create -f pod1.yml


kubectl describe pods pod2
kubectl get pods
nano servicepods1.yml
apiVersion: v1
kind: Service
metadata:
  name: my-service
spec:
  type: NodePort
  ports:
  - protocol: TCP
    port: 80
    nodePort: 30001
  selector:
    app: web

(To expose the container we created previously to normal users, we have to reach the container running nginx with the help of kube-proxy. That is why we create a Service for interacting with the container.)

kubectl create -f servicepods1.yml


kubectl get svc
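To check which pod IPs the Service has actually picked up through its selector (standard kubectl, added here as a quick sanity check):

kubectl get endpoints my-service
kubectl describe svc my-service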

Now go to browser & enter


<public-ip-of-knode>:30001

Voilà! Nginx is running

We can create multiple containers inside a pod.

https://kubernetes.io/docs/tutorials/stateful-application/mysql-wordpress-persistent-volume/

https://www.tutorialspoint.com/kubernetes/kubernetes_service

14september2020

In ‘kmaster’
sudo -i
cd /
mkdir kube
cd kube/
nano pod1.yml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deploy1
  labels:
    app: nginx
spec:
  replicas: 3
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:1.14.2
        ports:
        - containerPort: 80

kubectl create -f pod1.yml


kubectl get pods -o wide
nano servicepods1.yml
apiVersion: v1
kind: Service
metadata:
  name: nginx1
spec:
  type: NodePort
  ports:
  - protocol: TCP
    port: 80
    nodePort: 30001
  selector:
    app: nginx

kubectl create -f servicepods1.yml (if the my-service Service from 12 September still holds NodePort 30001, delete it first with kubectl delete svc my-service)


kubectl get svc
kubectl delete pods nginx-deploy1-66b6c48dd5-2sh4j (delete one pod; use a pod name from your own kubectl get pods output)
kubectl get pods -o wide (notice that the Deployment's ReplicaSet automatically creates a replacement pod)
kubectl get rs
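The same ReplicaSet machinery also handles scaling and rolling updates. A quick sketch using standard kubectl commands (nginx:1.16.1 is just an example newer tag, matching the updating-a-deployment page linked below):

kubectl scale deployment nginx-deploy1 --replicas=5
kubectl set image deployment/nginx-deploy1 nginx=nginx:1.16.1
kubectl rollout status deployment/nginx-deploy1
kubectl rollout undo deployment/nginx-deploy1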

https://kubernetes.io/docs/concepts/workloads/controllers/deployment/

https://kubernetes.io/docs/concepts/workloads/controllers/deployment/updating-a-deployment

16september2020

https://kubernetes.io/docs/tasks/access-application-cluster/communicate-containers-same-pod-shared-volume/

NodePort range: 30000 – 32767 (the Kubernetes default)
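This range is the kube-apiserver default and can be changed via its --service-node-port-range flag (on a kubeadm cluster the flag would go in /etc/kubernetes/manifests/kube-apiserver.yaml), for example:

--service-node-port-range=30000-32767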

Topic: MULTICONTAINER

In ‘kmaster’
sudo -i
cd /
mkdir deployment
cd deployment/
nano multicontainer.yml
apiVersion: v1
kind: Pod
metadata:
  name: multicont
  labels:
    app: multiapp
spec:
  containers:
  - name: tomcat1
    image: tomcat:8.0
    ports:
    - containerPort: 8080
  - name: nginx1
    image: nginx
    ports:
    - containerPort: 80

kubectl create -f multicontainer.yml


kubectl get pods -o wide

Now we have to create two services, one for tomcat & another for nginx.

nano service_multi_nginx.yml
apiVersion: v1
kind: Service
metadata:
  name: nginx2
spec:
  type: NodePort
  ports:
  - protocol: TCP
    port: 80
    nodePort: 30022
  selector:
    app: multiapp

nano service_multi_tomcat.yml
apiVersion: v1
kind: Service
metadata:
  name: tomcat2
spec:
  type: NodePort
  ports:
  - protocol: TCP
    port: 8080
    nodePort: 30023
  selector:
    app: multiapp

kubectl create -f service_multi_nginx.yml


kubectl create -f service_multi_tomcat.yml
kubectl get pods -o wide
kubectl get svc

Now go to browser & enter


<public-ip-of-knode>:30022 --> nginx is running
<public-ip-of-knode>:30023 --> tomcat is running

Topic: HOW TO CREATE VOLUME


There are two ways:
1. Create one volume & share it with multiple containers (see the emptyDir sketch right after this list)
2. Create an individual volume for each container
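A minimal sketch of option 1, using an emptyDir volume shared by two containers. The pod and container names here are made up for illustration, following the pattern of the shared-volume tutorial linked under 16 September:

apiVersion: v1
kind: Pod
metadata:
  name: shared-vol-demo
spec:
  volumes:
  - name: shared-data
    emptyDir: {}
  containers:
  - name: writer
    image: busybox
    command: ["/bin/sh", "-c", "echo hello from writer > /data/index.html; sleep 3600"]
    volumeMounts:
    - name: shared-data
      mountPath: /data
  - name: nginx
    image: nginx
    volumeMounts:
    - name: shared-data
      mountPath: /usr/share/nginx/html

Both containers see the same files: whatever the writer puts in /data appears in nginx's web root. The hostPath example that follows is the pattern these notes actually use.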

nano volume1.yml

apiVersion: v1
kind: Pod
metadata:
  name: volume12
  labels:
    app: volumeapp1
spec:
  containers:
  - name: nginx11
    image: nginx
    volumeMounts:
    - mountPath: /usr/share/nginx/html
      name: nginxmount
  volumes:
  - name: nginxmount
    hostPath:
      # Ensure the file directory is created.
      path: /nginx
      type: DirectoryOrCreate

The volumeMounts entry lives inside the container spec; the volumes entry (the hostPath) lives at pod level and points at a directory on the host.

kubectl create -f volume1.yml


DirectoryOrCreate means that if the directory already exists, the volume mounts it as-is; if it does not exist, a new directory is created first.

Now connect to knode with PuTTY (the hostPath directory /nginx is created by the kubelet and owned by root, so switch to root first):

sudo -i
cd /nginx
touch file1.txt

In kmaster run:
kubectl get pods
kubectl describe pods volume12
kubectl exec -it volume12 -c nginx11 -- /bin/bash (to attach to the container)
cd /usr/share/nginx/html
ls (you can see file1.txt, which was created on the knode host and appears inside the container through the hostPath volume)

The container's volumeMount and the pod's volume find each other through the ‘name’ field.
A container lives inside a pod and each container has its own image. Here we share a directory on the host machine with the container. Since the pod is scheduled on knode, the hostPath directory (/nginx) and the mountPath both live on knode, so anything placed in /nginx on knode shows up inside the container.

https://kubernetes.io/docs/concepts/storage/volumes/#hostpath

20september2020
Topic: InitContainer
InitContainer does its assigned job & then terminates by itself. If the initContainer fails, the actual container
won’t run & the pod gets restarted again & again.

sudo -i
nano initcon1.yml
apiVersion: v1
kind: Pod
metadata:
  name: initcon1
  labels:
    app: myapp
spec:
  initContainers:
  - name: init1
    image: busybox
    command: ["/bin/sh","-c"]
    # The init container mounts the same volume so that the file it writes is visible to nginx2.
    args: ["echo this is from initcontainer > /nginxdir1/index.html"]
    volumeMounts:
    - mountPath: /nginxdir1
      name: nginxdir1

  containers:
  - name: nginx2
    image: nginx
    volumeMounts:
    - mountPath: /usr/share/nginx/html
      name: nginxdir1

  volumes:
  - name: nginxdir1
    hostPath:
      # Ensure the file directory is created.
      path: /nginxknode
      type: DirectoryOrCreate

kubectl create -f initcon1.yml


kubectl get pods
kubectl describe pods
kubectl exec -it initcon1 -c nginx2 -- /bin/bash
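Once attached to the nginx2 container, a quick check that the init container's file landed in the shared volume (assuming the pod reached Running):

cat /usr/share/nginx/html/index.html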

https://kubernetes.io/docs/concepts/workloads/pods/init-containers/

https://github.com/moraes/config/issues/1

https://kubernetes.io/docs/concepts/workloads/controllers/replicationcontroller/
