sudo reboot
apt-get update
apt-get install -y kubelet kubeadm kubectl
nano /etc/systemd/system/kubelet.service.d/10-kubeadm.conf
reboot
Restart the PuTTY sessions & run the following commands in ‘kmaster’
sudo -i
kubeadm init --apiserver-advertise-address=<private-ip-address-of-kmaster-vm> --ignore-preflight-errors=NumCPU
Copy the output from ‘Your Kubernetes control-plane...’ to the end & paste it in Notepad (it contains the kubeadm join command needed on the worker nodes).
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
In ‘knode’:
sudo -i
kubeadm join 172.31.3.14:6443 --token nl74qm.2nl29cyfq3molb2m \
  --discovery-token-ca-cert-hash sha256:f94d41afd66b0df9b5d5198911bcc43b809f22478751246d94a85e73a5c41890
Now run the following commands in ‘kmaster’
kubectl apply -f https://docs.projectcalico.org/v3.11/manifests/calico.yaml
kubectl get nodes -o wide
12september2020
We create a Service so that whenever we interact with a container, we have the details of the app running
inside it recorded in the Service, e.g. the port on which the app is listening, the labels of the pods backing it, etc.
If we want to interact with any application, we have to go through its Service. Every app has its own
corresponding Service.
What's the difference between a Service and a Deployment in Kubernetes? A deployment is
responsible for keeping a set of pods running. A service is responsible for enabling network access
to a set of pods.
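To make the difference concrete, here is a hedged sketch (the names nginx-demo etc. are made up for illustration) showing a Deployment that keeps pods running, and a Service generated from it with kubectl expose to enable network access:

```shell
# Create a Deployment: keeps 3 nginx pods running, replaces them if they die
kubectl create deployment nginx-demo --image=nginx --replicas=3

# Expose it as a NodePort Service: gives the pods one stable network entry point
kubectl expose deployment nginx-demo --port=80 --type=NodePort

# Inspect both objects
kubectl get deployment nginx-demo
kubectl get service nginx-demo
```

Deleting a pod does not break the Service: the Deployment recreates the pod and the Service keeps routing to whatever pods currently match its selector.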
Kubernetes connects pods with one another with the help of a Service's cluster IP.
In Kubernetes, nobody can access your container directly. Clients interact with the pod (via a Service), which in
turn interacts with the container & returns the response. This indirection helps in securing your cluster.
In Kubernetes, kmaster does not create containers; containers are created only on knode (unlike standalone Docker).
In ‘kmaster’:
kubectl cluster-info
kubectl get node
kubectl get node -o wide
kubectl get pods -o wide --all-namespaces
https://kubernetes.io/docs/concepts/workloads/pods
In ‘Kmaster’:
sudo -i
cd /
mkdir kube
cd kube
nano pod1.yml
apiVersion: v1
kind: Pod
metadata:
  name: pod2
  labels:
    app: web
    type: prod
spec:
  containers:
  - name: nginx1
    image: nginx
    ports:
    - containerPort: 80
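The manifest above can be applied and checked as follows (a sketch; it assumes the file was saved as pod1.yml in the current directory):

```shell
kubectl apply -f pod1.yml     # create the pod described in the manifest
kubectl get pods -o wide      # confirm pod2 is Running; shows its node & IP
kubectl describe pod pod2     # detailed events, useful if the image pull fails
kubectl delete -f pod1.yml    # clean up when done
```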
(To expose the container we created previously to normal users, we have to reach the container in which
nginx is installed with the help of kube-proxy. That is why we create a Service for interacting with the
container.)
https://kubernetes.io/docs/tutorials/stateful-application/mysql-wordpress-persistent-volume/
https://www.tutorialspoint.com/kubernetes/kubernetes_service
14september2020
In ‘kmaster’
sudo -i
cd /
mkdir kube
cd kube/
nano pod1.yml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deploy1
  labels:
    app: nginx
spec:
  replicas: 3
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:1.14.2
        ports:
        - containerPort: 80
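A sketch of applying this Deployment and performing a rolling image update, matching the updating-a-deployment doc linked below (it assumes the manifest was saved as pod1.yml, as above):

```shell
kubectl apply -f pod1.yml                         # create the Deployment (3 replicas)
kubectl get deployments                           # wait for READY 3/3

# Rolling update: change the container image; pods are replaced one by one
kubectl set image deployment/nginx-deploy1 nginx=nginx:1.16.1
kubectl rollout status deployment/nginx-deploy1   # watch the rollout complete
kubectl rollout undo deployment/nginx-deploy1     # roll back if something breaks
```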
https://kubernetes.io/docs/concepts/workloads/controllers/deployment/
https://kubernetes.io/docs/concepts/workloads/controllers/deployment/updating-a-deployment
16september2020
https://kubernetes.io/docs/tasks/access-application-cluster/communicate-containers-same-pod-shared-volume/
Topic: MULTICONTAINER
In ‘kmaster’
sudo -i
cd /
mkdir deployment
cd deployment/
nano multicontainer.yml
apiVersion: v1
kind: Pod
metadata:
  name: multicont
  labels:
    app: multiapp
spec:
  containers:
  - name: tomcat1
    image: tomcat:8.0
    ports:
    - containerPort: 8080
  - name: nginx1
    image: nginx
    ports:
    - containerPort: 80
Now we have to create two services, one for tomcat & another for nginx.
nano service_multi_nginx.yml
apiVersion: v1
kind: Service
metadata:
  name: nginx2
spec:
  type: NodePort
  ports:
  - protocol: TCP
    port: 80
    nodePort: 30022
  selector:
    app: multiapp
nano service_multi_tomcat.yml
apiVersion: v1
kind: Service
metadata:
  name: tomcat2
spec:
  type: NodePort
  ports:
  - protocol: TCP
    port: 8080
    nodePort: 30023
  selector:
    app: multiapp
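A sketch for applying the pod and both Services and testing them from outside the cluster (replace <node-public-ip> with a worker node's address; the NodePorts 30022/30023 come from the manifests above):

```shell
kubectl apply -f multicontainer.yml
kubectl apply -f service_multi_nginx.yml
kubectl apply -f service_multi_tomcat.yml
kubectl get svc                        # confirm the 30022/30023 NodePorts

curl http://<node-public-ip>:30022     # reaches the nginx container
curl http://<node-public-ip>:30023     # reaches the tomcat container
```

Note that both Services select app: multiapp, i.e. the same pod; they differ only in which target port they forward to.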
nano volume1.yml
apiVersion: v1
kind: Pod
metadata:
  name: volume12
  labels:
    app: volumeapp1
spec:
  containers:
  - name: nginx11
    image: nginx
    volumeMounts:                # volumeMounts: where the volume appears inside the container
    - mountPath: /usr/share/nginx/html
      name: nginxmount
  volumes:                       # volumes: the backing storage on the host (node)
  - name: nginxmount
    hostPath:
      # Ensure the file directory is created.
      path: /nginx
      type: DirectoryOrCreate
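A sketch for verifying the hostPath share (file1.txt is the example file discussed below; the touch step runs on the worker node where the pod was scheduled):

```shell
kubectl apply -f volume1.yml
kubectl get pods -o wide          # note which worker node volume12 landed on

# On that worker node, create a file in the hostPath directory:
#   touch /nginx/file1.txt

# The file is immediately visible inside the container:
kubectl exec -it volume12 -c nginx11 -- ls /usr/share/nginx/html
```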
In kmaster run:
kubectl get pods
kubectl describe pod volume12
kubectl exec -it volume12 -c nginx11 -- /bin/bash (to attach to the container)
cd /usr/share/nginx/html
ls (you can see file1.txt, which appears automatically from the hostPath directory on the worker node)
The container's volumeMount and the pod's volume identify each other with the help of the shared ‘name’.
The container is present inside the pod and each container has its own image. Here we share a directory of the
host machine with the container. Note that a hostPath volume lives on the node where the pod is scheduled
(knode), not on kmaster: the /nginx directory and its files exist on that worker node, and the ‘mountPath’ is the
path at which the container sees them.
https://kubernetes.io/docs/concepts/storage/volumes/#hostpath
20september2020
Topic: InitContainer
An initContainer does its assigned job & then terminates by itself. If the initContainer fails, the actual container
won’t start & the pod gets restarted again & again until the initContainer succeeds.
sudo -i
nano initcon1.yml
apiVersion: v1
kind: Pod
metadata:
  name: initcon1
  labels:
    app: myapp
spec:
  initContainers:
  - name: init1
    image: busybox
    command: ["/bin/sh","-c"]
    # Write the page into the shared volume so nginx can serve it
    args: ["echo this is from initcontainer > /nginxdir1/index.html"]
    volumeMounts:              # the init container must mount the volume it writes to
    - mountPath: /nginxdir1
      name: nginxdir1
  containers:
  - name: nginx2
    image: nginx
    volumeMounts:
    - mountPath: /usr/share/nginx/html
      name: nginxdir1
  volumes:
  - name: nginxdir1
    hostPath:
      # Ensure the file directory is created.
      path: /nginxknode
      type: DirectoryOrCreate
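A sketch for applying the pod and confirming that nginx serves the page written by the initContainer (<pod-ip> is taken from the get pods output; curl runs from a cluster node):

```shell
kubectl apply -f initcon1.yml
kubectl get pods -o wide     # status shows Init:0/1 while init1 runs, then Running
curl http://<pod-ip>/        # should return the line written by the initContainer
```

Because both containers mount the same nginxdir1 volume, the index.html written by init1 lands in nginx2's web root before nginx ever starts.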
https://kubernetes.io/docs/concepts/workloads/pods/init-containers/
https://github.com/moraes/config/issues/1
https://kubernetes.io/docs/concepts/workloads/controllers/replicationcontroller/