# Kube-Worker1
# Kube-Worker2
#https://kubernetes.io/docs/setup/
#https://kubernetes.io/docs/setup/production-environment/tools/kubeadm/install-kubeadm/
sudo -i
apt update -y
#Add the host entries below to /etc/hosts on every node (master and both
#workers). Remember to replace the IPs with those of your systems.
sudo vi /etc/hosts
172.31.32.57 kmaster-node
172.31.34.155 worker-node1
172.31.32.19 worker-node2
reboot
#Install the container runtime (containerd) ::
#Install curl.
#Get the apt key and then add the repository from which we will install containerd.
#Next up, we need to modify the containerd configuration file and ensure that the cgroupDriver is set to systemd. To do so, edit the following file:
sudo vi /etc/containerd/config.toml
#[plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
#And ensure that the value of SystemdCgroup is set to true. Make sure the contents of your section match the following:
#[plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
# BinaryName = ""
# CriuImagePath = ""
# CriuPath = ""
# CriuWorkPath = ""
# IoGid = 0
# IoUid = 0
# NoNewKeyring = false
# NoPivotRoot = false
# Root = ""
# ShimCgroup = ""
# SystemdCgroup = true
#Finally, to apply these changes, we need to restart containerd.
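The restart referred to here would be:

```shell
# Restart containerd so the systemd cgroup-driver change takes effect,
# and enable it so it starts on boot.
sudo systemctl restart containerd
sudo systemctl enable containerd
```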
#Set up the firewall by installing the following rules on the master and both worker nodes:
sudo ufw allow 6443/tcp
sudo ufw allow 2379/tcp
sudo ufw allow 2380/tcp
sudo ufw allow 10250/tcp
sudo ufw allow 10251/tcp
sudo ufw allow 10252/tcp
sudo ufw allow 10255/tcp
sudo ufw reload
#To allow kubelet to work properly, we need to disable swap on both machines.
sudo swapoff -a
sudo sed -i '/ swap / s/^\(.*\)$/#\1/g' /etc/fstab
#Finally, enable the kubelet service on both systems so we can start it.
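The enable step would be:

```shell
# Enable kubelet on both systems; it will start fully once
# kubeadm has written its configuration.
sudo systemctl enable kubelet
```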
#~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
###On Master Node:
#~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
#Step 3. Setting up the cluster
#With our container runtime and Kubernetes modules installed, we are ready to initialize our Kubernetes cluster.
#Run the following command on the master node to allow Kubernetes to fetch the required images before cluster initialization:
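The commands this step refers to are likely the following (the pod-network CIDR value is an assumption that matches flannel's default; adjust it if you use a different CNI plugin):

```shell
# Pre-pull the control-plane images, then initialize the cluster.
# Run on the master node only.
sudo kubeadm config images pull
sudo kubeadm init --pod-network-cidr=10.244.0.0/16
```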
#The initialization may take a few moments to finish. Expect an output similar to the following:
#As root, run visudo to grant your admin user sudo rights, then switch to that user: su - devopsadmin
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
#export KUBECONFIG=/etc/kubernetes/admin.conf
#You should now deploy a pod network to the cluster. Run kubectl apply -f [podnetwork].yaml with one of the options listed at Kubernetes.
#Then you can join any number of worker nodes by running the following on each as root:
#You will see a kubeadm join command at the end of the output. Copy and save it in some file. We will have to run this command on the worker node to allow it to join the cluster. If you forget to save it, or misplace it, you can also regenerate it using this command:
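The regeneration command referred to here is:

```shell
# Run on the master node; prints a fresh, complete kubeadm join command.
sudo kubeadm token create --print-join-command
```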
#Now create a folder to house the Kubernetes configuration in the home directory. We also need to set up the required permissions for the directory and export the KUBECONFIG variable.
#mkdir -p $HOME/.kube
#sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
#sudo chown $(id -u):$(id -g) $HOME/.kube/config
#export KUBECONFIG=/etc/kubernetes/admin.conf
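The pod-network deployment that produces the output below would be a flannel apply along these lines (the manifest URL is an assumption based on the commonly used upstream location; verify it against the flannel project documentation):

```shell
# Deploy the flannel pod network on the master node.
kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
```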
#podsecuritypolicy.policy/psp.flannel.unprivileged created
#clusterrole.rbac.authorization.k8s.io/flannel created
#clusterrolebinding.rbac.authorization.k8s.io/flannel created
#serviceaccount/flannel created
#configmap/kube-flannel-cfg created
#daemonset.apps/kube-flannel-ds created
#Use the get nodes command to verify that our master node is ready.
#We are now ready to move to the worker node. Execute the kubeadm join from step 2 on the worker node. You should see an output similar to the following:
#If you go back to the master node, you should now be able to see the worker node in the output of the get nodes command.
kubectl get nodes
#Finally, to set the proper role for the worker node, run this command on the master node:
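A common way to set the role label is shown below (the node name worker-node1 is taken from the /etc/hosts entries above; replace it with the name shown by kubectl get nodes):

```shell
# Label the worker so `kubectl get nodes` shows a role for it.
kubectl label node worker-node1 node-role.kubernetes.io/worker=worker
```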
#~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
###Only in Worker Nodes:
#~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
#Then you can join any number of worker nodes by running the following on each as root:
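The command takes this shape; the token and hash placeholders below are illustrative only and must be replaced with the exact values printed by kubeadm init (or regenerated with kubeadm token create --print-join-command):

```shell
# Run as root on each worker node. <token> and <hash> are placeholders.
sudo kubeadm join 172.31.32.57:6443 --token <token> \
    --discovery-token-ca-cert-hash sha256:<hash>
```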