Install Kubernetes CRI-O Container Runtime On CentOS 8 CentOS 7
Install Kubernetes
Prepare Kubernetes Servers
The minimal server requirements for the servers used in the cluster are:
2 GiB or more of RAM per machine – any less leaves little room for your apps.
Full network connectivity among all machines in the cluster – can be private or public.
Since this setup is meant for development purposes, I have servers with the details below.
Log in to all servers and update the OS.
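The update commands are not shown in this copy; on CentOS a full update is simply:

```shell
# Update all installed packages to their latest versions
sudo yum -y update

# Reboot afterwards if a new kernel was installed (optional but recommended)
# sudo reboot
```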
If you have SELinux in enforcing mode, turn it off or use Permissive mode.
sudo setenforce 0
sudo sed -i 's/^SELINUX=.*/SELINUX=permissive/g' /etc/selinux/config
Configure sysctl.
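The sysctl values themselves are missing from this copy; the settings below are the standard kubeadm prerequisites from the upstream Kubernetes documentation (bridged traffic visible to iptables, IP forwarding enabled):

```shell
# Load the kernel modules kubeadm expects
sudo modprobe overlay
sudo modprobe br_netfilter

# Persist the required sysctl settings
cat <<EOF | sudo tee /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables  = 1
net.ipv4.ip_forward                 = 1
EOF

# Apply the settings without a reboot
sudo sysctl --system
```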
You can use CRI-O, Docker, or Containerd as the container runtime – install only one of them.
Using CRI-O
Below are the installation steps for CRI-O.
export VERSION=1.25
sudo curl -L -o /etc/yum.repos.d/devel:kubic:libcontainers:stable.repo https://download.opensuse.org/repositories/devel:/kubic:/libcontainers:/stable/CentOS_8/devel:kubic:libcontainers:stable.repo
sudo curl -L -o /etc/yum.repos.d/devel:kubic:libcontainers:stable:cri-o:$VERSION.repo https://download.opensuse.org/repositories/devel:kubic:libcontainers:stable:cri-o:$VERSION/CentOS_8/devel:kubic:libcontainers:stable:cri-o:$VERSION.repo
sudo yum install cri-o -y
Using Docker
Below are the installation steps for Docker.
# Install packages
sudo yum install -y yum-utils device-mapper-persistent-data lvm2
sudo yum-config-manager --add-repo https://download.docker.com/linux/centos/docker-ce.repo
sudo yum install -y docker-ce docker-ce-cli containerd.io
Using Containerd
Below are the installation steps for Containerd.
# Install containerd
sudo yum update -y && sudo yum install -y containerd.io
# restart containerd
sudo systemctl restart containerd
sudo systemctl enable containerd
Configure Firewalld
I recommend you disable firewalld on your nodes. If you keep firewalld active instead, there are a number of ports that need to be opened.
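The actual commands are missing from this copy; a sketch of both options, with the port list taken from the standard kubeadm requirements in the upstream documentation:

```shell
# Option 1: disable firewalld entirely (simplest for a dev cluster)
sudo systemctl disable --now firewalld

# Option 2: keep firewalld and open the required ports.
# On control-plane nodes (API server, etcd, kubelet, scheduler, controller-manager):
sudo firewall-cmd --permanent --add-port={6443,2379-2380,10250,10251,10252}/tcp
# On worker nodes (kubelet and the NodePort service range):
sudo firewall-cmd --permanent --add-port={10250,30000-32767}/tcp
sudo firewall-cmd --reload
```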
$ kubeadm version
kubeadm version: &version.Info{Major:"1", Minor:"22", GitVersion:"v1.22.2", GitCommit:"8b5a19147530eaac9476b0ab82980b4088bbc1b2", GitTreeState:"clean", BuildDate:"2021-09-15T21:37:34Z", GoVersion:"go1.16.8", Compiler:"gc", Platform:"linux/amd64"}
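If kubeadm is not yet installed, the install step is missing from this copy; a sketch based on the upstream kubeadm install docs for the era of this article (the packages.cloud.google.com repo shown here has since been replaced by pkgs.k8s.io, so verify against current documentation):

```shell
# Add the Kubernetes yum repository (legacy upstream URLs; verify before use)
cat <<EOF | sudo tee /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://packages.cloud.google.com/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=1
gpgkey=https://packages.cloud.google.com/yum/doc/yum-key.gpg https://packages.cloud.google.com/yum/doc/rpm-package-key.gpg
EOF

# Install the tools and start kubelet
sudo yum install -y kubelet kubeadm kubectl --disableexcludes=kubernetes
sudo systemctl enable --now kubelet
```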
We now want to initialize the machine that will run the control-plane components, which include etcd (the cluster database) and the API Server.
--control-plane-endpoint : set the shared endpoint for all control-plane nodes. Can be DNS/IP
--pod-network-cidr : used to set the Pod network add-on CIDR
--cri-socket : use if you have more than one container runtime, to set the runtime socket path
--apiserver-advertise-address : set the advertise address for this particular control-plane node's API server
Create cluster:
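The original command is not shown in this copy; a representative kubeadm init invocation, assuming Calico's default 192.168.0.0/16 Pod CIDR and the DNS endpoint that appears in the cluster-info output later in this article:

```shell
# Initialize the control plane (the endpoint name is illustrative)
sudo kubeadm init \
  --pod-network-cidr=192.168.0.0/16 \
  --control-plane-endpoint=k8s-cluster.computingforgeeks.com
```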
Note: If 192.168.0.0/16 is already in use within your network, you must select a different pod network CIDR, replacing 192.168.0.0/16 in the above command.
Container runtime sockets:
You can optionally pass Socket file for runtime and advertise address depending on
your setup.
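For reference, the default Unix socket paths for the common runtimes (per the kubeadm documentation for this Kubernetes generation) are:
CRI-O: unix:///var/run/crio/crio.sock
containerd: unix:///run/containerd/containerd.sock
Docker (via the dockershim built into v1.22): unix:///var/run/dockershim.sock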
....
[init] Using Kubernetes version: v1.22.2
[preflight] Running pre-flight checks
To start using your cluster, you need to run the following as a regular user:
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
You can now join any number of control-plane nodes by copying certificate authorities
and service account keys on each node and then running the following as root:
Then you can join any number of worker nodes by running the following on each as root:
Alternatively, if you are the root user, you can run:
export KUBECONFIG=/etc/kubernetes/admin.conf
$ kubectl cluster-info
Kubernetes master is running at https://k8s-cluster.computingforgeeks.com:6443
KubeDNS is running at https://k8s-cluster.computingforgeeks.com:6443/api/v1/namespaces/kub
To further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.
Additional Master nodes can be added using the command in installation output:
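The CRD output below comes from installing the Calico pod network add-on; the apply command is missing from this copy, and the manifest URL below is the one Calico commonly published at the time (verify against current Calico docs):

```shell
# Install the Calico network plugin (manifest URL may have moved; verify)
kubectl apply -f https://docs.projectcalico.org/manifests/calico.yaml
```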
customresourcedefinition.apiextensions.k8s.io/bgpconfigurations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/bgppeers.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/blockaffinities.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/clusterinformations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/felixconfigurations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/globalnetworkpolicies.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/globalnetworksets.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/hostendpoints.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ipamblocks.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ipamconfigs.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ipamhandles.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ippools.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/kubecontrollersconfigurations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/networkpolicies.crd.projectcalico.org created
The join command that was given is used to add a worker node to the cluster.
If the join token is expired, refer to our guide on how to join worker nodes.
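A fresh join command can also be generated directly on the control-plane node with the standard kubeadm token subcommand:

```shell
# Create a new bootstrap token and print the complete worker join command
sudo kubeadm token create --print-join-command
```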