Control Node Components -

1. kube-apiserver: Imagine this as the central hub of communication for your Kubernetes cluster. It's the entry point for all interactions with the cluster, acting as an API server. Applications and users send requests to the kube-apiserver, which then performs actions based on them. Think of it as the gatekeeper and interpreter for cluster operations.

2. etcd cluster: This plays a crucial role in data storage. It's a highly
available, distributed key-value store that holds all the cluster's configuration
and state information. Any changes made through the kube-apiserver are reflected in
the etcd cluster, ensuring that all components have the latest information. It's
essentially the persistent memory of the cluster.

3. kube-controller-manager: This acts as a manager for various background processes called controllers. Each controller focuses on a specific task, like maintaining Pod replicas, managing service endpoints, and ensuring namespaces function properly. The kube-controller-manager supervises these controllers, making sure they're running and handling their tasks effectively.

4. kube-scheduler: This component takes care of Pod placement. When a new Pod is
created without a specified node, the kube-scheduler kicks in. It analyzes
available nodes, their resources, and Pod requirements to find the most suitable
node for the Pod to run on. It considers various factors like resource
availability, anti-affinity rules, and node labels to make optimal placement
decisions. Think of it as the matchmaker, pairing Pods with the most compatible
nodes.
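
On a kubeadm-provisioned cluster (an assumption; managed offerings differ), these control plane components typically run as static pods in the kube-system namespace, so they can be inspected with:

kubectl get pods -n kube-system -o wide
ls /etc/kubernetes/manifests # static pod manifests for the apiserver, etcd, controller-manager and scheduler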

+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++

Worker Node Components -

1. kubelet: Imagine this as the agent on each worker node in the cluster. It acts as a bridge between the control plane (the components described above) and the actual container runtime environment on the node. Here's what it does:

Receives instructions from the kube-apiserver regarding Pod creation, deletion, or updates.
Pulls container images from registries as needed.
Uses the container runtime engine to create and manage container lifecycles (start, stop, restart, etc.).
Monitors the health of running containers and reports back to the kube-apiserver.
Collects resource usage data from containers and sends it to the control plane.

Think of kubelet as the remote control for each node, translating commands from the central hub into actions on the individual machines.
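
Unlike the other components, kubelet normally runs as a host-level service rather than a pod; assuming a systemd-based installation, it can be inspected with:

systemctl status kubelet # check the kubelet service
journalctl -u kubelet -f # follow kubelet logs
ps aux | grep kubelet # see the flags and config file it was started with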

2. kube-proxy: This component handles network traffic routing within the cluster. It ensures Pods from different services can communicate seamlessly. Here's its role:

Watches for changes in Services and Endpoints resources in the API server.
Maintains network rules (like iptables) on each node based on these resources.
Routes traffic to the appropriate Pods based on the service definition and Pod IPs.
Supports different proxy modes, such as iptables and IPVS, for more flexibility.

Think of kube-proxy as the traffic cop, directing network flow based on service definitions and ensuring smooth communication between Pods.
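
A rough way to see kube-proxy at work (assuming a kubeadm cluster, where it runs as a DaemonSet and defaults to iptables mode):

kubectl get daemonset kube-proxy -n kube-system
kubectl logs -n kube-system -l k8s-app=kube-proxy # k8s-app=kube-proxy is the kubeadm default label
iptables -t nat -L | grep KUBE # service NAT rules programmed by kube-proxy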

3. Container Runtime Engine: This is the software responsible for the actual creation and execution of containers on the node. Kubernetes itself doesn't manage containers directly; it relies on a separate container runtime engine. Some popular options include:

Docker Engine: The original and still widely used container runtime (since Kubernetes 1.24 it needs the cri-dockerd adapter, because the in-tree dockershim was removed).
containerd: A lightweight container runtime used by many Kubernetes distributions.
CRI-O: A Kubernetes-specific container runtime focused on security and efficiency.

+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++

1. CRI (Container Runtime Interface):

Think of CRI as a standardized communication protocol between Kubernetes (specifically the kubelet) and various container runtime engines.
This standardization allows Kubernetes to use different container runtime engines without needing to be tied to any specific implementation.
Popular CRI implementations include:
cri-o: A lightweight, Kubernetes-specific runtime focused on security and efficiency.
containerd: A general-purpose runtime used by many Kubernetes distributions.
Docker Engine: The original Docker runtime, supported through the cri-dockerd CRI adapter.

2. OCI (Open Container Initiative):

The OCI creates open industry standards for container formats and runtimes. Its main specifications are:

imagespec: This defines the structure and layout of container images, ensuring portability and consistency across different runtime engines.
runtimespec: This defines how container runtimes should operate, including process management, resource allocation, and isolation.
distribution-spec: This defines how container images are distributed and pulled from registries.

3. imagespec:
This OCI specification outlines the structure of a container image.
It consists of layers, where each layer represents a filesystem snapshot with
changes on top of the previous layer.
This layering approach allows efficient image creation and distribution by only
sending the changed layers instead of the entire image each time.
The imagespec also defines metadata associated with the image, such as entrypoint,
environment variables, and user information.

4. runtimespec:
This OCI specification defines the behavior and interface of container runtimes.
It specifies how a runtime should start, stop, pause, resume, and manage the
lifecycle of a container.
It also defines resource allocation, isolation, and security aspects of container
execution.
By standardizing the runtime behavior, OCI enables interoperability between
different runtime engines and tools.

Summary -
CRI provides a communication layer between Kubernetes and container runtimes.
OCI defines standards for container image formats (imagespec) and runtime behavior
(runtimespec).

+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++

Three different CLI tools for working with containerd are ctr, nerdctl, and crictl.
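
A rough feel for how each tool is used (illustrative commands):

ctr images pull docker.io/library/nginx:latest # ctr: low-level debug CLI shipped with containerd
nerdctl run -d --name webserver -p 8080:80 nginx # nerdctl: Docker-like CLI for containerd
crictl ps # crictl: CRI-compatible CLI, works with any CRI runtime
crictl pods # lists pod sandboxes, which Docker-style CLIs cannot do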

+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++

ETCD Commands -
For example, ETCDCTL version 2 supports the following commands:

etcdctl backup
etcdctl cluster-health
etcdctl mk
etcdctl mkdir
etcdctl set

Whereas the commands are different in version 3

etcdctl snapshot save
etcdctl endpoint health
etcdctl get
etcdctl put

To select the right API version, set the ETCDCTL_API environment variable:
export ETCDCTL_API=3
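
A minimal v3 backup example, assuming a kubeadm cluster where the etcd certificates live under /etc/kubernetes/pki/etcd (paths and endpoints may differ in other setups):

export ETCDCTL_API=3
etcdctl snapshot save /opt/etcd-backup.db \
  --endpoints=https://127.0.0.1:2379 \
  --cacert=/etc/kubernetes/pki/etcd/ca.crt \
  --cert=/etc/kubernetes/pki/etcd/server.crt \
  --key=/etc/kubernetes/pki/etcd/server.key
etcdctl snapshot status /opt/etcd-backup.db # verify the snapshot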

+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++

Pods with YAML -

apiVersion: v1
kind: Pod
metadata:
  name: myapp-pod
  labels:
    app: myapp
spec:
  containers:
  - name: nginx-container
    image: nginx
    ports:
    - containerPort: 80
kubectl run nginx --image=nginx # run a pod with the nginx image
kubectl get pods
kubectl describe pods
kubectl apply -f pod.yml
kubectl edit pod <pod-name>
# Get commands with basic output
kubectl get services                  # List all services in the namespace
kubectl get pods --all-namespaces     # List all pods in all namespaces
kubectl get pods -o wide              # List all pods in the current namespace, with more details
kubectl get deployment my-dep         # List a particular deployment
kubectl get pods                      # List all pods in the namespace
kubectl get pod my-pod -o yaml        # Get a pod's YAML
kubectl delete pods --all
kubectl delete pods --all -n <namespace>
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++

Replicaset -
kubectl create -f replicaset-definition.yaml
kubectl get replicaset
kubectl delete replicaset myapp-replicaset
To update a replicaset: kubectl edit replicaset <replicaset-name>, or kubectl replace -f replicaset-definition.yaml
kubectl scale --replicas=6 -f replicaset.yaml
kubectl scale rs new-replica-set --replicas=5
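
For reference, a minimal replicaset-definition.yaml that the commands above could act on (a sketch; names, labels and replica count are illustrative):

apiVersion: apps/v1
kind: ReplicaSet
metadata:
  name: myapp-replicaset
  labels:
    app: myapp
spec:
  replicas: 3
  selector:
    matchLabels:
      app: myapp
  template:
    metadata:
      labels:
        app: myapp
    spec:
      containers:
      - name: nginx-container
        image: nginx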

+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++

Deployment -
kubectl create -f deployment.yml
Create a deployment - kubectl create deployment --image=nginx nginx
Generate Deployment YAML file (-o yaml) without creating it (--dry-run=client) - kubectl create deployment --image=nginx nginx --dry-run=client -o yaml
Generate Deployment YAML file and save it to a file - kubectl create deployment --image=nginx nginx --dry-run=client -o yaml > nginx-deployment.yaml
In k8s version 1.19+, we can specify the --replicas option to create a deployment with 4 replicas - kubectl create deployment --image=nginx nginx --replicas=4 --dry-run=client -o yaml > nginx-deployment.yaml

+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++

Services -

NodePort -
kubectl create service nodeport my-service --tcp=80:80 --dry-run=client -o yaml

A NodePort service exposes a set of pods to the outside world (external network)
via a static port on each node in the cluster. When an external client connects to
this port, traffic is forwarded to the service and then to one of the pods.
NodePort services are typically used when you need to expose your application
externally or to a specific group of users or systems outside of the Kubernetes
cluster. They are often used for testing or development purposes, and not
recommended for production use.

apiVersion: v1
kind: Service
metadata:
  creationTimestamp: null
  labels:
    app: my-service
  name: my-service
spec:
  ports:
  - name: 80-80
    port: 80
    protocol: TCP
    targetPort: 80
  selector:
    app: my-service
  type: NodePort
status:
  loadBalancer: {}

ClusterIP -
kubectl create service clusterip my-service --tcp=80:80 --dry-run=client -o yaml

A ClusterIP service exposes a set of pods to other pods and services within the
Kubernetes cluster via a virtual IP address. This virtual IP address is only
reachable from within the cluster, and traffic is load-balanced between the pods
associated with the service.
ClusterIP services are commonly used when you need to expose your application
internally or to other services within the cluster. They are often used for web
services or APIs that need to communicate with other services in the same
Kubernetes cluster.

LoadBalancer -
kubectl create service loadbalancer my-service --tcp=80:80 --dry-run=client -o yaml

A LoadBalancer service exposes a set of pods to the outside world (external network) via a load balancer that is provisioned by a cloud provider or a hardware device. When an external client connects to the load balancer, traffic is forwarded to the service and then to one of the pods.
LoadBalancer services are typically used when you need to expose your application externally and want to distribute the traffic across multiple nodes in the Kubernetes cluster. They are often used for web applications or APIs that require high availability and scalability.
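
Instead of writing YAML by hand, an existing deployment can also be exposed imperatively (assuming a deployment named nginx already exists):

kubectl expose deployment nginx --port=80 --type=NodePort
kubectl expose deployment nginx --port=80 --type=LoadBalancer
kubectl get svc # shows the assigned node port / external IP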

+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++

Namespaces -

In Kubernetes, a namespace is a way to create virtual clusters within a physical cluster. Namespaces are used to organize and isolate resources such as pods, services, and replication controllers in a Kubernetes cluster.
Namespaces are particularly useful in large organizations where multiple teams may be working on different projects, each with its own set of resources. With namespaces, each team can have its own isolated virtual cluster, with its own set of resources and permissions.

kubectl get pods --namespace=kube-system = to get information about pods from a different namespace
kubectl create -f pod-definition.yml --namespace=dev = to create a pod in a specific namespace
kubectl create namespace brahma = to create a new namespace
kubectl config set-context $(kubectl config current-context) --namespace=dev = to switch to the namespace permanently
kubectl get pods --all-namespaces = to get pods information from all namespaces
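
The same can be done declaratively with a namespace definition file (the name dev is illustrative):

apiVersion: v1
kind: Namespace
metadata:
  name: dev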

+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++

Imperative and Declarative -

Imperative commands directly manipulate the state of a cluster. The user specifies the exact steps needed to create, modify, or delete a resource. These commands are executed one at a time, in the order specified by the user.

kubectl run nginx-pod --image=nginx:alpine
kubectl run redis --image=redis:alpine --dry-run=client -o yaml > redis_pod.yaml
kubectl expose pod redis --port=6379 --name redis-service
kubectl create deployment webapp --image=<image> --dry-run=client -o yaml > webapp.yml
kubectl run httpd --image=httpd:alpine --port=80 --expose

Declarative configuration files describe the desired state of a resource. The user specifies the desired state, and Kubernetes works to make that state a reality. The configuration files are typically written in YAML format and include all the necessary information for creating, modifying, or deleting a resource.

apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-deployment
spec:
  replicas: 3
  selector:
    matchLabels:
      app: my-app
  template:
    metadata:
      labels:
        app: my-app
    spec:
      containers:
      - name: my-container
        image: my-image

+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++

Scheduling -

Manual Scheduling -
Add a pod to a node manually -
---
apiVersion: v1
kind: Pod
metadata:
  name: nginx
spec:
  nodeName: node01
  containers:
  - image: nginx
    name: nginx
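
Note that nodeName can only be set when the pod is created; to move a pod that is already running, one approach is to export its definition, set nodeName, and force-recreate it (a sketch, assuming the file is saved as nginx.yaml):

kubectl get pod nginx -o yaml > nginx.yaml # export, then add/edit spec.nodeName
kubectl replace --force -f nginx.yaml # deletes and recreates the pod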

+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++

Labels and Selectors -


kubectl get pods --selector env=dev
kubectl get pods --selector env=dev --no-headers | wc -l
kubectl get pods --selector env=dev
kubectl get pods --selector bu=finance
kubectl get pods --selector bu=finance --no-headers | wc -l
kubectl get all --selector env=prod --no-headers | wc -l
kubectl get all --selector env=prod,bu=finance,tier=frontend

+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++

Taints and Tolerations -


Q. Do any taints exist on node01 node?
A. kubectl describe node node01 | grep -i Taint
Q. Create a taint on node01 with key of spray, value of mortein and effect of
NoSchedule
A. kubectl taint node node01 spray=mortein:NoSchedule
Q. Create another pod named bee with the nginx image, which has a toleration set to
the taint mortein
A. apiVersion: v1
kind: Pod
metadata:
  creationTimestamp: null
  labels:
    run: bee
  name: bee
spec:
  containers:
  - image: nginx
    name: bee
    resources: {}
  tolerations:
  - key: "spray"
    operator: "Equal"
    value: "mortein"
    effect: "NoSchedule"
  dnsPolicy: ClusterFirst
  restartPolicy: Always
status: {}
Q. Remove the taint on controlplane, which currently has the taint effect of NoSchedule
A. kubectl taint nodes controlplane node-role.kubernetes.io/control-plane:NoSchedule-

+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++

Node Selectors -
Labelling Node - kubectl label nodes node01 size=large
spec:
  containers:
  - image: nginx
    name: bee
    resources: {}
  nodeSelector:
    size: large

+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
Node Affinity -
Q. Node affinity label examples -
A. Labels: beta.kubernetes.io/arch=amd64
beta.kubernetes.io/os=linux
kubernetes.io/arch=amd64
kubernetes.io/hostname=node01
kubernetes.io/os=linux
Q. Apply a label color=blue to node node01
A. kubectl label node node01 color=blue
Q. Set Node Affinity to the deployment to place the pods on node01 only
A. affinity:
     nodeAffinity:
       requiredDuringSchedulingIgnoredDuringExecution:
         nodeSelectorTerms:
         - matchExpressions:
           - key: color
             operator: In # the operator can also be NotIn, Exists, etc.
             values:
             - blue

Node Affinity Types -


Available -
requiredDuringSchedulingIgnoredDuringExecution
preferredDuringSchedulingIgnoredDuringExecution

Planned -
requiredDuringSchedulingRequiredDuringExecution
preferredDuringSchedulingRequiredDuringExecution

Q. Create a new deployment named red with the nginx image and 2 replicas, and
ensure it gets placed on the controlplane node only.
Use the label key - node-role.kubernetes.io/control-plane - which is already set on
the controlplane node.
A. spec:
     containers:
     - image: nginx
       name: nginx
       resources: {}
     affinity:
       nodeAffinity:
         requiredDuringSchedulingIgnoredDuringExecution:
           nodeSelectorTerms:
           - matchExpressions:
             - key: node-role.kubernetes.io/control-plane
               operator: Exists

+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++

Resource Limits -
Requests specify the minimum amount of resources that a container needs to run,
while limits specify the maximum amount of resources that a container can use.

For example, let's say you have a container that requires at least 1 CPU and 512MB
of memory to run properly, but it may need more resources depending on its
workload. In this case, you would set the request for CPU and memory to 1 and
512MB, respectively, and the limit for CPU and memory to a higher value, such as 2
and 1GB, respectively

apiVersion: v1
kind: Pod
metadata:
  name: my-pod
spec:
  containers:
  - name: my-container
    image: my-image
    resources:
      requests:
        cpu: "1"
        memory: "512Mi"
      limits:
        cpu: "2"
        memory: "1Gi"

1 CPU is equivalent to:
1 AWS vCPU
1 GCP Core
1 Azure Core
1 Hyperthread

CPU can also be expressed in millicores, e.g. 100m = 0.1 CPU. Memory uses binary suffixes: Mi = Mebibyte (1024 Ki) and Gi = Gibibyte (1024 Mi).
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++

Daemon Sets -
In Kubernetes, DaemonSets are a type of controller that ensures that a particular
pod runs on all or some of the nodes in a cluster. They are used for deploying
system daemons or other system-level agents that should run on all nodes, such as
log collectors, monitoring agents, or storage agents.
A DaemonSet creates and maintains a copy of a pod on each node in the cluster,
which allows the system-level agents to operate on each node in the cluster. If new
nodes are added to the cluster, the DaemonSet automatically creates new pods on
those nodes. If nodes are removed, the DaemonSet automatically terminates the pods
running on those nodes.

kubectl get daemonsets
kubectl get daemonsets --all-namespaces
kubectl get daemonsets --all-namespaces --no-headers | wc -l
kubectl describe daemonset kube-proxy -n kube-system
kubectl create deployment elasticsearch --image=registry.k8s.io/fluentd-elasticsearch:1.20 -n kube-system --dry-run=client -o yaml > Fluentd.yaml
(there is no kubectl create daemonset command, so edit Fluentd.yaml to change kind: Deployment to kind: DaemonSet and remove the replicas and strategy fields)
kubectl apply -f Fluentd.yaml
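
After the edit, the manifest would look roughly like this (a sketch; the labels are whatever the generator produced):

apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: elasticsearch
  namespace: kube-system
spec:
  selector:
    matchLabels:
      app: elasticsearch
  template:
    metadata:
      labels:
        app: elasticsearch
    spec:
      containers:
      - name: fluentd-elasticsearch
        image: registry.k8s.io/fluentd-elasticsearch:1.20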

+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++

Static Pods -
A static pod is a pod managed directly by the kubelet daemon on a node, rather than
being managed by the Kubernetes API server and controller manager. Static pods are
defined by YAML or JSON files placed in a directory watched by the kubelet on each
node. When the kubelet detects a change to a static pod file, it creates or updates
the corresponding pod on the node.

Static pods are typically used for system-level daemons that should run on every
node in a cluster, such as a network or storage agent. Static pods are useful in
situations where running a daemon as a regular Kubernetes deployment or daemonset
is not desirable or practical.
To use this definition file as a static pod, it should be saved to a directory
watched by the kubelet on a node, such as /etc/kubernetes/manifests.
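
To find the watched directory on a node (assuming a kubeadm-style setup where the kubelet config is at /var/lib/kubelet/config.yaml) and to drop a static pod manifest into it:

grep staticPodPath /var/lib/kubelet/config.yaml # usually /etc/kubernetes/manifests
cat <<EOF > /etc/kubernetes/manifests/static-busybox.yaml
apiVersion: v1
kind: Pod
metadata:
  name: static-busybox
spec:
  containers:
  - name: busybox
    image: busybox
    command: ["sleep", "1000"]
EOF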

+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++

Multiple Schedulers -
In Kubernetes, multiple schedulers allow you to define and use alternative
scheduling algorithms besides the default scheduler. This can be useful in
scenarios where you want to prioritize certain types of workloads or use a custom
scheduler that meets your specific needs.
apiVersion: v1
kind: ConfigMap
metadata:
  name: my-scheduler
  namespace: kube-system
data:
  scheduler.yml: |-
    apiVersion: kubescheduler.config.k8s.io/v1alpha1
    kind: KubeSchedulerConfiguration
    leaderElection:
      leaderElect: true
    profiles:
    - schedulerName: my-scheduler
      plugins:
        preFilter:
          enabled:
          - name: NodeResourcesFit
          - name: NodeName
        filter:
          enabled:
          - name: NodeSelector
        postFilter:
          enabled:
          - name: DefaultPreemption
        score:
          enabled:
          - name: NodeResourcesLeastAllocated
            weight: 1
kubectl apply -f my-scheduler-config.yaml
kubectl get pods -n kube-system | grep my-scheduler = Verify that the new scheduler is running

Once you have created and deployed a new scheduler, you can use it to schedule your
workloads by adding a scheduler name to the spec section of your deployment or pod
definition file

apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app
spec:
  replicas: 3
  selector:
    matchLabels:
      app: my-app
  template:
    metadata:
      labels:
        app: my-app
    spec:
      schedulerName: my-scheduler
      containers:
      - name: my-container
        image: my-image
        ports:
        - containerPort: 80
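
To confirm which scheduler actually placed a pod, the scheduling events can be inspected; the SOURCE column names the scheduler:

kubectl get events -o wide | grep -i scheduled
kubectl logs my-scheduler -n kube-system # scheduler pod name is illustrative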

+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
Logging & Monitoring -
Heapster is deprecated; it has been replaced by the Metrics Server.
Metric Server - Metric Server is a component responsible for collecting resource
metrics such as CPU and memory usage from nodes and pods. It provides these metrics
to other components, such as the Horizontal Pod Autoscaler, which uses the metrics
to automatically scale the number of replicas for a specific deployment based on
the observed resource utilization.

git clone https://github.com/kubernetes-sigs/metrics-server.git
cd metrics-server

Edit the deploy/1.8+/metrics-server-deployment.yaml file to add the --kubelet-insecure-tls flag to the container command under spec.template.spec.containers. This is required to allow the Metric Server to connect to the kubelet's HTTPS endpoint without verifying the TLS certificate.

containers:
- name: metrics-server
  image: k8s.gcr.io/metrics-server/metrics-server:v0.5.0
  command:
  - /metrics-server
  - --kubelet-insecure-tls # Add this line
  - --kubelet-preferred-address-types=InternalIP

kubectl apply -f deploy/1.8+/

kubectl get deployment metrics-server -n kube-system
kubectl get pods -n kube-system # Ensure metrics-server pods are running

kubectl top node
kubectl top pods

kubectl logs webapp-1
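
A few common variations of the logs command (the flags are standard; pod and container names are illustrative):

kubectl logs -f webapp-1 # stream logs
kubectl logs webapp-1 -c <container-name> # pick a container in a multi-container pod
kubectl logs webapp-1 --previous # logs from the previous (crashed) instance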

+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++

Application Lifecycle Management -

Rolling Updates -
kubectl create -f deployment-definition.yml
kubectl get deployments
kubectl apply -f deployment-definition.yml
kubectl set image deployment/myapp-deployment nginx=nginx:1.9.1
kubectl rollout status deployment/myapp-deployment
kubectl rollout history deployment/myapp-deployment
kubectl rollout undo deployment myapp-deployment
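
The rollout behaviour is controlled by the deployment's update strategy; a sketch of the relevant fields, plus rolling back to a specific revision:

spec:
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 1 # at most one pod down during the update
      maxSurge: 1 # at most one extra pod above the desired count
kubectl rollout undo deployment/myapp-deployment --to-revision=1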
