Understanding Kubernetes in a visual way

Learn & discover Kubernetes in sketch notes
- with some tips included -

Aurélie Vache
Thanks to:

Laurent, my husband, Alexandre, my son, and all the caring people who believe in me every day. Thank you so much!

Reviewers:

Thanks to Gaelle Acas, Denis Germain & Stephane Philippart who took the time to review this book.

Changelog:

Release date: 31/05/2020
Last release: 29/04/2023
Kubernetes version: 1.27
Length: 279 pages

Licence:

Creative Commons BY-NC-ND
http://creativecommons.org/licenses/by-nc-nd/3.0/
Table of contents

Kubernetes components
Kubeconfig file
Kubectl tips
Namespace
Resource Quota
Pod
  · Lifecycle
  · Deletion
  · Liveness & Readiness probes
  · Startup probe
  · Container lifecycle events
  · Execute commands in containers
Init Containers
Job
CronJob
ConfigMap
Secret
Deployment
  · Rolling Update
Pull images configuration
ReplicaSet
DaemonSet
Service
Label & Selector
Ingress
PV, PVC & StorageClass
Horizontal Pod Autoscaler (HPA)
LimitRange
Resource requests & limits
Pod Disruption Budget (PDB)
Quality of Service (QoS)
Network Policies
RBAC
Pod Security Policy (PSP)
Pod (Anti)Affinity & Node (Anti)Affinity
Node
  · Lifecycle
  · Node operations
Debugging/Troubleshooting
Kubectl convert
Tools
  · Kubectx
  · Kubens
  · Stern
  · Krew
  · K9s
  · Skaffold
  · Kustomize
  · Tips
  · Kubeseal
  · Trivy
  · Velero
  · Popeye
  · Kyverno
Changes
  · Kubernetes 1.18
  · Kubernetes 1.19
  · Kubernetes 1.20
  · Kubernetes 1.21
  · Kubernetes 1.22
  · Kubernetes 1.23
  · Kubernetes 1.24
  · Kubernetes 1.25
  · Kubernetes 1.26
  · Kubernetes 1.27
  · Docker deprecation
Glossary
In the same collection
Kubernetes components

etcd

→ Distributed key-value database

→ Stores & replicates cluster state

→ Distributed consensus based on the quorum model

⚠ etcd: single point of failure

API Server

→ Exposes the Kubernetes API

→ Kubernetes Control Plane frontend

"Everyone is talking to me, it's cool! I'm designed to scale horizontally, so you can balance traffic between instances."
Scheduler

→ Responsible for finding the best Node for newly created Pods

→ Existing Nodes need to be filtered according to the specific scheduling requirements

API Server: "I want a suitable Node for this new Pod."
Scheduler: "No problem, my mission is to find a suitable Node for it!"
Controller Manager

→ Responsible for making changes in order to move the current state to the desired state

→ Runs several separate controller processes:

Node controller
→ Watches Nodes & takes action when they are down

ReplicaSet controller
→ Ensures the desired number of Pods are always up and available

Endpoints controller
→ Populates the Endpoints objects (responsible for Services & Pods connections)

Service Account & Token controllers
→ Create default accounts and API access tokens for new namespaces
10
Knbeletj
→ Responsible for running State on each Node

→ Checks that all containers on each Node

are
healthy
→ Can create and delete Pods

→ One instance of Kubelet per Node

|☒ù|
**
Oh, oh ! Ineed

A Pod

failed ! | one
!

Ï÷-
Prox
→ Runs on each Nodes

→ Allows network communication to Pods 11

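Tip: on many clusters (kubeadm-style setups, minikube, kind) these control plane components run as static Pods in the kube-system namespace, so you can look at them with kubectl. A minimal sketch; managed clouds may hide the control plane:

$ kubectl get pods -n kube-system -o wide
$ kubectl logs -n kube-system etcd-my-node   # "etcd-my-node" is a hypothetical Pod name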

Kubeconfig file

"Hi! I want to list the pods in the my-ns namespace!"

$ kubectl get pods -n my-ns

"OK, but in which cluster?"

→ Kubectl knows where (in which cluster) to execute the command thanks to the kubeconfig file

→ Kubeconfig files are structured as YAML files

How are kubeconfig files loaded?

① --kubeconfig flag, if specified

② KUBECONFIG environment variable, if set

③ $HOME/.kube/config, by default

Tips:

⚠ You can't just append YAML config files in one kubeconfig file!

It's possible to specify several files in the KUBECONFIG environment variable:

$ export KUBECONFIG=file1:file2:file3
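A kubeconfig file groups clusters, users and contexts. A minimal sketch (names and server URL are illustrative):

apiVersion: v1
kind: Config
clusters:
- name: my-cluster
  cluster:
    server: https://my-cluster.example.com
contexts:
- name: my-context
  context:
    cluster: my-cluster
    user: my-user
    namespace: my-ns
current-context: my-context
users:
- name: my-user
  user:
    token: <my-token>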
" Answer
8
TKubecte
iKuetips
'
supports one minor
mmeu
version

of Kubernetes API ( older and newer


-Server
)
Example
8
-

API Server 8 1.23


.
Kubectl 1122
1123 .24
or
:
Kubecte
CLI

logic
IIs Kubecre
Avery
=
14
"I want to scale my-deploy to 5 replicas"

$ kubectl scale deploy my-deploy --replicas=5

"I want to execute the command ls in my-pod"

$ kubectl exec my-pod -it -- ls

"I want to list the resources that are not in a namespace"

$ kubectl api-resources --namespaced=false

"I want to delete all the Pods that have a status != Running"

$ kubectl delete po --field-selector=status.phase!='Running'

"I want to list the Pods ordered by status"

$ kubectl get pod --sort-by=.status.phase

"I want the list of Namespaces, only their name"

$ kubectl get ns -o custom-columns="NAME":".metadata.name" --no-headers

"I want to remove the version selector in my-service Service"

$ kubectl patch service my-service --type json -p '[{"op": "remove", "path": "/spec/selector/version"}]'
-w (--watch)
This option aims to listen for changes to a particular object.
Example:
$ kubectl get pod -w

-o wide
This kubectl option includes additional information.
Example:
$ kubectl get pod my-pod -o wide

-o name
This kubectl option displays only the type & the name of the resource.
Example:
$ kubectl get secret -l sealedsecrets.bitnami.com/sealed-secrets-key -o name

--cascade
This option allows to delete only the resource (not its child objects).
Example:
$ kubectl delete job my-job --cascade=orphan -n my-namespace
Namespace

→ A way of isolation:
  · per project
  · per team
  · per family of components

project-A | project-B | project-C | kube-system

→ Resource names must be unique inside a namespace, but can be identical across namespaces:

my-ns-1: svc/my-svc, pod/my-pod, deploy/my-deploy
my-ns-2: svc/my-svc, pod/my-pod-2

→ Each resource can appear in only one namespace

⚠ If a namespace is deleted, all the resources inside are deleted too!

⚠ Not all resources are in a namespace (Nodes, PersistentVolumes...)

⚠ A Pod in namespace A can't read a Secret from namespace B
Special namespaces:

default → the namespace by default

kube-system → for objects created by Kubernetes

kube-public → reserved mainly for cluster use & in case specific resources should be publicly available

kube-node-lease → holds a Lease object per Node.
Node heartbeats (updates of these Lease objects) help the control plane to determine the availability of a Node.
There are 2 forms of heartbeats: updates of the Node's status & Lease objects.
How To?

} Switch to the my-ns namespace

$ kubectl config set-context --current --namespace=my-ns

or install and use the kubens tool (p.211):

$ kubens my-ns

} View the current namespace

$ kubectl config view | grep namespace

} List all namespaces

$ kubectl get namespaces

} List all Pods in all namespaces

$ kubectl get pods --all-namespaces

or

$ kubectl get pods -A

} List all resource types that are "namespaced"

$ kubectl api-resources --namespaced=true
"

}
"

Delete a
namespace
stucked in Termina ting state

Ëa please help me µ
'

ter-inat.no#y.stucked-nsKubernetesversion(y
bb
-
-
Works eren in Old

$ kubectl edit ns my-stucked-ns

and remove entries in


finaliser section

23
Resource Quota

→ Limits the usage and number of resources in a namespace

⚠ If a quota is set on memory or CPU, you need to specify requests & limits for Pods

How To?

} Create a Resource Quota

$ kubectl create quota my-quota --hard=cpu=1,pods=2

} Describe a Resource Quota & display what is consumed and the limit

$ kubectl describe quota my-quota -n my-namespace
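A declarative equivalent of the quota above (a sketch):

apiVersion: v1
kind: ResourceQuota
metadata:
  name: my-quota
  namespace: my-namespace
spec:
  hard:
    cpu: "1"
    pods: "2"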
Pod

→ Smallest unit in a cluster

→ Can contain several containers

→ One IP address per Pod
How To?

} Create a Pod with busybox image in my-namespace

$ kubectl run busybox --image=busybox --restart=Never -n my-namespace

} Create a Pod with busybox image & run the command "env"

$ kubectl run busybox --image=busybox --restart=Never -n my-namespace -it -- /bin/sh -c 'env'

} Copy a file stored locally to a Pod

$ kubectl cp myfile.txt my-pod:/path/myfile.txt

} List all Pods & in which Nodes they are running

$ kubectl get pod -o wide --all-namespaces
Pod lifecycle

Pending
Container images have not yet been created.

Running
Bound to a Node. All containers are created. At least one is running.

Succeeded
All containers are terminated with success. They will not be restarted.

Failed
At least one container is in failure (exited with a non-zero code or terminated by the system).

Unknown
The status of the Pod could not be obtained.
Why? Temporary problem of communication with the host of the Pod.
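You can read the current phase of a Pod directly (a quick check):

$ kubectl get pod my-pod -o jsonpath='{.status.phase}'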
Pod deletion

→ A Pod can be deleted in order to ask Kubernetes to start another one (with a new configuration or new code, for example)

→ Sometimes Pods are stuck in Terminating/Unknown state

→ The deletion can be manual or forced

⚠ Manual/forced deletion must be used carefully!

How To?

} Delete a Pod gracefully
(it waits for the configured grace period)

$ kubectl delete pod my-pod

} Delete a Pod instantly (manually forced)

$ kubectl delete pod my-pod --grace-period=0 --force

⚠ When you force a Pod's deletion, the name is automatically freed from the API Server

} Delete a Pod stuck in Unknown state

$ kubectl patch pod my-pod -p '{"metadata": {"finalizers": null}}'
Liveness & Readiness probes

⚠ Liveness & Readiness probes should be configured!

Kubelet: "Are you alive?"
Pod: livenessProbe: OK ... livenessProbe: NOK

→ Kubelet uses liveness probes to know when to restart a container

→ periodSeconds: asks Kubelet to perform a liveness probe every xx seconds

→ initialDelaySeconds: asks Kubelet to wait xx seconds before performing the first probe

→ terminationGracePeriodSeconds: asks Kubelet to wait before the deletion of the container
35
->
Different types of Liveness probles exist:

·
Define a Liveness probe that sends a HTTP
request
on container's server

on
port 8080

on / healthz

et eprobe
If CTTPcode) 2000
=

and [400

apiVersion: v1
kind: Pod

spec:
containers:
- name: my-container
image: my-image:1.0
livenessProbe:
httpGet:
path: /healthz
port: 8080
initialDelaySeconds: 60
periodSeconds: 20
terminationGracePeriodSeconds: 60
· Define a liveness probe that executes the command "cat /tmp/healthy" in the container

Probe OK if the return code == 0
apiVersion: v1
kind: Pod

spec:
containers:
- name: my-container
image: my-image:1.0
livenessProbe:
exec:
command:
- cat
- /tmp/healthy
initialDelaySeconds: 5
periodSeconds: 5

· Define a liveness probe which connects to TCP port 8080

apiVersion: v1
kind: Pod

spec:
containers:
- name: my-container
image: my-image:1.0
livenessProbe:
tcpSocket:
port: 8080
initialDelaySeconds: 60
periodSeconds: 30

· Define a liveness probe that queries a gRPC service

apiVersion: v1
kind: Pod

spec:
containers:
- name: my-container
image: my-image:1.0
livenessProbe:
grpc:
port: 9090
service: my-service
initialDelaySeconds: 8
periodSeconds: 12

Readiness probe

Kubelet: "Are you ready?"
readinessProbe: OK → the Service routes traffic to the Pod
readinessProbe: NOK → the Pod is removed from the Service

→ Kubelet uses readiness probes to know when a container is ready to start accepting traffic.
If not, Kubelet can remove the link between the Service and the Pod.

→ The readiness probe configuration is quite similar to the liveness probe one.
Startup probe

→ Holds off all the other probes until the Pod finishes its startup

→ Gives time to a container to finish its startup

Kubelet: startupProbe: OK
Pod: "I'm starting, please wait!"

} Don't test for liveness until the HTTP endpoint is available:
apiVersion: v1
kind: Pod
metadata:
name: my-pod
labels:
app: my-app
spec:
containers:
- name: my-container
image: my-image:1.0
ports:
- name: liveness-port
containerPort: 8080
livenessProbe:
httpGet:
path: /healthz
port: liveness-port
failureThreshold: 1
periodSeconds: 10

  startupProbe:
    httpGet:
      path: /healthz
      port: liveness-port
    failureThreshold: 30   # the app has 5 min (30 x 10s)
    periodSeconds: 10      # to finish its startup
Container lifecycle events

→ Sometimes you need to tell Kubernetes that your Pod should only start when a condition is satisfied

→ Sometimes you want to execute commands before Kubernetes terminates a Pod

→ Two hooks exist for that:
  · postStart: executed right after the container is created
  · preStop: executed right before the container is terminated
How To?

} Create a Deployment with 3 replicas. Its Pods will start only when istio-proxy (envoy) is ready:
apiVersion: apps/v1
kind: Deployment
metadata:
name: my-app
labels:
app: my-app
spec:
replicas: 3
selector:
matchLabels:
app: my-app
template:
metadata:
labels:
app: my-app
spec:
containers:
- name: my-app
image: my-image:latest
lifecycle:
postStart:
httpGet:
path: /healthz/ready
port: 15020

46
Ë

wil berestartedasbngastheCho @ststartfails.l


App container

Create a
Deployment with a container that

exécutes a Command at the start of the Pod


spec:
containers:
- name: my-app
image: my-image:latest
lifecycle:
postStart:
exec:
command: ["/bin/sh", "-c", "echo Hello
Kubernetes lovers"]

47
to goodbye
Sag
Wait,
you reed
to

d)

ËÏH◦Ï→
) Create a
Deployment that hills gracefully
app bef.ie the Pod terminates
my
-


spec:
containers:
- name: lifecycle-demo-container
image: nginx
lifecycle:
preStop:
exec:
command: ["/bin/sh", "-c", "my-app -kill;
do sleep 1; done"]
48
Execute commands in containers

→ The kubectl exec command allows you to run commands inside a container of a Pod

→ Useful in order to debug your container

→ -it (--stdin + -t): interactive shell

} Connect to a specific container & open an interactive shell

$ kubectl exec my-pod -c my-container -it -- sh

> ls -l
> tail -f /var/log/debug.log
Tips:

→ You can specify a default container with an annotation:
apiVersion: v1
kind: Pod
metadata:
name: my-pod
annotations:
kubectl.kubernetes.io/default-container: my-container-2
spec:
containers:
- name: my-container
image: my-image
- name: my-container-2
image: my-image-2
command: ["/bin/sh", "-c"]
args:
- while true; do
date >> /html/index.html;
sleep 1;
done

} Connect to the my-container-2 container
(no need to specify the -c option, thanks to the annotation)

$ kubectl exec my-pod -it -- sh
Init Containers

→ Init containers run before the app containers in a Pod

→ Allow you to prepare the main container(s) & separate your app from the initialization logic

→ Containers & init containers share the same volumes

⚠ A Pod can have one or more containers & one or more init containers
⚠ Kubelet runs each init container sequentially & then the usual containers:

① Init container 1 ✓
② Init container 2 ✓
③ Containers

⚠ Each init container needs to start successfully before executing the next one.
⚠ If restartPolicy ≠ Never:

① Init container 1: ready ✓
② Init container 2: not ready ✗ → Kubelet restarts it until it succeeds

⚠ If restartPolicy = Never:

① Init container 1: ready ✓
② Init container 2: not ready ✗ → the whole Pod is marked as failed
→ Useful for:

} Init DB schemas
} Set up permissions
} Clone a Git repository to a shared volume
} Send data to the main app
} Check the accessibility of a Service
How To?

} Create a Pod with a my-app container & an init container that checks a Service is accessible:
apiVersion: v1
kind: Pod
metadata:
name: my-app
spec:
containers:
- name: my-container
image: my-app:1.0
command: ['sh', '-c', 'echo The app is running! && sleep 3600']
initContainers:
- name: check-service
image: busybox
command: ['sh', '-c', "until nslookup my-svc.$(cat /var/run/
secrets/kubernetes.io/serviceaccount/namespace).svc.cluster.local;
do echo waiting for my-svc; sleep 5; done"]

} Create a Pod with:
  · an init container that executes a git clone into the /usr/share/nginx/html folder
  · an nginx container
  · a shared volume
apiVersion: v1
kind: Pod
metadata:
name: my-website
spec:
initContainers:
- name: clone-repo
image: alpine/git
command:
- git
- clone
- --progress
- https://github.com/scraly/my-website.git
- /usr/share/nginx/html
volumeMounts:
- name: website-content
mountPath: "/usr/share/nginx/html"
containers:
- name: nginx
image: nginx
ports:
- name: http
containerPort: 80
volumeMounts:
- name: website-content
mountPath: "/usr/share/nginx/html"
volumes:
  - name: website-content
    emptyDir: {}
⚠ A Pod is mortal, so every time a new one is starting, it will have a fresh content of your Git repository, retrieved by the init container.

Developers commit & push to the repository → the init container clones it → the nginx container serves it.
Job

} A process that runs a certain time, to completion:
  * batch process
  * backup
  * database migration/clean

} A Job runs one or more Pods and ensures a specified number of them successfully terminate

} When the Pod launched by a Job is finished, its status will be Completed.

} If the Pod failed, another Pod will run, until the number of completions is done.

} A Job can be launched by a CronJob:

$ kubectl get cronjob

NAME    SCHEDULE      SUSPEND   ACTIVE   LAST SCHEDULE   AGE
my-cj   0 5 */1 * *   False     0        3h53m           10s
How To?

} Create a Job from busybox image (imperative way)

$ kubectl create job my-job --image=busybox

} Create a Job from busybox image (declarative way)

apiVersion: batch/v1
kind: Job
metadata:
  name: my-job
spec:
  backoffLimit: 5              # optional
  activeDeadlineSeconds: 120   # optional
  completions: 1               # optional
  parallelism: 1               # optional
  template:
    spec:
      containers:
      - name: busybox
        image: busybox
      restartPolicy: OnFailure   # required

} Create a Job from a CronJob

$ kubectl create job my-job --from=cronjob/my-cronjob
} Create a Job that will be deleted automatically after the defined time

apiVersion: batch/v1
kind: Job
metadata:
  name: my-job-with-ttl
spec:
  ttlSecondsAfterFinished: 100   # TTL = Time To Live (stable since 1.23)
  template:
    metadata:
      name: my-job-with-ttl
    spec:
      containers:
      - name: busybox
        image: busybox
      restartPolicy: Never

} Delete a Job (and all its child Pods)

$ kubectl delete job my-job

} Delete a Job only

$ kubectl delete job my-job --cascade=orphan
restartPolicy

· Never: if the Pod failed, the Job controller starts a new Pod.

· OnFailure: if the Pod failed (for any reason), the Pod stays on the Node but its containers will re-run.

⚠ restartPolicy is applied to a Pod (not a Job) and cannot be set to Always.

backoffLimit

Number of retries before considering a Job as failed.
By default, backoffLimit is equal to 6.

activeDeadlineSeconds

Once a Job reaches its activeDeadlineSeconds, all of its running Pods will be killed:
status: DeadlineExceeded

⚠ ... even if the backoffLimit is not yet reached

completions

By default, completions is equal to 1, which means the normal behavior for a Job is to run Pods until one Pod terminates successfully.
If we set completions equal to 5, for example, the Job controller will spawn Pods until the number of completed Pods reaches this value.

parallelism

parallelism allows you to run multiple Pods at the same time.
By default, parallelism is set to 1.
CronJob

→ Creates Jobs on a time-based schedule

→ Or periodically ("Time to wake up!")

→ Based on the Cron format:

$ kubectl get cronjob

NAME    SCHEDULE      SUSPEND   ACTIVE   LAST SCHEDULE   AGE
my-cj   0 5 */1 * *   False     0        3h53m           10s
How To?

} Create a CronJob from busybox image (declarative way)

apiVersion: batch/v1
kind: CronJob
metadata:
  name: hello
spec:
  schedule: "*/1 * * * *"
  timeZone: "Europe/Paris"        # a valid time zone
  failedJobsHistoryLimit: 3       # number of failed Jobs to keep
  successfulJobsHistoryLimit: 1   # number of successful Jobs to keep
  jobTemplate:
    spec:
      template:
        spec:
          containers:
          - name: hello
            image: busybox
            args:
            - /bin/sh
            - -c
            - date; echo Hello readers
          restartPolicy: OnFailure

} Create a CronJob from busybox image & run it every 5 minutes

$ kubectl create cronjob my-cj --image=busybox --schedule="*/5 * * * *" -n my-namespace

} Change the CronJob scheduling

$ kubectl patch cronjob my-cronjob -p '{"spec":{"schedule": "0 */1 */1 * *"}}'

} Launch a Job from an existing CronJob

$ kubectl create job --from=cronjob/my-cronjob my-job -n my-namespace
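You can also temporarily pause a CronJob without deleting it, through its suspend field (a sketch):

$ kubectl patch cronjob my-cj -p '{"spec":{"suspend":true}}'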
ConfigMap

⚠ Put non-sensitive data in a ConfigMap!
Don't hardcode configuration data in your app (to stay Twelve-Factor App compliant).

→ Aim: separate the configuration from the Pods

→ Allows to deploy the same app in different environments:
my-cm-stag (staging), my-cm-prod (production)

→ 3 ways to create a ConfigMap:
  · from key and value
  · from a file
  · from an env file
How To?

} Create a ConfigMap from key and value

$ kubectl create cm my-cm-1 --from-literal=my-key=my-value -n my-namespace

} Create a ConfigMap from a file

$ kubectl create cm my-cm-2 --from-file=my-file.txt -n my-namespace

} Create a ConfigMap from an env file

$ kubectl create cm my-cm-3 --from-env-file=my-envfile.txt -n my-namespace

} Create a Pod with a ConfigMap mounted as a volume
apiVersion: v1
kind: Pod
metadata:
name: my-pod
spec:
containers:
- name: my-container
image: busybox
volumeMounts:
- name: my-cm-volume
mountPath: /etc/myfile.txt
volumes:
- name: my-cm-volume
configMap:
      name: my-cm
} Create a Pod and load the value of the variable my-key into an env variable MY_ENV_KEY
apiVersion: v1
kind: Pod
metadata:
name: my-pod-2
spec:
containers:
- name: my-container
image: busybox
env:
    - name: MY_ENV_KEY
valueFrom:
configMapKeyRef:
name: my-cm
key: my-key

} Create a Pod & fill env vars from a ConfigMap
apiVersion: v1
kind: Pod
metadata:
name: my-pod-3
spec:
containers:
- name: my-container
image: busybox
envFrom:
- configMapRef:
name: my-cm

Secrets

→ Save sensitive data in Secrets

→ Base64 encoded "at rest"

→ Automatically decoded when attached to a Pod

→ 3 ways to create a Secret:
  · from key and value
  · from a file
  · from an env file

→ Several types of Secret:
  · generic
  · docker-registry
  · tls
How To?

} Create a Secret from key and value

$ kubectl create secret generic my-secret --from-literal=password='my-awesome-password'

} Create a Secret from a file, in my-namespace

$ kubectl create secret generic my-secret --from-file=password.txt -n my-namespace

} Retrieve a secret, open it, extract the desired data & decode it

$ kubectl get secret my-ca-secret -o=go-template='{{index .data "ca.crt" | base64decode}}'
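To consume a Secret from a Pod, reference it in an env var (a sketch, reusing the my-secret created above):

apiVersion: v1
kind: Pod
metadata:
  name: my-pod
spec:
  containers:
  - name: my-container
    image: busybox
    env:
    - name: PASSWORD
      valueFrom:
        secretKeyRef:
          name: my-secret
          key: password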
Deployment

→ A Deployment handles Pod creation

→ Deployments are responsible for Pods & must ensure they are running

→ A Deployment creates a ReplicaSet that creates Pods, according to the number of specified replicas

Deployment → ReplicaSet → Pods

→ Features:
  · Rolling update / rollout
  · Deployment history
How To?

} Create a Deployment that creates one Pod with nginx image (imperative way)

$ kubectl create deploy my-deploy --image=nginx

} Create a Deployment that creates 3 replicas of a Pod with nginx image (declarative way)
apiVersion: apps/v1
kind: Deployment
metadata:
name: my-deploy
labels:
app: nginx
spec:
replicas: 3
selector:
matchLabels:
app: my-deploy
template:
metadata:
labels:
app: my-deploy
spec:
containers:
- name: nginx
image: nginx

} Scale a Deployment to 5 replicas

$ kubectl scale deploy my-deploy --replicas=5
Rolling Update

→ A rolling update allows zero-downtime during a Deployment update

→ Incrementally updates instances, by creating new ones

→ Updates are versioned & any update can be reverted to a previous version

(Sketch: a Deployment "my-deploy" is created with image busybox, then updated to image busybox:1.36.0. A new Pod is created; the old Pod is removed once the new one is started. The same thing happens for the 2nd replica Pod. A rollback to the 2nd revision is also possible.)
How To?

① Create a Deployment in a file named my-deploy.yaml

apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-deploy
  labels:
    app: busybox
spec:
  replicas: 3
  selector:
    matchLabels:
      app: busybox
  template:
    metadata:
      labels:
        app: busybox
    spec:
      containers:
      - name: busybox
        image: busybox
  strategy:
    rollingUpdate:
      maxSurge: 100%       # 100% of the Pods will be created
      maxUnavailable: 0%   # & then the old Pods will be deleted
    type: RollingUpdate

② Apply the manifest YAML file

$ kubectl apply -f my-deploy.yaml
③ Update the deployment container's image name/tag

$ kubectl set image deploy my-deploy busybox=busybox:1.36.0 --record

④ Show the deployment rolling update status

$ kubectl rollout status deploy my-deploy

⑤ Show the history of deployment creations/updates

$ kubectl rollout history deploy my-deploy

⑥ Show the history of deployment creations/updates of the 2nd revision

$ kubectl rollout history deploy my-deploy --revision=2

⑦ Rollback to the previous deployment

$ kubectl rollout undo deploy my-deploy

Bonus:

} Restart a deployment (kill the existing Pods and start new Pods)

$ kubectl rollout restart deploy my-deploy

} Pause and then resume the deployment of the resource

$ kubectl rollout pause deploy my-deploy

$ kubectl rollout resume deploy my-deploy
Pull images configuration

→ In order to pull container images, Kubernetes needs several configurations

→ These configurations can be set on:
  · Pod
  · ReplicaSet
  · Deployment
  · DaemonSet
  · CronJob
  · StatefulSet
  · Job
  · ... every resource type including a container spec

imagePullPolicy
containers:
- name: my-container
image: my-registry.com/my-app:tag
imagePullPolicy: Always

· IfNotPresent (default value):
Kubelet doesn't pull the image from the registry if the image already exists on the Node

· Always:
The image is pulled every time the Pod is started

· Never:
The image is assumed to exist locally.
No attempt is made to pull the image.
imagePullSecrets

→ For private registries, Kubernetes needs credentials to pull the images

How To?
spec:
  containers:
  - name: my-container
    image: my-registry.com/my-app:tag
    imagePullPolicy: Always
  imagePullSecrets:            # Pod-level field
  - name: registry-secret

} Kubernetes uses the docker-registry secret type for image registry authentication

$ kubectl create secret docker-registry registry-secret
  --docker-server=<host>:8500
  --docker-username=<user_name>
  --docker-password=<user_password>
  --docker-email=<user_email>

} You can also create a secret from an existing .docker/config.json file:

$ kubectl create secret generic registry-secret
  --from-file=.dockerconfigjson=<path/to/.docker/config.json>
  --type=kubernetes.io/dockerconfigjson
ReplicaSet

→ A ReplicaSet creates Pods according to the number of specified replicas

Deployment → ReplicaSet → Pods

→ A ReplicaSet can be created manually, or automatically by a Deployment

⚠ Prefer a Deployment over creating a ReplicaSet directly
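A minimal ReplicaSet manifest, for the rare cases where you create one directly (a sketch):

apiVersion: apps/v1
kind: ReplicaSet
metadata:
  name: my-rs
spec:
  replicas: 3
  selector:
    matchLabels:
      app: my-app
  template:
    metadata:
      labels:
        app: my-app
    spec:
      containers:
      - name: busybox
        image: busybox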
DaemonSet

→ Ensures that all eligible Nodes run a copy of a Pod defined in a DaemonSet

⚠ The scheduler doesn't manage these Pods; the DaemonSet controller handles them instead.

→ If a new Node is added to the cluster, the DaemonSet controller will create a Pod on it.

Update strategies:

· RollingUpdate
The update strategy by default.
Old Pods will be automatically killed and new ones will be created: one Pod per Node.

· OnDelete
Existing Pods need to be manually deleted in order for the new ones to be created.
How To?

① Create a DaemonSet that creates one busybox Pod per Node

apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: my-app
spec:
  selector:
    matchLabels:
      name: my-app
  template:
    metadata:
      labels:
        name: my-app
    spec:
      tolerations:                            # specify that the DaemonSet
      - key: node-role.kubernetes.io/master   # will run on
        effect: NoSchedule                    # master Nodes
      containers:
      - name: busybox
        image: busybox
② Create a DaemonSet that creates Pods only on desired Nodes
apiVersion: apps/v1
kind: DaemonSet
metadata:
name: my-app
spec:
selector:
matchLabels:
name: my-app
template:
metadata:
labels:
name: my-app
spec:
nodeSelector:
my-key: my-value
containers:
- name: busybox
image: busybox

Troubleshooting

→ A DaemonSet wants to create Pods on each Node, but one of them doesn't have enough capacity: the Pod is stuck in "Pending" state.

Quick solutions:

→ List all the Nodes in the cluster

$ kubectl get nodes

→ Find the Nodes that have your Pods

$ kubectl get pods -l name=my-app -o wide -n my-ns

→ Compare the lists in order to find the Node in trouble.

→ Delete manually, on this Node, all the Pods not controlled by the DaemonSet.
tËï "

u
K
v
Service

( Pods are mortal )

→ Allows to reach are


application
with a
single IP address

Assigns to
→ a
group of Pods a
unique DNS name


(t -

ns.svc cluster local


my awesome-app.my
- -
.
.

↓ •Ï
123
my auesome app
- - -

s →

- •☒☒
/
Ë
456
my auesome app
-
-

\,
-


-789
my auesome app
- -

101
?⃝
→ The set of Pods targeted by a Service is usually determined by a selector:

selector: app: my-app

my-awesome-app.my-ns.svc.cluster.local
  → my-awesome-app-123 (app: my-app)
  → my-awesome-app-456 (app: my-app)
  → my-awesome-app-789 (app: my-app)
→ Several kinds of Services exist:
  · ClusterIP
  · NodePort
  · ExternalName
  · LoadBalancer
  · Headless

Types:

· ClusterIP

→ Exposes the Service on a cluster-internal IP

→ Reachable at spec.clusterIP:spec.ports[0].port

⚠ Only reachable from within the cluster
· NodePort

→ Exposes the Service on each Node's IP

→ <Node IP>:spec.ports[0].nodePort

→ Reachable from outside the cluster by requesting <NodeIP>:<NodePort>
· LoadBalancer

→ Creates an external load balancer (typically provided by your cloud provider)

→ Assigns a fixed external IP address to the Service
· ExternalName

→ Provides an internal alias for an external DNS name

→ Requests to my-service.my-namespace.svc.cluster.local will be redirected to the external name (e.g. my-service.example.com)
· Headless

→ Useful in order to interface with other service discovery mechanisms

→ A cluster IP is not allocated (clusterIP: None)

→ kube-proxy doesn't handle the Service

→ Exposes all the replicas as DNS entries
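A headless Service manifest is the same as a ClusterIP one, with clusterIP set to None (a sketch):

apiVersion: v1
kind: Service
metadata:
  name: my-headless-svc
spec:
  clusterIP: None    # headless: DNS returns the Pod IPs directly
  selector:
    app: my-app
  ports:
  - port: 80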
How To?

} Create a Service that exposes a Deployment on port 8080

$ kubectl expose deploy my-deploy --type=LoadBalancer --name=my-svc --target-port=8080 -n my-namespace

} Create a Pod and expose it through a Service

$ kubectl run my-pod --image=nginx --restart=Never --port=80 --expose -n my-namespace

⚠ A Service is headless when its clusterIP field is "None"

⚠ The clusterIP field is immutable: think about it before setting it!
Labels & Selectors

Labels

→ Key/value pairs attached to an object:
app: my-app

→ Several objects/resources can have the same label:
my-pod-1 (app: my-app), my-pod-2 (app: my-app)

⚠ The kubernetes.io/ and k8s.io/ prefixes in label keys are reserved for Kubernetes core components

How To?

} Create a Pod with two labels
apiVersion: v1
kind: Pod
metadata:
name: my-app
labels:
app: my-app
version: v1
spec:
containers:
- name: nginx
image: nginx
ports:
- containerPort: 80

Selectors

→ Selectors use labels to filter or select objects/resources

→ Selectors can be:
  · equality-based: =, ==, !=
  · set-based (match expressions): in, notin, exists
How To?

} Show the labels of all my Pods

$ kubectl get pod --show-labels -n my-namespace

} Create a Deployment that will manage Pods with label app: my-app

apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-deploy
  labels:
    app: my-app
spec:
  selector:
    matchLabels:    # equality-based
      app: my-app
} Create a Deployment that manages Pods that have a label version equal to v1 or v2 & app = my-app

apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-deploy
  labels:
    app: my-app
spec:
  selector:
    matchLabels:
      app: my-app
    matchExpressions:    # set-based
    - {key: version, operator: In, values: ["v1","v2"]}

} List the Pods that have labels app: my-app & version: v2

$ kubectl get pod -l app=my-app,version=v2

} List the Pods that have labels app: my-app & version = v1, v2 or v3

$ kubectl get pod -l app=my-app,'version in (v1,v2,v3)'
Ingress

→ Allows access to your Services from outside the cluster

→ Avoids creating a LoadBalancer Service per service

→ Single external endpoint (which can be secured through SSL/TLS)

→ An Ingress is implemented by a 3rd party: an Ingress Controller
  ↳ it extends the specs to support additional features

→ Consolidates several routes in a single resource
How To?

} Create an Nginx Ingress which defines two path-based routes:
/my-api/v1 to Service my-api-v1
/my-api/v2 to Service my-api-v2

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: my-ingress
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: /
spec:
  rules:
  - host: scraly.com   # if no host is specified, the rules are
    http:              # applied to all inbound HTTP traffic
      paths:
      - path: /my-api/v1
        pathType: Prefix
        backend:
          service:
            name: my-api-v1
            port:
              number: 80
      - path: /my-api/v2
        pathType: Prefix
        backend:
          service:
            name: my-api-v2
            port:
              number: 80
ingress.class

In order to specify which Ingress should be handled by which controller:
metadata:
name: my-gce-ingress
annotations:
kubernetes.io/ingress.class: gce

pathType

How the HTTP path should be matched:

· Exact: matches the URL path exactly (with case sensitivity)

· Prefix: matches the URL path prefix, split by /
/my-api/v1 matches /my-api/v1 and /my-api/v1/toto
} Create an Ingress secured through TLS
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
name: my-ingress-with-tls
spec:
tls:
- hosts:
- scraly.com
secretName: secret-tls
rules:
- host: scraly.com

⚠ Ingress implementations aren't homogeneous: Ingress is still evolving, and each proxy can introduce its own kind of resource (for example, Contour's resource is named IngressRoute).
"I'm sure you have understood it!"
PV, PVC & StorageClass

In short: Pods are mortal by default & Nodes will not live forever too!

→ To store data permanently, use a PersistentVolume (PV)

→ Create a PersistentVolumeClaim (PVC) to request this physical storage

→ Pods will use this PVC as a volume
Static provisioning (without StorageClass):

Pod → PVC → PV → physical storage
(the PV is created manually, in advance)

Dynamic provisioning:

Pod → PVC → StorageClass → dynamically creates the PV → physical storage
PersistentVolume (PV)

→ Provides a storage location that has a lifetime independent of any Node or Pod

→ A PV is not sticked in a namespace

→ Supports different persistent storage types: NFS, cloud-provider-specific storage systems...

→ After creation, a PV has a STATUS of "Available". It means it's not bound to a PVC yet.

→ 2 sorts of provisioning:
  · static: without storageClassName
  · dynamic: with storageClassName
accessModes

· ReadWriteOnce (RWO): the volume can be mounted as read-write by a single Node

· ReadWriteMany (RWX): the volume can be mounted as read-write by many Nodes

· ReadOnlyMany (ROX): the volume can be mounted as read-only by many Nodes

· ReadWriteOncePod (RWOP): the volume can be mounted as read-write by a single Pod. No other Pod can read it or write to it.
persistentVolumeReclaimPolicy

· Retain: appropriate for precious data.
The PV is not deleted if a user deletes the PVC.
The status of the PV changes to "Released" and all the data can be manually recovered.

· Delete: the default reclaim policy for dynamic provisioning.
The PV is automatically deleted if a user deletes the PVC.
How To?

} Create a PV, with NFS type, linked to an AWS EFS
apiVersion: v1
kind: PersistentVolume
metadata:
name: my-pv
spec:
capacity:
storage: 100Gi
accessModes:
- ReadWriteMany
nfs:
path: /
server: fs-xxx.efs.eu-central-1.amazon.com
readOnly: false

} Change the reclaim policy of my-pv to Retain

$ kubectl patch pv my-pv -p '{"spec":{"persistentVolumeReclaimPolicy":"Retain"}}'
StorageClass

→ Used in order to set up dynamic provisioning

→ Different provisioners exist:
GCEPersistentDisk, AzureDisk, AWSEBS...

→ volumeBindingMode: Immediate means that the volume binding & the dynamic provisioning will occur when the PVC is created

How To?

} Create a StorageClass with SSD disks for Google Compute Engine (GCE)
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
name: faster
provisioner: kubernetes.io/gce-pd
parameters:
type: pd-ssd
volumeBindingMode: Immediate
128
volaim
estent
-> Clair/Request a suitable Persistent Volume

-> Pod uses PVC to


request physical storage
-> After PVC
deployment, ControlPlane searches

suitable with the


a PV same
Storage Class
-> When PV is
found &PVC is attached,
PU status changes to Bound

?
O
?
3 Create a PVC that requests a
storage with

100 Gb of storage & read-write access


many
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
name: my-pvc
spec:
accessModes:
- ReadWriteMany
resources:
requests:
storage: 100Gi
storageClassName: ""
  volumeName: my-pv
Attach to a Pod

→ A volume is accessible to all the containers in a Pod

→ A container must mount the volume to access it

How To?

} Mount a PVC as a volume in a Pod, with read-only access
apiVersion: v1
kind: Pod
metadata:
name: my-pod
spec:
containers:
- name: my-container
image: nginx
volumeMounts:
- mountPath: "/data"
name: nfs-storage
readOnly: true
volumes:
- name: nfs-storage
persistentVolumeClaim:
claimName: my-pvc

Horizontal Pod Autoscaler (HPA)

→ Scales the number of Pods automatically, based on the observed CPU usage

→ Autoscaling can also be based on custom, external & multiple metrics

→ Available for ReplicaSet, Deployment & StatefulSet:
HPA → scales → ReplicaSet, or Deployment, or StatefulSet

⚠ HPA cannot be applied to a DaemonSet

How To?

} Autoscale (create an HPA for) a deployment, maintaining an average CPU usage across all Pods of 80%

$ kubectl autoscale deploy my-deployment --min=3 --max=10 --cpu-percent=80
New features:

} Create an HPA with a behavior (since 1.18):
scale up 90% of the Pods every 15 sec,
scale down 1 Pod every 10 min

apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: my-hpa
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: my-deploy
  minReplicas: 3
  maxReplicas: 10
  metrics:
  - type: Resource
    resource:
      name: cpu
      target:
        type: Utilization
        averageUtilization: 80
  behavior:
    scaleUp:
      policies:
      # scale up 90% of the Pods every 15 sec
      - type: Percent
        value: 90
        periodSeconds: 15
    scaleDown:
      policies:
      # scale down 1 Pod every 10 min
      - type: Pods
        value: 1
        periodSeconds: 600
LimitRange

→ Defines default memory request & limit values for the containers in a namespace

How To?

} Create a LimitRange
apiVersion: v1
kind: LimitRange
metadata:
name: memory-limit-range
spec:
limits:
- default:
memory: 512Mi
defaultRequest:
memory: 256Mi
type: Container

Scenarios:

① A Pod is created with one container, without memory request & limit:

apiVersion: v1
kind: Pod
metadata:
  name: my-pod
spec:
  containers:
  - name: my-container
    image: nginx

LimitRange controller: "I append your configuration with the default values; now I can schedule you on a Node."

    resources:
      requests:
        memory: 256Mi
      limits:
        memory: 512Mi

② A Pod is created with one container, with a memory limit only:

apiVersion: v1
kind: Pod
metadata:
  name: my-pod
spec:
  containers:
  - name: my-container
    image: nginx
    resources:
      limits:
        memory: 512Mi

LimitRange controller: "Ok, I set the memory request equal to the limit."

      requests:
        memory: 512Mi

③ A Pod is created with one container, with a memory request only:

apiVersion: v1
kind: Pod
metadata:
  name: my-pod
spec:
  containers:
  - name: my-container
    image: nginx
    resources:
      requests:
        memory: 100Mi

LimitRange controller: "I set the memory limit equal to the one defined in the LimitRange."

      limits:
        memory: 512Mi
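To check what the LimitRange injected, inspect the resulting Pod spec (a quick check):

$ kubectl describe limitrange memory-limit-range -n my-namespace
$ kubectl get pod my-pod -o jsonpath='{.spec.containers[0].resources}'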
Resource requests & limits

OOM Killer: "I watch over everything. I can kill & destroy every Pod that uses too much resources. EX-TER-MI-NATE!"

→ An OOM (Out Of Memory) Killer runs on each Node

→ In order to avoid this situation, you should define requests and limits for CPU & memory:

resources:
  requests:
    memory: 64Mi
    cpu: 250m
  limits:
    memory: 128Mi
    cpu: 500m
→ The scheduler uses this information to decide on which Node the Pod can be scheduled
What is the difference between request & limit?

· REQUEST: the minimum resources needed for the Pod to be scheduled

· LIMIT: the maximum resource usage allowed

· CPU is a compressible resource.
Once a container reaches the limit, it will keep running (it is throttled).
Defined in millicores: 1 core = 1000m.

· Memory is a non-compressible resource.
Once a container reaches the memory limit, it will be terminated (OOMKilled).
Defined in bytes.
How To?

} Create a Pod with defined requests & limits

apiVersion: v1
kind: Pod
metadata:
  name: my-app
spec:
  containers:
  - name: my-container
    image: my-image:1.0.0
    resources:
      requests:
        memory: 64Mi
        cpu: 250m
      limits:            # limits can't be lower
        memory: 128Mi    # than requests
        cpu: 500m

} Display the Pods' memory & CPU consumption

$ kubectl top pod

} Display the containers' memory & CPU consumption

$ kubectl top pod --containers

} Display the Nodes' memory & CPU consumption

$ kubectl top node

⚠ Requests and limits should be defined for each individual Pod, in order to be scaled correctly by the Horizontal Pod Autoscaler (HPA)

⚠ If a Pod has been killed by the OOM Killer, it will have the status OOMKilled

Pod Disruption Budget (PDB)

"Will my app always keep a minimum availability?" "Unfortunately, no..."

→ A PDB is a way to increase application availability

→ A PDB provides protection against voluntary evictions:
  ~ Node drain
  ~ Rolling upgrade
  ~ Delete a Deployment
  ~ ... but NOT deleting a Pod!

My configuration: maxUnavailable: 1

Eviction API: "I can delete this Pod, but not the other ones."

⚠ A PDB can't protect against involuntary disruptions
How To?

} Create a Pod Disruption Budget (PDB) that allows only one disruption

apiVersion: policy/v1
kind: PodDisruptionBudget
metadata:
  name: ingress-nginx-controller-pdb
spec:
  minAvailable: 1      # how many Pods must always be available
  # maxUnavailable: 1  # (alternative) how many Pods can be evicted;
  #                    # set only one of minAvailable/maxUnavailable
  selector:
    matchLabels:
      app: my-app      # match only Pods with label app: my-app

It's also possible to use a percentage: minAvailable: 50%

} Show the PDBs in my namespace

$ kubectl get pdb -n my-namespace
Quality of Service (QoS)

→ The scheduler uses requests to make decisions about scheduling Pods onto Nodes

→ Depending on the requests and limits defined, a Pod is classified in a certain QoS class

→ 3 QoS classes exist: Guaranteed, Burstable & BestEffort

· Guaranteed

→ The scheduler assigns Guaranteed Pods only to Nodes which have enough resources, compliant with the CPU & memory requests

→ These Pods are guaranteed to not be killed until they exceed their limits
· Burstable

→ Pods are killed when they reach their memory limit, or before Guaranteed Pods when the Node runs out of resources and there aren't any BestEffort Pods left

· BestEffort

→ Containers can use any amount of free memory and CPU on the Node

→ If the system doesn't have enough resources, Pods with BestEffort QoS will be the first killed
Assign a specific QoS class to a Pod:

· Guaranteed
☆ Requests and limits must be equal in all containers (even init containers)
apiVersion: v1
kind: Pod
metadata:
name: my-app
spec:
containers:
- name: my-container
image: nginx
ports:
- containerPort: 80
resources:
requests:
memory: "64Mi"
cpu: "20m"
      limits:
memory: "64Mi"
cpu: "20m"

· Burstable
☆ At least one container in the Pod must have a memory or CPU request defined
apiVersion: v1
kind: Pod
metadata:
name: my-app
spec:
containers:
- name: my-container
image: nginx
ports:
- containerPort: 80
resources:
requests:
memory: "64Mi"
      limits:
memory: "128Mi"

· BestEffort
☆ If requests and limits are not set for all the containers, the Pods will be treated as the lowest priority Pods and are classified as BestEffort.

How To?

} Get the QoS class of my-pod

$ kubectl get pod my-pod -o 'jsonpath={...qosClass}'
Network Policies

"By default, everyone can talk to everyone. That's the basic rule in Kubernetes!"

→ Network Policies allow to control the communication between Pods in a cluster

→ Scoped by Namespaces

→ Stable since 1.7 (June 2017!)

→ Limited to IP addresses (no domain names)

→ Pre-requisite: use a CNI plugin which supports Network Policies

→ Network Policies use labels to select Pods, and define rules which specify what traffic is allowed
How To?

① Allow Pods with label role: allow-role to communicate with our Pods (label role: my-role)

apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: test-network-policy
  namespace: my-namespace
spec:
  podSelector:        # if empty, podSelector selects
    matchLabels:      # all Pods in the namespace
      role: my-role
  policyTypes:
  - Ingress
  - Egress
  ingress:
  - from:             # allow connections from all Pods
    - podSelector:    # with label role: allow-role
        matchLabels:  # to communicate with our Pods
          role: allow-role
    ports:
    - protocol: TCP
      port: 6379

② Allow our Pods to communicate with Pods with label role: allow-to-role

  egress:
  - to:               # allow connections for our Pods
    - podSelector:    # to communicate with Pods
        matchLabels:  # with label role: allow-to-role
          role: allow-to-role
    ports:
    - protocol: TCP
      port: 5978
③ Deny all ingress traffic to our Pods

apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-deny-ingress
  namespace: my-namespace
spec:
  podSelector: {}   # all Pods in my-namespace
  policyTypes:
  - Ingress

④ Allow all egress traffic from our Pods

apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-all-egress
  namespace: my-namespace
spec:
  podSelector: {}
  policyTypes:
  - Egress
  egress:
  - {}              # allow all

⚠ In ingress & egress rules, several selectors exist:
podSelector, namespaceSelector & ipBlock
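A sketch of an ingress rule mixing the three selector types (values are illustrative):

  ingress:
  - from:
    - ipBlock:
        cidr: 10.0.0.0/16
        except:
        - 10.0.1.0/24
    - namespaceSelector:
        matchLabels:
          env: production
    - podSelector:
        matchLabels:
          role: my-role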

RBAC: Role-Based Access Control

→ Method of regulating access to resources, based on the roles of individual users

→ Useful when you want to control what your users can do, on which kind of resources, in your cluster

→ Introduces 4 Kubernetes objects:
Role, RoleBinding, ClusterRole & ClusterRoleBinding

→ Operations (verbs) allowed on resources:
create, list, get, update, delete, watch...
Role

→ Verbs allowed on specific resources

→ Sets permissions within a given namespace

ClusterRole

→ Grants permissions too, but cluster-wide:
  · cluster-scoped resources (Nodes...)
  · namespaced resources (like Pods) across all namespaces
  · non-resource endpoints (/healthz...)
RoleBinding

→ Links a subject to a Role: grants the permissions defined in a Role to a user, or a set of users, within a specific namespace

→ A RoleBinding can reference any Role in the same namespace

→ The subject can be a user, a group or a ServiceAccount

→ A RoleBinding should be defined per namespace

Example: give "view" rights to my-user in my-namespace:
my-user → RoleBinding → Role "view" (in my-namespace)
ClusterRoleBinding

→ Same as a RoleBinding, but for all namespaces (cluster-wide)

Example: allow any user in the group my-group to read secrets in any namespace:
my-group → ClusterRoleBinding → ClusterRole "read-secret"

⚠ Several ClusterRoles can also be aggregated into one
How To?

my-user → RoleBinding "read-pods" → Role "pod-reader"

① Create a Role that grants read access to Pods in the my-namespace namespace
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
namespace: my-namespace
name: pod-reader
rules:
- apiGroups: [""]
resources: ["pods"]
verbs: ["get", "watch", "list"]
② Create a RoleBinding that grants the pod-reader Role to a user
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
name: read-pods
namespace: my-namespace
subjects:
- kind: User
name: my-user
apiGroup: rbac.authorization.k8s.io
roleRef:
kind: Role
  name: pod-reader
  apiGroup: rbac.authorization.k8s.io
my-user → RoleBinding → ClusterRole "secret-reader"

① Create a ClusterRole that grants read access to Secrets in all the namespaces of the cluster
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: secret-reader   # no namespace in metadata
rules:
- apiGroups: [""]
  resources: ["secrets"]
  verbs: ["get", "watch", "list"]

② Create a ClusterRoleBinding that grants the secret-reader ClusterRole to a group
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
name: read-secrets-global
subjects:
- kind: Group
name: my-group
apiGroup: rbac.authorization.k8s.io
roleRef:
kind: ClusterRole
name: secret-reader
apiGroup: rbac.authorization.k8s.io

169
Bonus: bind a ClusterRole to a ServiceAccount

ServiceAccount my-sa-test (in test-ns) → RoleBinding → ClusterRole "secret-reader"

① Create a namespace "test-ns"

$ kubectl create ns test-ns

② Create a ServiceAccount in this namespace

$ kubectl create serviceaccount my-sa-test -n test-ns

③ Create a RoleBinding that grants secret-reader to my-sa-test

$ kubectl create rolebinding my-sa-test-rolebinding -n test-ns
  --clusterrole=secret-reader --serviceaccount=test-ns:my-sa-test
④ Create a kubeconfig file for the created ServiceAccount
$ export TEAM_SA=my-sa-test
$ export NAMESPACE_SA=test-ns

$ export SECRET_NAME_SA=`kubectl get sa ${TEAM_SA} -n $NAMESPACE_SA -ojsonpath="{ .secrets[0].name }"`
$ export TOKEN_SA=`kubectl get secret $SECRET_NAME_SA -n $NAMESPACE_SA -ojsonpath='{.data.token}' | base64 -d`

$ kubectl config view --raw --minify > kubeconfig.txt
$ kubectl config unset users --kubeconfig=kubeconfig.txt
$ kubectl config set-credentials ${SECRET_NAME_SA} --kubeconfig=kubeconfig.txt --token=${TOKEN_SA}
$ kubectl config set-context --current --kubeconfig=kubeconfig.txt --user=${SECRET_NAME_SA}


⑤ Execute kubectl commands in the cluster as the ServiceAccount

$ kubectl --kubeconfig=kubeconfig.txt get secrets -n test-ns
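To verify RBAC permissions without building a kubeconfig, kubectl auth can-i is handy (a quick check, reusing the ServiceAccount above):

$ kubectl auth can-i list secrets -n test-ns --as=system:serviceaccount:test-ns:my-sa-test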
Pod Security Policy (PSP)

→ A set of conditions/rules a Pod must follow in order to be accepted in the cluster

→ A policy can contain rules on:
  · types of volumes
  · privileged Pods
  · read-only filesystem
  · volume mount GID
  · kernel authorized capabilities (syscalls...)
  · privilege elevation (= sudo)
  · hostPort
  · UID/GID used by the Pod
  · Seccomp/AppArmor profiles

→ By default, a Kubernetes cluster accepts all Pods

→ So, first you need to create policies, then you need to enable the PodSecurityPolicy admission controller

→ ⚠ When PSPs are activated, every Pod that wants to run on the cluster needs to use at least one policy

Best practice:

→ A Pod is linked to a PSP thanks to its ServiceAccount:

ServiceAccount → RoleBinding (grants permissions in a namespace) or ClusterRoleBinding (cluster-wide) → ClusterRole → PSP
→ ⚠ Only one PSP can be applied to a Pod

→ The PSP controller accepts or refuses Pods in the cluster

⚠ Mutable & non-mutable policies:
  · a non-mutable policy doesn't change the Pod
  · a mutable policy can change the Pod before it runs on the Nodes

Policy order:

① If several policies exist, non-mutable policies are evaluated first

② If several mutable policies match, the chosen one is ordered by name (alphabetically)
How To?

① Define a policy named "my-psp" that prevents the creation of privileged Pods

apiVersion: policy/v1beta1
kind: PodSecurityPolicy
metadata:
  name: my-psp
spec:
  privileged: false    # don't allow the creation of privileged Pods
  seLinux:
    rule: RunAsAny
  supplementalGroups:
    rule: RunAsAny     # other control aspects
  runAsUser:
    rule: RunAsAny
  fsGroup:
    rule: RunAsAny
  volumes:
  - '*'                # allow access to all available volumes

② Create a ClusterRole that grants "use" access to the my-psp PSP

kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: my-cluster-role
rules:
- apiGroups:
  - policy
  resources:
  - podsecuritypolicies
  verbs:
  - use
  resourceNames:
  - my-psp
③ Create a RoleBinding linked to a desired ServiceAccount (SA)

# Bind the ClusterRole to the desired set of service accounts.
# Policies should typically be bound to service accounts in a namespace.
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: my-rolebinding
  namespace: my-namespace
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: my-cluster-role    # bind my-cluster-role
subjects:
- kind: ServiceAccount     # to the "default" SA
  name: default            # in my-namespace
  namespace: my-namespace

OR
} Create a RoleBinding linked to all the ServiceAccounts in my-namespace
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
name: my-rolebinding
namespace: my-namespace
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: ClusterRole
name: my-cluster-role
subjects:
- apiGroup: rbac.authorization.k8s.io
kind: Group
  name: system:serviceaccounts
④ Enable the PodSecurityPolicy admission controller in an existing GKE cluster

$ gcloud beta container clusters update my-cluster --enable-pod-security-policy
Pod (Anti)Affinity & Node (Anti)Affinity

→ By default, when you deploy a Pod, Kubernetes chooses the right Node according to the Pod's needs

→ Several ways exist to handle Node and Pod affinities:
  · nodeSelector
  · Node Affinity
  · Pod Affinity
  · Pod Anti-Affinity
· nodeSelector

→ By default, Nodes have labels, but you can add your own labels to force a Pod to run on a chosen Node
· Node Affinity

→ Similar to nodeSelector, but with a more expressive syntax

→ Rule: only run the Pod on Nodes that match the criteria

requiredDuringSchedulingIgnoredDuringExecution
"Hard" rule: try to find the wanted Node(s); if not possible, the Pod won't be scheduled

preferredDuringSchedulingIgnoredDuringExecution
"Soft" rule: if not possible, the Pod will be scheduled on another Node

→ Operators: In, Exists, Gt, Lt, NotIn, DoesNotExist
↳ NotIn & DoesNotExist = "Node Anti-Affinity"

⚠ If the label on a Node changes at runtime, nothing changes: the Pod still runs on the Node ("IgnoredDuringExecution")
· Pod Affinity

→ Allows to run a Pod on the same Node as another Pod

Pod A & Pod B: "We need to be together!"
· Pod Anti-Affinity

→ Allows to run the several replicas of a deployment on different Nodes

→ Distribute your Pods on Nodes, so that if one Node dies, the app is still up
How To?

} List all Nodes with their labels

$ kubectl get node --show-labels

} Deploy a Pod on a Node that has a specific label

apiVersion: v1
kind: Pod
metadata:
  name: my-pod
spec:
  containers:
  - name: nginx
    image: nginx
  nodeSelector:      # schedule this Pod on a Node
    node: my-node    # with the label node: my-node
} Create a Pod with Node Affinity, that will run on a Node that has a specific label... if possible
apiVersion: v1
kind: Pod
metadata:
name: my-pod-with-node-affinity
spec:
affinity:
nodeAffinity:
preferredDuringSchedulingIgnoredDuringExecution:
- weight: 1
preference:
matchExpressions:
- key: app
operator: In
values:
- my-app
containers:
- name: my-container
image: busybox


} Create a Pod that will run next to Pods with the label friend=true

apiVersion: v1
kind: Pod
metadata:
name: my-pod-with-pod-affinity
spec:
affinity:
podAffinity:
requiredDuringSchedulingIgnoredDuringExecution:
- labelSelector:
matchExpressions:
- key: friend
operator: In
values:
            - "true"
topologyKey: topology.kubernetes.io/zone
containers:
- name: my-container
image: busybox

} Create a Deployment with 3 replicas. Each Pod must not run on the same Node.
apiVersion: apps/v1
kind: Deployment
metadata:
name: my-db
spec:
replicas: 3
selector:
matchLabels:
app: my-app
template:
metadata:
labels:
app: my-app
spec:
affinity:
podAntiAffinity:
requiredDuringSchedulingIgnoredDuringExecution:
- labelSelector:
matchExpressions:
- key: app
operator: In
values:
- my-app
topologyKey: kubernetes.io/hostname
containers:
- image: my-image:1.0
name: my-container

(Sketch: the three my-app Pods each land on a different Node.)
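To check the spread, list the Pods with the Nodes they landed on:

$ kubectl get pod -l app=my-app -o wide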
Node

→ Pods run on a Node

→ A Node is a physical or a virtual machine (VM)

→ Each Node runs a Kubelet, a container runtime & a kube-proxy

→ The container runtime:
  ① pulls the container image
  ② unpacks the container
  ③ runs the app
Node lifecycle

→ The Node Controller checks the state of each Node:
  · Healthy & ready to accept Pods?
  · Memory pressure?
  · Disk pressure?

⚠ A Node can also have been "cordoned": scheduling is disabled on it

How To?

} Show more information about a Node (conditions, capacity...)

$ kubectl describe node my-node
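The conditions are also readable directly, which is convenient in scripts (a quick check):

$ kubectl get node my-node -o jsonpath='{.status.conditions[*].type}'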
Node operations

Why?
→ Debug issues on a Node
→ Upgrade a Node
→ Scale down the cluster
→ Reboot the Node
...

Three operations exist: drain, cordon & taint

Drain

→ Evict all the Pods from a Node & mark the Node as unschedulable

$ kubectl drain my-node

→ Evict all the Pods, including the DaemonSets

$ kubectl drain my-node --ignore-daemonsets
→ Evict all Pods, including the ones not
managed by a controller

$ kubectl drain my-node --force

Cordon

→ Make the Node unschedulable

$ kubectl cordon my-node

→ Make the Node schedulable again, undo the cordon/drain

$ kubectl uncordon my-node
Taint

→ Node taints are key=value pairs associated
with an effect:

NoSchedule:
Pods that don't tolerate the taint can't be scheduled on the Node

PreferNoSchedule:
Kubernetes avoids scheduling Pods that don't tolerate
the taint

NoExecute:
Pods are evicted from the Node if they are already
running, else they can't be scheduled

How to?

) Display taint information for all Nodes

$ kubectl get node
-o jsonpath='{range .items[*]}{"\n"}{.metadata.name}:{.spec.taints}'

) Pause a Node, don't accept new workloads on it

$ kubectl taint nodes my-node my-key=my-value:NoSchedule

) Unpause a Node

$ kubectl taint nodes my-node my-key:NoSchedule-
) Create a Pod that can be scheduled on a Node that has
the taint specialkey=specialvalue:NoSchedule

...
spec:
  tolerations:
  - key: "specialkey"
    operator: "Equal"
    value: "specialvalue"
    effect: "NoSchedule"

) Create a Pod that stays bound 6000 seconds before
being evicted

...
spec:
  tolerations:
  - key: "my-key"
    operator: "Equal"
    value: "my-value"
    effect: "NoExecute"
    tolerationSeconds: 6000  # delay Pod eviction

→ Kubernetes automatically adds taints to a Node. When?

* Node is not ready
* Node is unreachable
* Out of disk
* Network unavailable
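These automatic taints use well-known keys such as node.kubernetes.io/not-ready and node.kubernetes.io/unreachable. A hedged sketch of a toleration that keeps a Pod bound for 5 minutes on an unreachable Node (the 300 seconds value is illustrative):

spec:
  tolerations:
  - key: "node.kubernetes.io/unreachable"
    operator: "Exists"
    effect: "NoExecute"
    tolerationSeconds: 300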
Debugging/Troubleshooting

When errors occur (Pods stuck in NotReady, Pending,
CrashLoopBackOff status...), how to debug Kubernetes?
Here are several common situations and how to handle them.
① Get information about your Pods

→ Display all Pods in a namespace "my-ns"

$ kubectl get pod -n my-ns

→ Display information about a Pod

$ kubectl describe pod my-pod -n my-ns

These two easy commands can be helpful in
many situations, for example when a Pod is stuck
in CrashLoopBackOff status.
② Pod stuck in Pending status

→ The Pod can't be scheduled on a Node.

Solutions:

* Resource requests/limits are (maybe) higher than the cluster capacities
* Delete Pods / clean unused resources in the cluster
* Add capacity to the Nodes
* Add more Nodes
...
③ Display cluster error messages

→ Display all events for namespace "my-ns"

$ kubectl get events
--sort-by=.metadata.creationTimestamp -n my-ns

→ Display only Warning messages

$ kubectl get events --field-selector type=Warning
④ Pod in Waiting/ImagePullBackOff status

→ Kubernetes can't pull the container image
(registry.com/project/my-app:1.x).

Questions:

* Image name, tag & URL are good?
* Image exists in the Docker registry? Can you pull it?
* Kubernetes has permissions to pull the image?
* An imagePullSecret is configured for a secured registry?
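A hedged sketch of creating a registry credential and referencing it from a Pod (the server, user and password values are placeholders):

$ kubectl create secret docker-registry my-registry-secret
--docker-server=registry.com
--docker-username=my-user
--docker-password=my-password

spec:
  imagePullSecrets:
  - name: my-registry-secret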
⑤ Kubectl debug (1.18)

→ Run an Ephemeral Container near the Pod
we want to debug

→ In this container you can execute curl, wget...
commands inside the Kubernetes environment

→ Pre-requisites: active feature gate
EphemeralContainers=true

$ kubectl alpha debug -it my-pod --image=busybox
--target=my-pod --container=my-debug-container

↳ --target: share the process namespace
with a container inside the Pod,
so the debug container can see all
the processes created by my-pod
⑥ Pods have been restarted multiple times

→ If Pods use too much memory,
the OOM killer can destroy them
(the Pod is "OOMKilled").

Solution:

* Define memory requests and limits
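A minimal sketch of memory requests & limits (the values are illustrative):

spec:
  containers:
  - name: my-container
    image: my-image:1.0
    resources:
      requests:
        memory: "128Mi"
      limits:
        memory: "256Mi"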
⑦ Pod can't start

→ "Error creating..." events mention a missing
ConfigMap or Secret.

→ When you link a Pod to a ConfigMap and/or a Secret,
pay attention to create the linked resources first,
and to the name you used for the linked resources.

⑧ Pod in Running status but I don't know why it's not working

→ You can simply watch the Pod logs in order to
try to understand:

$ kubectl logs my-pod
⑨ Pod is restarting over and over

→ The livenessProbe is used to tell if the container
is healthy, but if it's misconfigured the container will be
considered as unhealthy.

Possible issues:

* Is the probe target accessible?
* Application takes a long time to start
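A hedged sketch that gives a slow application more time before the first liveness check (the numbers are placeholders; a startupProbe, covered earlier in this book, is another option):

livenessProbe:
  httpGet:
    path: /healthz
    port: 8080
  initialDelaySeconds: 60
  periodSeconds: 10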

⑩ I want to access my Pod without an external load balancer

→ You can mount a tunnel between a Pod
and your computer:

$ kubectl port-forward my-pod 8080

→ And mount a tunnel to a Service directly:

$ kubectl port-forward svc/my-svc 5050:5555

↳ if the local and remote ports are the same, just specify one

→ Then you can simply curl localhost on
the local port:

$ curl localhost:8080/my-api
Kubectl convert (1.21)

→ Kubernetes has 3 releases every year

→ Several APIs are deprecated
& then removed over the releases

How to?

) Convert my-ingress from an old API to v1

$ kubectl convert -f my-ingress.yaml
--output-version networking.k8s.io/v1 > my-ingress-updated.yaml

) Convert my-pod to the latest version

$ kubectl convert -f my-pod.yaml
Tools

kubectx

Manage & switch
between kubectl contexts

https://github.com/ahmetb/kubectx

) List all the existing clusters in your KUBECONFIG

$ kubectx

) Switch/connect to a cluster

$ kubectx my-cluster

) Switch to the previous cluster

$ kubectx -

Equivalent to: $ kubectl config use-context my-cluster
kubens

Manage & switch
between namespaces

https://github.com/ahmetb/kubectx

) List all the namespaces in the cluster

$ kubens

) Switch to a namespace

$ kubens my-namespace

Equivalent to: $ kubectl config set-context --current
--namespace=my-namespace
stern

Kubectl logs on steroids

https://github.com/wercker/stern

) Display logs of all Pods whose name starts with "my-pod-start"

$ stern my-pod-start

) Show a Pod's container logs since 10 seconds

$ stern my-pod -c my-container -s 10s

) Show a Pod's container logs with timestamps

$ stern my-pod-start -c my-container -t

Equivalent to: $ kubectl logs -f pod-1
$ kubectl logs -f pod-2
...
krew

Package manager for kubectl plugins

https://github.com/kubernetes-sigs/krew

) List available plugins (in the krew public index)

$ kubectl krew search

) Install the kubectx and kubens tools

$ kubectl krew install ctx
$ kubectl krew install ns

↳ after installation, these tools are available as:

$ kubectl ctx
$ kubectl ns

) Display installed plugins

$ kubectl krew list

) Add 'scraly' private index

$ kubectl krew index add scraly
https://github.com/scraly/krew-index

) List all installed indexes

$ kubectl krew index list

) List all available plugins in 'scraly' index

$ kubectl krew search scraly

) Install plugin "season"
(available in 'scraly' private index)

$ kubectl krew install scraly/season

) Upgrade "season" plugin

$ kubectl krew upgrade season
"
Et
le 9s

ˌȥƥËᵉËÆ*ËÊÆ
LE
https://github.com/derailed/k9s

→ UI in CLI for Kubernetes

→ Kgs continu ally Watches for changes

Launch fr95
$ k9s

Run h 9s in a
namespace
$ k9s -n my-namespace

216
k3s

Lightweight Kubernetes clusters

https://k3s.io

) Install & launch a k3s Kubernetes cluster

$ curl -sfL https://get.k3s.io | sh -
skaffold

Build, push & deploy
in one CLI

https://skaffold.dev

) Init your project (and create skaffold.yaml)

$ skaffold init

) Build, tag, deploy & stream the logs every time
the code of your app changes

$ skaffold dev

) Just build & tag an image

$ skaffold build

) Deploy an image

$ skaffold deploy

) Build, tag an image and output templated manifests

$ skaffold render
kustomize

https://github.com/kubernetes-sigs/kustomize

→ Built-in kubectl since v1.14

→ Declarative like Kubernetes (≠ imperative like Helm)

→ Kustomize works like a cake, with layers:

Base
↳ mix-in replicas
↳ mix-in env
↳ mix-in secrets

→ The aim is to add layers of modifications on top of a base,
in order to add the functionalities we want.
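One common layout for the base and the overlays (these paths are illustrative; they match the apply command shown at the end of this example):

src/main/k8s/
├── base/
│   ├── deployment.yaml
│   └── kustomization.yaml
└── overlay/
    └── prod/
        ├── custom-env.yaml
        └── kustomization.yaml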
→ Works like Docker or Git: each layer represents
an intermediate system state

→ Each YAML file is valid/usable outside of Kustomize

How to?

① Define a deployment.yaml file
apiVersion: apps/v1
kind: Deployment
metadata:
  name: kustomize-app
  labels:
    app: kustomize
spec:
  replicas: 1
  selector:
    matchLabels:
      app: kustomize
  template:
    metadata:
      labels:
        app: kustomize
    spec:
      containers:
      - name: app
        image: gcr.io/foo/kustomize:latest
        ports:
        - containerPort: 8080
          name: http
          protocol: TCP
② Create a file called custom-env.yaml

apiVersion: apps/v1
kind: Deployment
metadata:
  name: kustomize-app
  labels:
    app: kustomize
spec:
  replicas: 1
  selector:
    matchLabels:
      app: kustomize
  template:
    metadata:
      labels:
        app: kustomize
    spec:
      containers:
      - name: app
        env:                           # what we want
        - name: MESSAGE_BODY           # to add to
          value: by Kustomize          # our base
        - name: MESSAGE_FROM
          value: overlay 'custom-env'

③ Create a kustomization.yaml file

apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization

resources:
- ../../base        # base = our deployment

patchesStrategicMerge:
- custom-env.yaml   # patches to apply

④ Apply

$ kubectl apply -k /src/main/k8s/overlay/prod
kustomize tips

① Set/change an image tag

$ kustomize edit set image
my-image=my-repo/project/my-image:tag

② Create a secret

$ kustomize edit add secret my-secret
--type kubernetes.io/dockerconfigjson
--from-literal=password="toto"

↳ these commands magically change the kustomization.yaml file
Generated kustomization.yaml:

apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization

bases:
- ../../base

patchesStrategicMerge:
- custom-env.yaml
- replica.yaml

secretGenerator:
- literals:
  - password=toto
  name: my-secret
  type: kubernetes.io/dockerconfigjson

③ Add a prefix and a suffix in resource names

apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization

bases:
- ../../base

namePrefix: my-
nameSuffix: -v1
④ Create a kustomization.yaml which creates a ConfigMap

From a file:

configMapGenerator:
- name: my-configmap
  files:
  - application.properties

From a literal:

configMapGenerator:
- name: my-configmap2
  literals:
  - key=value

⑤ Merge several files in one ConfigMap

configMapGenerator:
- name: my-configmap
  behavior: merge
  files:
  - application.properties
  - secret.properties

⑥ Disable the automatic hash suffix added
to generated ConfigMap and Secret resource names

generatorOptions:
  disableNameSuffixHash: true

↳ generatorOptions change the behavior of the generators
kubeseal

Encrypt your Secrets
into SealedSecrets
& store them in Git:

https://github.com/bitnami-labs/sealed-secrets

) Create a Secret

$ kubectl create secret generic my-token
--from-literal=my_token='12345' --dry-run=client -o yaml
-n my-namespace > my-token.yaml

) Seal the Secret

$ kubeseal --cert tls.crt --format=yaml < my-token.yaml >
mysealedsecret.yaml
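The SealedSecret can then safely be committed and applied (a sketch; the namespace must match the one used at sealing time):

$ kubectl apply -f mysealedsecret.yaml -n my-namespace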
→ When a new SealedSecret (my-sealedsecret) appears in the cluster,
the sealed-secrets controller unseals it
and creates the corresponding Secret (my-secret).
trivy

https://github.com/aquasecurity/trivy

→ Vulnerabilities detection:
* of OS packages
* of application dependencies
(npm, yarn, cargo, pipenv, composer, bundler...)

→ Misconfiguration detection
(Kubernetes, Docker, Terraform...)

→ Secret detection

→ Simple and fast scanner

→ Easy integration in CI

→ A Kubernetes operator also exists
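For instance, misconfiguration scanning can be pointed at a directory of manifests (the path is a placeholder):

$ trivy config ./my-manifests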
) Scan an image

$ trivy image python:3.4-alpine

) Scan and save a JSON report

$ trivy image golang:1.13-alpine -f json -o results.json

) Scan all resources in the default namespace
of a Kubernetes cluster
& display a summary report

$ trivy k8s -n default --report summary

) Scan & display only MEDIUM, HIGH and CRITICAL vulnerabilities

$ trivy k8s -n default --report all
--severity MEDIUM,HIGH,CRITICAL
velero

Backup & restore Kubernetes
resources and persistent volumes

https://velero.io

) Create a full backup

$ velero backup create my-backup

) Create a backup only for resources
matching a label

$ velero backup create my-light-backup
--selector app=my-app

) Restore "Istio's Gateways & VirtualServices" from a backup

$ velero restore create --from-backup my-backup
--include-resources
gateways.networking.istio.io,virtualservices.networking.istio.io

) Schedule a daily backup

$ velero schedule create daily-backup
--schedule="0 10 * * *"
Tips:

→ Create a backup before a cluster upgrade

→ And/or it's a good practice to backup daily/weekly
your resources in sensitive clusters

→ Enable the revisions feature in the backup location bucket
popeye

https://github.com/derailed/popeye

→ Scans a cluster and outputs a report (with a score)

→ Customizable scans & reports

→ Several ways to install it
(locally, Docker, in the cluster...)

) Scan a cluster, only in one namespace

$ popeye --context my-cluster -n my-ns

) Scan only a list of specified resources

$ popeye --context my-cluster -n my-ns -s po,deploy,svc
) Scan with a config file

$ popeye -f spinach.yaml

) Scan & save the report (in HTML) in a S3 bucket

$ popeye --s3-bucket my-bucket/folder --out html
kyverno

https://kyverno.io

→ Kubernetes native policy management

→ 3 kinds of rules:
* validate
* generate
* mutate

→ Kyverno installs several webhooks:

NAME                                                                                                   WEBHOOKS   AGE
validatingwebhookconfiguration.admissionregistration.k8s.io/kyverno-policy-validating-webhook-cfg     1          52s
validatingwebhookconfiguration.admissionregistration.k8s.io/kyverno-resource-validating-webhook-cfg   2          52s

NAME                                                                                                   WEBHOOKS   AGE
mutatingwebhookconfiguration.admissionregistration.k8s.io/kyverno-policy-mutating-webhook-cfg         1          52s
mutatingwebhookconfiguration.admissionregistration.k8s.io/kyverno-resource-mutating-webhook-cfg       2          52s
mutatingwebhookconfiguration.admissionregistration.k8s.io/kyverno-verify-mutating-webhook-cfg         1          52s
How to?

) Create a policy that disallows deploying Pods in
the default namespace

apiVersion: kyverno.io/v1
kind: ClusterPolicy
metadata:
  name: disallow-default-namespace
spec:
  validationFailureAction: enforce
  rules:
  - name: validate-namespace
    match:
      resources:
        kinds:
        - Pod
    validate:
      message: "Using \"default\" namespace is not allowed."
      pattern:
        metadata:
          namespace: "!default"
  - name: require-namespace
    match:
      resources:
        kinds:
        - Pod
    validate:
      message: "A namespace is required."
      pattern:
        metadata:
          namespace: "?*"
) Create a policy that creates a ConfigMap in all
namespaces except kube-system, kube-public & kyverno

apiVersion: kyverno.io/v1
kind: ClusterPolicy
metadata:
  name: zk-kafka-address
spec:
  rules:
  - name: zk-kafka-address
    match:
      resources:
        kinds:
        - Namespace
    exclude:
      resources:
        namespaces:
        - kube-system
        - kube-public
        - kyverno
    generate:
      synchronize: true
      kind: ConfigMap
      name: zk-kafka-address
      # generate the resource in the new namespace
      namespace: "{{request.object.metadata.name}}"
      data:
        kind: ConfigMap
        metadata:
          labels:
            somekey: somevalue
        data:
          ZK_ADDRESS: "192.168.10.10:2181,192.168.10.11:2181"
          KAFKA_ADDRESS: "192.168.10.13:9092,192.168.10.14:9092"

↳ synchronize: true keeps the generated resources up to date
) Create a policy that adds the label app=my-awesome-app to Pods,
Services, ConfigMaps & Secrets in a given namespace

apiVersion: kyverno.io/v1
kind: ClusterPolicy
metadata:
  name: add-label
spec:
  rules:
  - name: add-label
    match:
      resources:
        kinds:
        - Pod
        - Service
        - ConfigMap
        - Secret
        namespaces:
        - team-a
    mutate:
      patchStrategicMerge:
        metadata:
          labels:
            app: my-awesome-app

) Deploy the policy in the cluster

$ kubectl apply -f policy-add-label.yaml
) Display existing policies for all namespaces

$ kubectl get cpol -A
NAME                         BACKGROUND   ACTION    READY
add-label                    true         audit
disallow-default-namespace   true         enforce   true

) Display reports for all namespaces

$ kubectl get policyreport -A

) View all policy violations in the cluster

$ kubectl describe polr -A | grep -i "Result: \+fail" -B10

) Check if policies are valid

$ kyverno validate *.yaml

↳ a graphical policy reporter also exists:
https://github.com/kyverno/policy-reporter
Kubernetes 1.18

GENERAL

o First release of 2020
o Own logo

INGRESS CLASS

→ New resource IngressClass replaces the
ingress class annotation.

↳ Defines the Ingress Controller
name & configuration parameters

→ Ingress now also supports wildcard hosts,
e.g. host: *.scraly.com
KUBECTL DEBUG (Alpha)

→ Aim: run an Ephemeral Container near the
Pod we want to debug

→ Pre-requisites: active feature gate
EphemeralContainers=true

How to:

$ kubectl alpha debug -it my-pod --image=busybox
--target=my-pod --container=my-debug-container

↳ --target: share the process namespace
with a container inside the Pod,
so the debug container can see all
the processes created by my-pod
HORIZONTAL POD AUTOSCALER

→ Possibility to define scale-up & scale-down
behaviors for an HPA:

apiVersion: autoscaling/v2beta2
kind: HorizontalPodAutoscaler
metadata:
  name: my-hpa
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: my-deploy
  minReplicas: 3
  maxReplicas: 10
  metrics:
  - type: Resource
    resource:
      name: cpu
      target:
        type: Utilization
        averageUtilization: 80
  behavior:
    scaleUp:
      policies:
      - type: Percent
        value: 90
        periodSeconds: 15
    scaleDown:
      policies:
      # scale down 1 Pod every 10 min
      - type: Pods
        value: 1
        periodSeconds: 600
IMMUTABLE SECRET & CONFIGMAP (Alpha)

→ Pre-requisites: active feature gate
ImmutableEphemeralVolumes=true

→ Allows to not edit sensitive data by mistake.

How? immutable: true

KUBECTL RUN

→ The kubectl run command now creates only a Pod

↳ one command = one usage
IMMUTABLE SECRET & CONFIGMAP

→ Protects against accidental updates, which can cause
downtime, & optimizes performance
(because the Control Plane doesn't have to
watch for updates):

apiVersion: v1
kind: ConfigMap
metadata:
  name: my-cm
immutable: true
data:
  my-key: my-value
Kubernetes 1.19

WARNING MECHANISM

→ Kubernetes returns warning messages when you
use a deprecated API:

$ kubectl get ingress -n my-namespace

Warning: extensions/v1beta1 Ingress is deprecated in
v1.14+, unavailable in v1.22+; use
networking.k8s.io/v1 Ingress
No resources found in my-namespace namespace.
SECCOMP (GA)

→ SecComp (Secure Computing Mode) is a security feature
of the Linux kernel for limiting the system calls that
apps can make:

o Provides sandboxing
o Reduces the actions that a process can perform

→ Keep your Pods secure:

apiVersion: v1
kind: Pod
metadata:
  name: my-audit-pod
  labels:
    app: my-audit-pod
spec:
  securityContext:
    seccompProfile:
      type: Localhost
      localhostProfile: profiles/audit.json
  containers:
  - name: test-container
    image: hashicorp/http-echo:0.2.3
    args:
    - "-text=just made some syscalls!"
    securityContext:
      allowPrivilegeEscalation: false
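If you don't ship your own profile, a common hedged alternative is to use the container runtime's default profile:

securityContext:
  seccompProfile:
    type: RuntimeDefault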
Kubernetes 1.20

GENERAL

o 42 enhancements

DOCKER DEPRECATION

→ Docker support in
Kubelet is now deprecated.

→ But don't panic, your Docker-produced images still work!
(see the "Docker deprecation" chapter at the end of this book)
EXEC PROBE TIMEOUT HANDLING

→ Fixed a bug in Liveness/Readiness exec probe
timeouts.

→ Now the field timeoutSeconds is respected:

apiVersion: v1
kind: Pod
metadata:
  name: my-pod
spec:
  containers:
  - name: my-container
    image: my-image:1.0
    livenessProbe:
      exec:
        command:
        - cat
        - /tmp/healthy
      initialDelaySeconds: 5
      periodSeconds: 5
      timeoutSeconds: 5
STARTUP PROBE (GA)

→ Holds off all the other probes until the Pod
finishes its startup.

→ A solution for slow-starting Pods:

apiVersion: v1
kind: Pod
metadata:
  name: my-pod
  labels:
    app: my-app
spec:
  containers:
  - name: my-container
    image: my-image:1.0
    ports:
    - name: liveness-port
      containerPort: 8080
    livenessProbe:
      httpGet:
        path: /healthz
        port: liveness-port
      failureThreshold: 1
      periodSeconds: 10
    # the app has up to 30 x 10 = 300 seconds to start
    startupProbe:
      httpGet:
        path: /healthz
        port: liveness-port
      failureThreshold: 30
      periodSeconds: 10
Kubernetes 1.21

GENERAL

o 51 enhancements

PODSECURITYPOLICY DEPRECATION

→ PodSecurityPolicy (PSP) is deprecated, but stays
available and fully-functional until 1.25.

https://kubernetes.io/blog/2021/04/06/podsecuritypolicy-deprecation-past-present-and-future/
CRONJOBS (GA)

→ Introduced in 1.4, CronJobs are finally
Generally Available (GA),
thanks to the new CronJob controller.

KUBECTL

$ kubectl get <resource-type> <your-resource>
-o yaml

→ will no longer display managed fields,
which makes reading and parsing the output
much easier.
DEFAULT CONTAINER

→ The annotation kubectl.kubernetes.io/default-container
defines the container targeted by default by kubectl commands:

apiVersion: v1
kind: Pod
metadata:
  name: my-pod
  annotations:
    kubectl.kubernetes.io/default-container: my-container-2
spec:
  containers:
  - name: my-container
    image: my-image
  - name: my-container-2
    image: my-image-2
    command: ["/bin/sh", "-c"]
    args:
    - while true; do
        date >> /html/index.html;
        sleep 1;
      done

$ kubectl exec my-pod -it -- sh

↳ opens a shell in my-container-2
Kubernetes 1.22

GENERAL

o The release cadence has changed:
3 releases per year instead of 4
o 53 enhancements
o 3 features deprecated
o Several APIs have been removed

MIGRATION PLUGIN

→ kubectl convert helps you to migrate older
deprecated/removed resource definitions:

$ kubectl convert -f role.yaml --output-version
rbac.authorization.k8s.io/v1 > role-updated.yaml
WARNING MECHANISM

→ Kubernetes returns warning messages when you
use a deprecated API:

$ kubectl get ingress -n my-namespace

Warning: extensions/v1beta1 Ingress is deprecated in
v1.14+, unavailable in v1.22+; use
networking.k8s.io/v1 Ingress
No resources found in my-namespace namespace.

NAMESPACE LABEL

→ By default, namespaces didn't have
any identifying labels.

→ Welcome to the new immutable label
kubernetes.io/metadata.name that has been added
to all namespaces.

↳ its value = the namespace name

↳ it can be used with any label selector
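A quick sketch of using it (here with the default namespace):

$ kubectl get namespaces -l kubernetes.io/metadata.name=default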
Kubernetes 1.23

GENERAL

o Last release of 2021
o 47 enhancements

HORIZONTAL POD AUTOSCALER (GA)

→ The autoscaling/v2 stable API moves to GA
and the autoscaling/v2beta2 API has been deprecated:

apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: my-hpa
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: my-deploy
  minReplicas: 3
  maxReplicas: 10
  metrics:
  - type: Resource
    resource:
      name: cpu
      target:
        type: Utilization
        averageUtilization: 80
  behavior:
    scaleUp:
      policies:
      - type: Percent
        value: 90
        periodSeconds: 15
KUBECTL DEBUG

→ Run an ephemeral debug container near
the Pod we want to debug:

$ kubectl debug -it my-pod --image=busybox
--target=my-pod --container=my-debug-container

↳ --target: share the process namespace
with a container inside the Pod,
so the debug container can see all
the processes created by my-pod
TTL FOR FINISHED JOBS

→ Welcome to the new field ttlSecondsAfterFinished
in the Job's spec, that will delete the Job once it has
finished.

→ No need to schedule a cleanup of Jobs anymore!

apiVersion: batch/v1
kind: Job
metadata:
  name: my-job-with-ttl
spec:
  ttlSecondsAfterFinished: 100
  template:
    metadata:
      name: my-job-with-ttl
    spec:
      restartPolicy: Never
      containers:
      - name: busybox
        image: busybox
POD SECURITY ADMISSION

→ PodSecurityPolicy (PSP) will be removed in 1.25.

→ Now the new Pod Security Admission is enabled by
default, and built-in, in order to prevent Pods
from ever having dangerous capabilities.

DUAL-STACK IPv4/IPv6 (GA)

→ All clusters now support
IPv4 and IPv6 at the same time
(dual-stack).

→ To use this feature, Nodes must have IPv4 &
IPv6 network interfaces, you must use a
dual-stack capable CNI network plugin,
& configure Services to use both with
the ipFamilyPolicy field.
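A hedged sketch of such a Service (the names are placeholders):

apiVersion: v1
kind: Service
metadata:
  name: my-dual-stack-svc
spec:
  ipFamilyPolicy: PreferDualStack
  ipFamilies:
  - IPv4
  - IPv6
  selector:
    app: my-app
  ports:
  - port: 80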
Kubernetes 1.24

GENERAL

o 46 enhancements
o Several removals

DOCKERSHIM REMOVAL

o Docker support in Kubelet is now removed.

o But don't panic, your Docker-produced images
still work!
GRPC PROBES

→ Aim to configure Startup/Liveness/Readiness probes
for your gRPC applications:

apiVersion: v1
kind: Pod
metadata:
  name: etcd-with-grpc
spec:
  containers:
  - name: etcd
    image: k8s.gcr.io/etcd:3.5.1-0
    command: [ "/usr/local/bin/etcd", "--data-dir",
      "/var/lib/etcd", "--listen-client-urls",
      "http://0.0.0.0:2379", "--advertise-client-urls",
      "http://127.0.0.1:2379", "--log-level", "debug"]
    ports:
    - containerPort: 2379
    livenessProbe:
      grpc:
        port: 2379
      initialDelaySeconds: 10
LOADBALANCER CLASS (GA)

→ New field loadBalancerClass in the
Service spec that allows to specify
the kind of load balancer you want.

OOM EVENTS METRIC

→ Welcome to a new metric in Kubelet:
container_oom_events_total, that allows
you to count OOM (Out Of Memory) events
that happen in each container.
BETA APIs

→ Alpha APIs are disabled by default and can be
enabled or disabled with feature gates.

→ Now new Beta APIs are disabled by default too.

SA TOKEN

→ You can now create additional API tokens
for a ServiceAccount (SA):

apiVersion: v1
kind: Secret
metadata:
  name: my-sa-secret
  annotations:
    kubernetes.io/service-account.name: mysa
type: kubernetes.io/service-account-token
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: mysa
  namespace: default
secrets:
- name: my-sa-secret
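Since kubectl 1.24, you can also request a short-lived token for a ServiceAccount directly (a sketch):

$ kubectl create token mysa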
Kubernetes 1.25

GENERAL

o 40 enhancements and novelties

PODSECURITYPOLICY REMOVAL

→ PSP has been removed because of its
complexity. But don't panic, a
replacement exists: Pod Security Admission.

https://kubernetes.io/docs/tasks/configure-pod-container/migrate-from-psp/
USER NAMESPACES (Alpha)

→ Reduces vulnerabilities due to excessive privileges
given to a Pod.

→ Welcome to a long-awaited feature:
support of user namespaces in Kubernetes!

→ Pre-requisites: active feature gate
UserNamespacesSupport=true

How to:

apiVersion: v1
kind: Pod
metadata:
  name: my-pod
spec:
  hostUsers: false
  containers:
  - name: my-container
    image: my-image
POD SECURITY ADMISSION (GA)

→ The new Pod Security Admission is now stable.

CONTAINER CHECKPOINTING (Alpha)

→ This feature allows taking a
snapshot of a running container, that
can be transferred to another Node
for analysis needs.
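The snapshot is triggered through the Kubelet API; a hedged sketch (the namespace, Pod and container names are placeholders, the ContainerCheckpoint feature gate must be enabled, and authentication flags are omitted):

$ curl -X POST "https://localhost:10250/checkpoint/my-ns/my-pod/my-container"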
RETRIABLE JOBS (Alpha)

→ Thanks to this feature, you can
determine if a Job should be retried
in case of failure, thanks to a Pod failure policy.
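A hedged sketch of such a policy (assuming the JobPodFailurePolicy feature gate is enabled; exit code 42 is illustrative):

apiVersion: batch/v1
kind: Job
metadata:
  name: my-job
spec:
  backoffLimit: 6
  podFailurePolicy:
    rules:
    # don't retry if the app exits with code 42
    - action: FailJob
      onExitCodes:
        containerName: my-container
        operator: In
        values: [42]
  template:
    spec:
      restartPolicy: Never
      containers:
      - name: my-container
        image: my-image:1.0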
KUBECTL DEBUG

→ Aim: run an ephemeral container near the
Pod we want to debug:

$ kubectl debug -it my-pod --image=busybox
--target=my-pod --container=my-debug-container

↳ --target: share the process namespace
with a container inside the Pod,
so the debug container can see all
the processes created by my-pod
Kubernetes 1.26

POD DISRUPTION BUDGET
& UNHEALTHY PODS

→ A PDB only takes into account Running Pods, but
a Pod can be Running and not Ready.
This new feature allows to prevent an eviction.

→ Pre-requisites: active feature gate
PDBUnhealthyPodEvictionPolicy=true
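A hedged sketch of a PDB using the new field (AlwaysAllow lets already-unhealthy Pods be evicted; names and values are placeholders):

apiVersion: policy/v1
kind: PodDisruptionBudget
metadata:
  name: my-pdb
spec:
  minAvailable: 2
  selector:
    matchLabels:
      app: my-app
  unhealthyPodEvictionPolicy: AlwaysAllow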
PROVISION FROM VOLUME SNAPSHOT
ACROSS NAMESPACES (Alpha)

→ Aim to create a PersistentVolumeClaim
from a VolumeSnapshot across
namespaces.

(sketch: a PVC in Namespace 2 is provisioned
from a snapshot taken in Namespace 1)
MIXED PROTOCOLS IN LOADBALANCER SERVICE

→ Aim to create a Service of type
LoadBalancer that has different port
definitions with different protocols:

apiVersion: v1
kind: Service
metadata:
  name: ingress-nginx
  namespace: ingress-nginx
  labels:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
spec:
  type: LoadBalancer
  ports:
  - name: tcp
    port: 5555
    targetPort: 5555
    protocol: TCP
  - name: udp
    port: 5556
    targetPort: 5556
    protocol: UDP
  selector:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
WINDOWS HOSTPROCESS CONTAINERS (GA)

→ HostProcess containers on Windows are the
equivalent of privileged containers on Linux:
they allow to run administrative daemons
on Windows Nodes.

CRI v1alpha2 REMOVAL

→ The Kubelet now only speaks CRI v1:
containerd 1.5 and older are no longer
supported (upgrade to containerd 1.6+).
Kubernetes 1.27

GENERAL

o Release name: "Chill Vibes"

MOVING REGISTRY

→ The official image registry has moved from k8s.gcr.io
to registry.k8s.io: images are now served from the region
closest to you, no matter which Cloud provider you use.
VOLUME GROUP SNAPSHOT (Alpha)

→ Aim to create snapshots across
all the volumes of a Pod.

→ A group snapshot of volumes allows
to prevent data loss.

→ Pre-requisites: active flag

CREATE_DELETE_GET_VOLUME_GROUP_SNAPSHOT
VOLUME ACCESS FOR A SINGLE POD (Beta)

→ Allows to restrict volume access
to a single Pod.

→ Pre-requisites: active feature gate
ReadWriteOncePod=true

How to:

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: my-pvc
spec:
  accessModes:
  - ReadWriteOncePod
  resources:
    requests:
      storage: 1Gi
GRACE PERIOD CONFIGURATION FOR PROBES

→ The new terminationGracePeriodSeconds field
in the livenessProbe asks the Kubelet how long
to wait before forcibly killing a container
that failed its probe.

How to:

apiVersion: v1
kind: Pod
spec:
  containers:
  - name: my-container
    image: my-image:1.0
    livenessProbe:
      httpGet:
        path: /healthz
        port: 8080
      initialDelaySeconds: 60
      periodSeconds: 20
      terminationGracePeriodSeconds: 60
START ORDINAL FOR STATEFULSET

→ Now you can define the start ordinal for
a StatefulSet.

→ Useful in case of a migration to another cluster.

→ Pre-requisites: active feature gate
StatefulSetStartOrdinal=true

How to:

$ kubectl patch sts my-sts
-p '{"spec": {"ordinals": {"start": 5}}}'
Docker deprecation

Since 1.20:

→ Docker support in Kubelet is now deprecated
& will be removed in a future release.

→ Images can still be built locally with Docker
& pushed through CI/CD, then pulled by Kubernetes.
→ Docker produces OCI (Open Container Initiative) images.

→ Kubernetes will use containerd as CRI runtime.

Previous architecture:

kubelet → dockershim → Docker → containerd

Problems:

→ Docker includes so many components (networking,
volumes, UX enhancements...)

→ Security risks


* Kubelet only understands CRI runtimes

* Need to maintain the dockershim bridge

New architecture:

kubelet → containerd

→ containerd is an option for the container runtime,
but also CRI-O.

↳ the container runtime is responsible for pulling
and running the container images
Glossary

> CJ: CronJob
> CM: ConfigMap
> CNI: Container Network Interface
> CRI: Container Runtime Interface
> HPA: Horizontal Pod Autoscaler
> NP: Network Policy
> NS: Namespace
> OCI: Open Container Initiative
> PDB: Pod Disruption Budget
> PSP: Pod Security Policy
> PV: Persistent Volume
> PVC: Persistent Volume Claim
> QoS: Quality of Service
> RBAC: Role-Based Access Control
> SA: Service Account
In the same collection:

→ Available in paperback on Amazon

→ YouTube:
https://www.youtube.com/AurelieVache
About the author

Aurélie Vache - DevRel & DevOps, 16+ years of experience

→ Google Developer Expert in Cloud technologies
→ CNCF Ambassador
→ Docker Captain
→ Duchess France / Women in tech association
→ Technical writer - Speaker - Sketchnoter

Contact me!
Aurélie Vache - @aurelievache

Abstract

Understanding Kubernetes can be difficult or time-consuming.
I created this collection of sketchnotes about Kubernetes
in order to try to explain the technology in a visual way.

Inside:
Kubernetes components
Resources
Concrete examples
Tips & Tools

You might also like