Understanding Kubernetes
in a visual way

Aurélie Vache
Thanks to the reviewers.

Release date: 31/05/2020
Last changelog release: 29/04/2023 (Kubernetes 1.27)
Length: 279 pages
Kubernetes version: 1.27
Kubernetes components
Kubeconfig file
Kubectl tips
Namespace
Resource Quota
Pod
Lifecycle
Deletion
Liveness & Readiness probes
Startup probe
Job
CronJob
ConfigMap
Secret
Deployment
Rolling Update
Pull images configuration
ReplicaSet
DaemonSet
Service
Labels & Selectors
Ingress
Volumes & Storage Class
Horizontal Pod Autoscaler
LimitRange
Resource's requests & limits
Pod Disruption Budget (PDB)
Quality of Service (QoS)
Network Policies
RBAC
Pod Security Policy (PSP)
Lifecycle
Node operations
Debugging/Troubleshooting
Kubectl convert
Tools: Kubectx, Kubens, Stern, Krew, K9s, Skaffold, Kustomize, Tips, Kubeseal, Trivy, Velero, Popeye, Kyverno
Changes: Kubernetes 1.18 to 1.27
Docker deprecation
Glossary
In the same collection
[Illustration: the other books of the collection]

Kubernetes components

[Illustration: the control plane components and the worker Nodes]
etcd
→ Distributed key-value database
→ etcd is the single source of truth

API Server
→ Exposes the Kubernetes API

Scheduler
→ Responsible for finding the best Node for newly created Pods
"No problem, my mission is to find a suitable Node for this new Pod."

Controller Manager
→ Responsible for making changes in order to move the current state towards the desired state
→ Runs several separate controller processes:
◦ Node controller → watches Nodes & takes action when they are not healthy
◦ Replication controller → ensures the desired number of Pods are running; can create and delete Pods ("Oh, oh! A Pod failed! I need a new one!")
◦ Endpoints controller → populates the Endpoint objects (responsible for Services & Pods connections)
◦ ServiceAccount controller → creates default accounts for new namespaces

Kube-proxy
→ Runs on each Node
Kubeconfig file

→ Kubectl knows where (in which cluster) and in which namespace to execute your commands thanks to the kubeconfig file
"Ok, in my-ns namespace… but in which cluster?"
→ Kubeconfig files are structured in YAML

How are kubeconfig files loaded?
① --kubeconfig flag, if specified
② KUBECONFIG environment variable
③ $HOME/.kube/config, by default

⚠ You can't just append several YAML kubeconfig files in one kubeconfig file!

Tips
It's possible to specify several files in $KUBECONFIG:
$ export KUBECONFIG=file1:file2:file3
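A kubeconfig file groups clusters, users, and contexts. As a minimal sketch (the names and server URL are placeholders, not taken from the book):

```yaml
apiVersion: v1
kind: Config
clusters:
- name: my-cluster
  cluster:
    server: https://my-cluster.example.com:6443
contexts:
- name: my-context
  context:
    cluster: my-cluster
    user: my-user
    namespace: my-ns
current-context: my-context
users:
- name: my-user
  user:
    token: <redacted>
```

The current-context field is what kubectl (and kubectx/kubens) switch when you change cluster or namespace.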
" Answer
8
TKubecte
iKuetips
'
supports one minor
mmeu
version
logic
IIs Kubecre
Avery
=
14
? How to
} Scale my "deploy" to 5 replicas
$ kubectl scale deploy my-deploy --replicas=5

} Delete all the Pods that have the status != "Running"
$ kubectl delete po --field-selector=status.phase!=Running
} List the Namespaces, only their name
$ kubectl get ns -o custom-columns="NAME":".metadata.name" --no-headers
} Watch the Pods in my namespace
The -w option aims to listen for changes to a particular object.
Example:
$ kubectl get pod -w

} Get extended information about a Pod
Example:
$ kubectl get pod my-pod -o wide

} Get only the names of the Secrets that match a label
Example:
$ kubectl get secret -l sealedsecrets.bitnami.com/sealed-secrets-key -o name
} Delete a resource but keep its dependents
The --cascade=false option allows to delete only the resource, not its dependents.
Example:
$ kubectl delete job my-job --cascade=false -n my-namespace
Namespaces

→ Way of isolation:
◦ per project
◦ per team
◦ per family of components

[Illustration: two namespaces, each containing its own deploy/my-deploy and svc/my-svc]
→ Each resource can appear in only one namespace
⚠ If a namespace is deleted, all the resources inside are deleted too.

→ A Pod in namespace A can't read a Secret from namespace B
Special namespace: kube-public
→ reserved mainly for cluster use
→ its objects are readable by everyone

Special namespace: kube-node-lease
→ Holds the heartbeats sent by Nodes; these updates give the ability to determine the availability of a Node.
→ 2 forms of heartbeats:
◦ updates to the NodeStatus
◦ Lease objects
? How to
} Switch to the my-ns namespace
$ kubens my-ns

} List the resources that are in a namespace
$ kubectl api-resources --namespaced=true
"
}
"
Delete a
namespace
stucked in Termina ting state
Ëa please help me µ
'
ter-inat.no#y.stucked-nsKubernetesversion(y
bb
-
-
Works eren in Old
23
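The exact commands from this page did not survive extraction; a commonly used approach (a sketch, not necessarily the book's exact commands; `stucked-ns` is a placeholder name) is to clear the namespace's finalizers through the finalize subresource:

```shell
# Dump the namespace, empty its finalizers, and push it back
# through the API's finalize subresource.
kubectl get ns stucked-ns -o json \
  | jq '.spec.finalizers = []' \
  | kubectl replace --raw "/api/v1/namespaces/stucked-ns/finalize" -f -
```

Note that leftover finalizers usually point at a real problem (an unreachable aggregated API, for example), so this is a last resort.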
Resource Quota

→ Defines constraints that limit aggregate resource consumption per namespace
⚠ If a quota is set on memory or CPU, you need to specify requests & limits for your Pods
Pod

→ Smallest unit
→ Can contain several containers
→ One IP address per Pod
? How to
} Copy a file stored locally to a Pod
Pod lifecycle

[Illustration: the phases a Pod goes through, from Pending to Succeeded or Failed]
Pending
Container images have not yet been created.

Running
Bound to a Node. All containers are created. At least one is running.

Succeeded
All containers are terminated with success.

Failed
At least one container is in failure (exited with a non-zero code or terminated by the system).

Unknown
The status of the Pod could not be obtained.
Deletion of a Pod
(with a new configuration or code, for example)

→ The deletion can be manual or forced
? How to
} Delete a Pod gracefully
$ kubectl delete pod pod-name
"I removed it & cleared the Pod name."
} Delete a Pod instantly (manually forced)
⚠ When you force a Pod deletion, the name is automatically freed from the API Server
Liveness & Readiness probes

→ Liveness & Readiness probes should be configured
Kubelet: "Are you alive?" — livenessProbe: OK
→ If the livenessProbe is NOK, Kubelet restarts the container
◦ initialDelaySeconds: how long to wait before performing the first probe
◦ terminationGracePeriodSeconds: how long to wait before the deletion of the container
→ Different types of liveness probes exist:

◦ httpGet
} Define a liveness probe that sends an HTTP request to the container's server, on port 8080, on /healthz
(livenessProbe OK if HTTP code >= 200 and < 400)
apiVersion: v1
kind: Pod
…
spec:
containers:
- name: my-container
image: my-image:1.0
livenessProbe:
httpGet:
path: /healthz
port: 8080
initialDelaySeconds: 60
periodSeconds: 20
terminationGracePeriodSeconds: 60
◦ exec
} Define a liveness probe that executes the command cat /tmp/healthy
(livenessProbe OK if the return code == 0)
apiVersion: v1
kind: Pod
…
spec:
containers:
- name: my-container
image: my-image:1.0
livenessProbe:
exec:
command:
- cat
- /tmp/healthy
initialDelaySeconds: 5
periodSeconds: 5
◦ tcpSocket
} Define a liveness probe which connects to port 8080
apiVersion: v1
kind: Pod
…
spec:
containers:
- name: my-container
image: my-image:1.0
livenessProbe:
tcpSocket:
port: 8080
initialDelaySeconds: 60
periodSeconds: 30
◦ grpc
} Define a liveness probe that queries a gRPC service
apiVersion: v1
kind: Pod
…
spec:
containers:
- name: my-container
image: my-image:1.0
livenessProbe:
grpc:
port: 9090
service: my-service
initialDelaySeconds: 8
periodSeconds: 12
Readiness probe

Kubelet: "Are you ready?" — readinessProbe: OK → the Pod receives traffic from the Service
readinessProbe: NOK → the Pod is removed from the Service

→ Kubelet uses readiness probes to know when a container is ready to start accepting traffic
→ If NOK, Kubelet can remove the link between the Service and the Pod
Startup probe

→ Holds off all the other probes until the Pod finishes starting
Kubelet: startupProbe: OK → the liveness & readiness probes can start
} Don't test for liveness until the HTTP endpoint is available
apiVersion: v1
kind: Pod
metadata:
name: my-pod
labels:
app: my-app
spec:
containers:
- name: my-container
image: my-image:1.0
ports:
- name: liveness-port
containerPort: 8080
livenessProbe:
httpGet:
path: /healthz
port: liveness-port
failureThreshold: 1
periodSeconds: 10
  startupProbe:
    httpGet:
      path: /healthz
      port: liveness-port
    failureThreshold: 30
    periodSeconds: 10
(the app has 5 min (30 x 10s) to finish its startup)
Container lifecycle hooks

[Illustration: postStart and preStop events in a container's lifecycle]
? How to
} Create a Deployment with 3 replicas, with a postStart hook that waits until the proxy (envoy) is ready
apiVersion: apps/v1
kind: Deployment
metadata:
name: my-app
labels:
app: my-app
spec:
replicas: 3
selector:
matchLabels:
app: my-app
template:
metadata:
labels:
app: my-app
spec:
containers:
- name: my-app
image: my-image:latest
lifecycle:
postStart:
httpGet:
path: /healthz/ready
port: 15020
} Create a Deployment with a container that says hello at startup
…
spec:
containers:
- name: my-app
image: my-image:latest
lifecycle:
postStart:
exec:
command: ["/bin/sh", "-c", "echo Hello
Kubernetes lovers"]
} Create a Deployment that kills gracefully my-app before the Pod terminates
"Wait, you need to say goodbye first!"
…
spec:
containers:
- name: lifecycle-demo-container
image: nginx
lifecycle:
preStop:
exec:
command: ["/bin/sh", "-c", "my-app -kill;
do sleep 1; done"]
Execute commands

→ The kubectl exec command allows you to run commands inside a container of a Pod

? How to
} Execute a command in a container
$ kubectl exec my-pod -c my-container -- ls -l
$ kubectl exec my-pod -c my-container -- cat path/to/a/file

} Connect to a specific container & open an interactive shell
$ kubectl exec my-pod -c my-container -it -- sh
> ls -l
> tail -f /var/log/debug.log
→ It's possible to specify a default container with an annotation:
apiVersion: v1
kind: Pod
metadata:
name: my-pod
annotations:
kubectl.kubernetes.io/default-container: my-container-2
spec:
containers:
- name: my-container
image: my-image
- name: my-container-2
image: my-image-2
command: ["/bin/sh", "-c"]
args:
- while true; do
date >> /html/index.html;
sleep 1;
done
(no need to specify the -c option thanks to the annotation)
Init containers

→ Init containers run before app containers in a Pod
→ A Pod can have one or more init containers
→ Kubelet runs each init container sequentially, and then the app containers:
① Init container 1 ✓
② Init container 2 ✓
③ App containers

→ Each init container needs to start successfully before executing the next one.
⚠ If restart policy ≠ Never:
If an init container is not ready, Kubelet restarts it until it succeeds.

⚠ If restart policy = Never:
If an init container fails, the whole Pod is considered failed.

→ Useful for:
◦ Init DB schemas
◦ Set up permissions
? How to
} Create a Pod with a my-app container & an init container that waits for my-svc
apiVersion: v1
kind: Pod
metadata:
name: my-app
spec:
containers:
- name: my-container
image: my-app:1.0
command: ['sh', '-c', 'echo The app is running! && sleep 3600']
initContainers:
- name: check-service
image: busybox
command: ['sh', '-c', "until nslookup my-svc.$(cat /var/run/
secrets/kubernetes.io/serviceaccount/namespace).svc.cluster.local;
do echo waiting for my-svc; sleep 5; done"]
} Create a Pod with:
◦ an init container that clones a git repository into the /usr/share/nginx/html folder
◦ a nginx container
◦ a shared volume
apiVersion: v1
kind: Pod
metadata:
name: my-website
spec:
initContainers:
- name: clone-repo
image: alpine/git
command:
- git
- clone
- --progress
- https://github.com/scraly/my-website.git
- /usr/share/nginx/html
volumeMounts:
- name: website-content
mountPath: "/usr/share/nginx/html"
containers:
- name: nginx
image: nginx
ports:
- name: http
containerPort: 80
volumeMounts:
- name: website-content
mountPath: "/usr/share/nginx/html"
volumes:
  - name: website-content
    emptyDir: {}
→ A Pod is mortal, so every time a new one is created, the init container clones the repository again.
[Illustration: developers commit to the repository; the init container clones it; the nginx container serves the website]
Job

→ A process that runs a certain time to completion:
◦ batch process
◦ backup
◦ database migration/clean
◦ ...

→ When the Pod launched by a Job is finished, its status will be Completed.
? How to
} Create a Job

apiVersion: batch/v1
kind: Job
metadata:
  name: my-job
spec:
  backoffLimit: 5              }
  activeDeadlineSeconds: 120   } Optional
  completions: 1               }
  parallelism: 1               }
  template:
    spec:
      containers:
      - name: busybox
        image: busybox
      restartPolicy: OnFailure } Required
} Create a Job that will be deleted automatically after the defined time

apiVersion: batch/v1
kind: Job
metadata:
  name: my-job-with-ttl
spec:
  ttlSecondsAfterFinished: 100
  template:
    spec:
      containers:
      - name: busybox
        image: busybox
      restartPolicy: OnFailure

} Delete a Job only (keep its Pods)
$ kubectl delete job my-job --cascade=false
restartPolicy
◦ Never: If the Pod failed, the Job controller starts a new Pod.
◦ OnFailure: If the Pod failed (for any reason), the Pod stays on the Node but the containers will re-run.
⚠ restartPolicy is applied to a Pod (not a Job) and cannot be set to Always.

backoffLimit
Number of retries before considering a Job as failed.
By default, backoffLimit is equal to 6.

activeDeadlineSeconds
Once a Job reaches activeDeadlineSeconds, all of its running Pods are terminated and the Job status becomes reason: DeadlineExceeded — even if the backoffLimit is not yet reached.
completions
By default, completions is equal to 1, which means the Job is complete as soon as one Pod terminates successfully.
If we set completions equal to 5, for ex., the Job controller will create Pods until 5 of them terminate successfully.

parallelism
parallelism allows you to run multiple Pods at the same time.
CronJob

→ Creates Jobs on a time-based schedule, or periodically
→ Based on the cron format
) Create a CronJob from busybox image (Declarative way)
apiVersion: batch/v1
kind: CronJob
metadata:
name: hello
spec:
schedule: "*/1 * * * *"
valid time zone
timeZone: "Europe/Paris" 3 a
jobTemplate:
spec:
template:
spec: 3 number of failed Jobs
failedHistoryLimit: 3 3 number of successful Job
successfulJobHistoryLimit: 1
containers:
- name: hello
image: busybox
args:
- /bin/sh
- -c
- date; echo Hello readers
68
restartPolicy: onFailure
jerespect
e
S
recensé à
secondee
69
ConfigMap

→ Don't hardcode configuration data in your app; put non-sensitive data in a ConfigMap instead!
→ Allows to deploy the same app in different environments
[Illustration: my-cm-staging used in staging, my-cm-prod used in production]
→ 3 ways to create a ConfigMap:
◦ from key and value
◦ from an env file
◦ from a file
? How to
} Create a ConfigMap from key and value
$ kubectl create cm my-cm --from-literal=my-key=my-value -n my-namespace

} Create a ConfigMap from a file
$ kubectl create cm my-cm-1 --from-file=my-file.txt -n my-namespace

} Create a ConfigMap from an env file
$ kubectl create cm my-cm-1 --from-env-file=my-envfile.txt -n my-namespace

} Use my-cm in a Pod, as an environment variable
apiVersion: v1
kind: Pod
metadata:
name: my-pod-2
spec:
containers:
- name: my-container
image: busybox
env:
- name: MY-ENV-KEY
valueFrom:
configMapKeyRef:
name: my-cm
key: my-key
Secret

→ Save sensitive data in Secrets
⚠ Secret values are base64-encoded, not encrypted
→ Automatically decoded when attached to a Pod
→ 3 ways to create a generic Secret:
◦ from key and value
◦ from an env file
◦ from a file

? How to
} Create a Secret from key and value & decode it
Deployment

→ Deployment handles Pod creation
→ Keeps a Deployment history
? How to
} Create a Deployment that creates one Pod with a nginx image (imperative way)
$ kubectl create deploy my-deploy --image=nginx

} Create a Deployment that creates 3 replicas of a Pod with a nginx image (declarative way)
apiVersion: apps/v1
kind: Deployment
metadata:
name: my-deploy
labels:
app: nginx
spec:
replicas: 3
selector:
matchLabels:
app: my-deploy
template:
metadata:
labels:
app: my-deploy
spec:
containers:
- name: nginx
image: nginx
} Scale a Deployment to 5 replicas
$ kubectl scale deploy my-deploy --replicas=5
Rolling Update

→ Deployment update
→ Incrementally updates instances by creating new ones
→ Can be reverted to a previous version

[Illustration: create the deployment, then update the image from busybox to busybox:1.36.0]
[Illustration: during a rolling update, a new Pod with image busybox:1.36.0 is created, then the old Pod is removed; the same thing happens for each replica. A rollback to a previous revision works the same way, in reverse.]
? How to
① Create a Deployment in a file named my-deploy.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
name: my-deploy
labels:
app: busybox
spec:
replicas: 3
selector:
matchLabels:
app: busybox
template:
metadata:
labels:
app: my-deploy
spec:
containers:
- name: busybox
image: busybox
  strategy:
    rollingUpdate:
      maxSurge: 100%
      maxUnavailable: 0%
    type: RollingUpdate
(100% of Pods will be created & then the old Pods will be deleted)
③ Update the deployment container's image name/tag
$ kubectl set image deploy my-deploy busybox=busybox:1.36.0 --record

④ Show the deployment rolling update logs
$ kubectl rollout status deploy my-deploy

⑤ Show the history of deployment creations/updates
$ kubectl rollout history deploy my-deploy

⑦ Rollback to the previous deployment
$ kubectl rollout undo deploy my-deploy
} Restart a deployment (kill existing Pods and create new ones)

} Pause and then resume the deployment of the resource
Pull images configuration

→ Several resources need a pull images configuration:
◦ Pod ◦ ReplicaSet ◦ Deployment ◦ DaemonSet ◦ CronJob ◦ StatefulSet ◦ Job ◦ ...
→ Every resource type including a container spec
imagePullPolicy
→ Tells Kubelet when to pull the image:
…
containers:
- name: my-container
image: my-registry.com/my-app:tag
imagePullPolicy: Always
◦ IfNotPresent (default value): Kubelet doesn't pull the image from the registry if the image already exists on the Node
◦ Always: Kubelet always pulls the image from the registry
◦ Never: Kubelet never pulls the image; it must already exist on the Node

imagePullSecrets
→ Used to pull images from a private registry
? How to
} Configure a Pod to pull its image with imagePullSecrets
…
containers:
- name: my-container
image: my-registry.com/my-app:tag
imagePullPolicy: Always
imagePullSecrets:
- name: registry-secret
→ Kubernetes uses the docker-registry secret type for image registry authentication
ReplicaSet

→ ReplicaSet creates Pods according to the desired number of replicas
→ ReplicaSet can be created manually, or automatically by a Deployment (Deployment → ReplicaSet → Pods)
DaemonSet

→ Ensures that all eligible Nodes run a copy of a Pod
⚠ The scheduler doesn't manage these Pods; the DaemonSet controller will handle them instead.
Update strategies

◦ RollingUpdate
Update strategy by default. Old Pods will be automatically killed and new ones created.

◦ OnDelete
Existing Pods need to be manually deleted in order to create new ones.
? How to
① Create a DaemonSet that creates one Pod on each Node, including master Nodes
apiVersion: apps/v1
kind: DaemonSet
metadata:
name: my-app
spec:
selector:
matchLabels:
name: my-app
template:
metadata:
labels:
name: my-app
    spec:
      tolerations:                            } Specify the DaemonSet
      - key: node-role.kubernetes.io/master   } will run on
        effect: NoSchedule                    } master Nodes
containers:
- name: busybox
image: busybox
② Create a DaemonSet that creates Pods only on Nodes matching a nodeSelector
apiVersion: apps/v1
kind: DaemonSet
metadata:
name: my-app
spec:
selector:
matchLabels:
name: my-app
template:
metadata:
labels:
name: my-app
spec:
nodeSelector:
my-key: my-value
containers:
- name: busybox
image: busybox
⚠ Best practice: configure probes on your DaemonSet Pods.

} A Pod is stuck in "Pending" state
Quick solutions:
→ List all the Nodes in the cluster
→ List the Nodes on which the DaemonSet's Pods are running
→ Compare the lists in order to find the Node in trouble.
tËï "
u
K
v
Service
Assigns to
→ a
group of Pods a
unique DNS name
☒
(t -
↓ •Ï
123
my auesome app
- - -
s →
- •☒☒
/
Ë
456
my auesome app
-
-
\,
-
⊕
-789
my auesome app
- -
101
→ The set of Pods targeted by a Service is usually determined by a selector
[Illustration: the Service my-awesome-app.my-ns.svc.cluster.local, with selector app: my-app, targeting the Pods my-awesome-app-123, -456 and -789 labeled app: my-app]
→ Several kinds of Services exist:
◦ ClusterIP
◦ NodePort
◦ LoadBalancer
◦ ExternalName
◦ Headless
Types

◦ ClusterIP
→ Exposes the Service on a cluster-internal IP
→ Reachable at spec.clusterIP:spec.ports[0].port
⚠ Only reachable from within the cluster
◦ NodePort
→ Reachable outside of the cluster by requesting <NodeIP>:spec.ports[0].nodePort
◦ LoadBalancer
→ Assigns a fixed external IP address
◦ ExternalName
→ Provides an internal alias for an external DNS name
◦ Headless
→ Lets you use your own service discovery mechanism
→ A cluster IP is not allocated
→ A DNS lookup on the Service returns the IPs of all the Pods behind it
? How to
} Create a Service that exposes a Deployment on port 60
⚠ Some Service fields are immutable; delete the Service before recreating it
Labels & Selectors

Labels
→ Key/value pairs attached to an object, e.g. app: my-app
→ Several objects/resources can have the same label (my-pod-1 and my-pod-2 can both carry app: my-app)
⚠ The kubernetes.io/ and k8s.io/ prefixes in label keys are reserved for Kubernetes core components
? How to
} Create a Pod with two labels
apiVersion: v1
kind: Pod
metadata:
name: my-app
labels:
app: my-app
version: v1
spec:
containers:
- name: nginx
image: nginx
ports:
- containerPort: 80
Selectors
→ Selectors use labels to filter or select objects/resources
◦ equality-based: exact match with =, ==, !=
◦ set-based: match expressions (in, notin, exists)
? How to
} Show the labels of all of my Pods
$ kubectl get pods --show-labels

} Create a Deployment that will manage Pods with the label app: my-app
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-deploy
  labels:
    app: my-app
spec:
  selector:
    matchLabels:      } equality-based
      app: my-app
} Create a Deployment that manages Pods whose label version equals v1 or v2

apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-deploy
  labels:
    app: my-app
spec:
  selector:
    matchLabels:
      app: my-app
    matchExpressions:
    - {key: version, operator: In, values: ["v1","v2"]}   } set-based

} List Pods that have the labels app: my-app & version: v2
} List Pods whose version is v1, v2 or v3
Ingress

→ Allows access to your Services from outside the cluster
[Illustration: an Ingress routing external traffic to several Services, each backed by Pods]
→ An Ingress is implemented by a 3rd party: an Ingress Controller
↳ extends the specs to support additional features
→ Consolidates several routes in a single resource
? How to
} Create a Nginx Ingress which defines two path-based routes: /my-api/v1 to the Service my-api-v1 and /my-api/v2 to the Service my-api-v2
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
name: my-ingress
annotations:
nginx.ingress.kubernetes.io/rewrite-target: /
spec:
  rules:
  - host: scraly.com    } If no host is specified, the rules are
    http:               } applied to all inbound HTTP traffic
      paths:
      - path: /my-api/v1
        pathType: Prefix
backend:
service:
name: my-api-v1
port:
number: 80
- path: /my-api/v2
pathType: Prefix
backend:
service:
name: my-api-v2
port:
number: 80
IngressClass
→ In order to specify which Ingress should be handled by which controller:

…
metadata:
  name: my-gce-ingress
  annotations:
    kubernetes.io/ingress.class: gce

pathType
◦ Exact: matches the URL path exactly, with case sensitivity
◦ Prefix: /my-api/v1 matches /my-api/v1 and /my-api/v1/toto
} Create an Ingress secured through TLS
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
name: my-ingress-with-tls
spec:
tls:
- hosts:
- scraly.com
secretName: secret-tls
rules:
- host: scraly.com
…
⚠ Ingress implementations aren't homogeneous: some controllers introduce their own resource kind (for example, Contour's is named IngressRoute).
Volumes & Storage Class

In short:
→ Pods are mortal by default & Nodes will not live forever either
→ To store data permanently, use a Persistent Volume
Static provisioning (without Storage Class):
[Illustration: a Pod in a namespace uses a PVC, bound to a manually created PV backed by storage]
Dynamic provisioning:
[Illustration: a Pod in a namespace uses a PVC linked to a StorageClass, which dynamically creates a PV backed by storage]
Persistent Volume

→ Provides a storage location that has a lifetime independent of any Node or Pod
→ A PV is not sticked in a namespace
→ NFS, cloud provider specific storage systems, ...
→ After creation, a PV has a STATUS of Available. It means it's not bound to a PVC yet.
→ 2 sorts of provisioning:
◦ static: no storageClassName
◦ dynamic: storageClassName
accessModes
◦ RWO (ReadWriteOnce): read-write by a single Node
◦ RWOP (ReadWriteOncePod): read-write by a single Pod
◦ RWX (ReadWriteMany): many Nodes can read from it or write to it

persistentVolumeReclaimPolicy
◦ Retain: appropriate for precious data.
? How to
} Create a PV, with NFS type, linked to an AWS EFS
apiVersion: v1
kind: PersistentVolume
metadata:
name: my-pv
spec:
capacity:
storage: 100Gi
accessModes:
- ReadWriteMany
nfs:
path: /
server: fs-xxx.efs.eu-central-1.amazon.com
readOnly: false
StorageClass

→ In order to set up dynamic provisioning
→ Each provisioner has its own parameters: GCEPersistentDisk, AzureDisk, AWSEBS, ...
→ volumeBindingMode controls when volume binding and dynamic provisioning occur
? How to
} Create a StorageClass with SSD disks for Google Compute Engine (GCE)
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
name: faster
provisioner: kubernetes.io/gce-pd
parameters:
type: pd-ssd
volumeBindingMode: Immediate
Persistent Volume Claim

→ Claims/Requests a suitable Persistent Volume

? How to
} Create a PVC that requests a storage
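The manifest for this how-to was lost in extraction; a minimal PVC requesting storage (the size and access mode are illustrative) could be:

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: my-pvc
spec:
  accessModes:
  - ReadWriteMany
  resources:
    requests:
      storage: 10Gi
```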
} Mount the PVC as a volume in a Pod, with read-only access
apiVersion: v1
kind: Pod
metadata:
name: my-pod
spec:
containers:
- name: my-container
image: nginx
volumeMounts:
- mountPath: "/data"
name: nfs-storage
readOnly: true
volumes:
- name: nfs-storage
persistentVolumeClaim:
claimName: my-pvc
Horizontal Pod Autoscaler

→ Automatically scales the number of Pods based on observed CPU usage
→ Available for ReplicaSet, Deployment & StatefulSet
[Illustration: the HPA scales the ReplicaSet, Deployment or StatefulSet up and down]
? How to
} Autoscale/create a HPA for a deployment that maintains an average CPU usage across all Pods of 80%
Newer features
} Create a HPA with a scaling behavior (behavior field, since Kubernetes 1.18)
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
name: my-hpa
spec:
scaleTargetRef:
apiVersion: apps/v1
kind: Deployment
name: my-deploy
minReplicas: 3
  maxReplicas: 10
  targetCPUUtilizationPercentage: 80
  behavior:
    scaleUp:
      policies:
      # scale up by at most 90% of the current replicas every 15s
      - type: Percent
        value: 90
        periodSeconds: 15
    scaleDown:
      policies:
      # scale down 1 Pod every 10 min
      - type: Pods
        value: 1
        periodSeconds: 600
LimitRange

→ Default memory request/limit values for the containers in a namespace

? How to
} Create a LimitRange
apiVersion: v1
kind: LimitRange
metadata:
name: memory-limit-range
spec:
limits:
- default:
memory: 512Mi
defaultRequest:
memory: 256Mi
type: Container
Scenarios
① Pod creation with one container, without memory request & limit

apiVersion: v1
kind: Pod
metadata:
  name: my-pod
spec:
  containers:
  - name: my-container
    image: nginx

LimitRange controller: "Ok, I append your configuration with the default values; now I can schedule you on a Node."

    resources:
      requests:
        memory: 256Mi
      limits:
        memory: 512Mi
② Pod creation with one container, with a memory limit only

apiVersion: v1
kind: Pod
metadata:
  name: my-pod
spec:
  containers:
  - name: my-container
    image: nginx
    resources:
      limits:
        memory: 512Mi

LimitRange controller: "Ok, I set the memory request equal to the limit."

      requests:
        memory: 512Mi
③ Pod creation with one container, with a memory request only

apiVersion: v1
kind: Pod
metadata:
  name: my-pod
spec:
  containers:
  - name: my-container
    image: nginx
    resources:
      requests:
        memory: 100Mi

LimitRange controller: "Ok, I set the memory limit equal to the one defined in the LimitRange."

      limits:
        memory: 512Mi
Resource's Requests & Limits

→ Kubelet watches every Pod; a container that uses too many resources can be killed/destroyed ("EX-TER-MI-NATE!")
[Illustration: a container using too many resources gets OOM killed]
→ An OOM (Out of Memory) Killer runs on each Node

resources:
  requests:
    memory: 64Mi
    cpu: 250m
  limits:
    memory: 128Mi
    cpu: 500m
→ The Scheduler uses this information to decide on which Node to place the Pod

resources:
  requests:
    memory: 64Mi
    cpu: 250m
  limits:
    memory: 128Mi
    cpu: 500m

[Illustration: the Scheduler compares the requests against the available resources of Node 1 and Node 2]
What is the difference between request and limit?

REQUEST
→ The amount of resources needed for a Pod to be scheduled.

LIMIT
→ The maximum amount of resources a container is allowed to use.

And the difference between CPU and memory?
◦ CPU is a compressible resource: when the limit is reached, the containers keep running, but they are throttled.
◦ Memory is a non-compressible resource: when the limit is reached, the container is killed.
? How to
} Create a Pod with defined requests & limits
apiVersion: v1
kind: Pod
metadata:
name: my-app
spec:
containers:
- name: my-container
image: my-image:1.0.0
resources:
requests:
memory: 64Mi
cpu: 250m
    limits:             } limits can't be lower
      memory: 128Mi     } than requests
      cpu: 500m
} Display the Nodes' memory & CPU consumption
$ kubectl top node
⚠ Requests and limits should be defined for each container!
Pod Disruption Budget

"If I define replicas for my Pods, is it enough to guarantee availability?" — Unfortunately, no!
→ PDB is a way to increase application availability
→ PDB provides protection against voluntary evictions:
◦ rolling upgrade
◦ deleting a Deployment
◦ ... and not the deletion of a single Pod!
My configuration: maxUnavailable: 1
Eviction API: "I can't delete this Pod, because maxUnavailable: 1."
→ PDB prevents voluntary disruptions
? How to
} Allow only one disruption

apiVersion: policy/v1beta1
kind: PodDisruptionBudget
metadata:
  name: ingress-nginx-controller-pdb
spec:
  minAvailable: 1     } how many Pods must always be available
  maxUnavailable: 1   } how many Pods can be evicted
  selector:
    matchLabels:
      app: my-app     } match only Pods with the label app: my-app

It's also possible to use a percentage: minAvailable: 50%

} Show the PDBs in my namespace
$ kubectl get pdb -n my-namespace
Quality of Service (QoS)

→ The Scheduler uses requests to make decisions about scheduling Pods on Nodes
→ Depending on the requests and limits defined, a Pod is classified in a certain QoS class.
→ 3 QoS classes exist: Guaranteed, Burstable, BestEffort
→ Guaranteed Pods are guaranteed to not be killed until they exceed their limits.
◦ Guaranteed
★ Requests and limits must be equal in all containers (even init containers)
apiVersion: v1
kind: Pod
metadata:
name: my-app
spec:
containers:
- name: my-container
image: nginx
ports:
- containerPort: 80
resources:
requests:
memory: "64Mi"
cpu: "20m"
limits:
memory: "64Mi"
cpu: "20m"
→"
*
Burstab ✓
☆ At least one container in the Pod must have
a
memory or CPU request defined
apiVersion: v1
kind: Pod
metadata:
name: my-app
spec:
containers:
- name: my-container
image: nginx
ports:
- containerPort: 80
resources:
requests:
memory: "64Mi"
limits:
memory: "128Mi"
◦ BestEffort
★ If request and limit are not set, the Pod is classified as BestEffort and is the first to be killed when the Node runs out of resources
? How to
} Get the QoS class of my Pod
Network Policies

→ Basic rule in Kubernetes: every Pod can talk to everyone
→ Network Policies restrict this
→ Scoped by namespaces
→ Limited to IP addresses (no domain name)
→ Pre-requisite: use a CNI plugin which supports Network Policies
? How to
① Allow Ingress & Egress traffic for Pods with the label role: my-role

apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: test-network-policy
  namespace: my-namespace
spec:
  podSelector:        } If podSelector is empty:
    matchLabels:      } select all Pods in the namespace
      role: my-role
  policyTypes:
  - Ingress
  - Egress
  ingress:
  - from:
    - podSelector:
        matchLabels:
          role: allow-role
    ports:
    - protocol: TCP
      port: 6379
} Allow connections from all Pods with the label role: allow-role to communicate to our Pods
② Allow Egress traffic from our Pods

  egress:
  - to:
    - podSelector:
        matchLabels:
          role: allow-to-role
    ports:
    - protocol: TCP
      port: 5978
} Allow connections for our Pods to communicate to Pods with the label role: allow-to-role
③ Deny all Ingress traffic to our Pods
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
name: default-deny-ingress
namespace: my-namespace
spec:
  podSelector: {}     } all Pods in my-namespace
  policyTypes:
  - Ingress
④ Allow all Egress traffic from our Pods
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
name: allow-all-egress
namespace: my-namespace
spec:
podSelector: {}
policyTypes:
- Egress
egress:
  - {}     } allow all
⚠ In Ingress & Egress, several selectors exist:
◦ podSelector
◦ namespaceSelector
◦ ipBlock
RBAC

→ Role Based Access Control
→ Useful when you want to control what your users can do, and for which kind of resources in your cluster
✓ create ✓ list ✓ get ✓ update ✓ delete ✓ watch ...
Role
→ Contains rules: verbs allowed on specific resources
→ Sets permissions within a given namespace

ClusterRole
→ Bigger than a Role:
◦ cluster-scoped resources (Nodes, ...)
◦ namespaced resources (like Pods) across all namespaces
◦ non-resource endpoints (/healthz)
RoleBinding
→ Links a subject and a Role
→ Grants the permissions defined in a Role to a user
→ A RoleBinding can reference any Role in the same namespace
→ The subject can be a user, a group or a ServiceAccount
→ A RoleBinding should be defined per namespace

[Illustration: in my-namespace, a RoleBinding gives my-user the view rights defined in a Role]
ClusterRoleBinding
→ Like a RoleBinding, but for all namespaces (cluster-wide)

[Illustration: a ClusterRoleBinding allows any user in the group my-group to read secrets in any namespace, through the ClusterRole read-secret]
? How to
} Define a Role and a RoleBinding so my-user can read Pods in my-namespace
apiVersion:
rbac.authorization.k8s.io/v1
kind: Role
metadata:
namespace: my-namespace
name: pod-reader
rules:
- apiGroups: [""]
resources: ["pods"]
verbs: ["get", "watch", "list"]
} Grant the read access to a user
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
name: read-pods
namespace: my-namespace
subjects:
- kind: User
name: my-user
apiGroup: rbac.authorization.k8s.io
roleRef:
kind: Role
name: pod-reader
apiGroup: rbac.authorization.k8s.io
[Illustration: a ClusterRoleBinding links my-user to the ClusterRole secret-reader]
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
name: secret-reader
  # No namespace in metadata: a ClusterRole is not namespaced
rules:
- apiGroups: [""]
resources: ["secrets"]
verbs: ["get", "watch", "list"]
④ Bind the ClusterRole to a group
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
name: read-secrets-global
subjects:
- kind: Group
name: my-group
apiGroup: rbac.authorization.k8s.io
roleRef:
kind: ClusterRole
name: secret-reader
apiGroup: rbac.authorization.k8s.io
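A ClusterRole can also be reused by a namespaced RoleBinding; the permissions then apply only inside that namespace. This is a common pattern, sketched here as an assumption rather than taken from the book:

```yaml
# Hypothetical example: reuse the cluster-wide "secret-reader"
# ClusterRole, but grant it to my-user in my-namespace only.
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: read-secrets-in-my-namespace
  namespace: my-namespace
subjects:
- kind: User
  name: my-user
  apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: ClusterRole
  name: secret-reader
  apiGroup: rbac.authorization.k8s.io
```

This avoids duplicating the same Role definition in every namespace.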
Bind a ClusterRole to a ServiceAccount:
ServiceAccount -> RoleBinding -> ClusterRole (secret-reader),
in the namespace test-ns
① Create a namespace "test-ns"
$ kubectl create namespace test-ns

② Create a ServiceAccount "my-sa-test" in this namespace
$ kubectl create serviceaccount my-sa-test -n test-ns

③ Create a RoleBinding that grants secret-reader to my-sa-test
$ kubectl create rolebinding my-sa-test-rolebinding -n test-ns \
  --clusterrole=secret-reader --serviceaccount=test-ns:my-sa-test
④ Create a
Kubeconfig file for the created
Service Account
$ export TEAM_SA=my-sa-test
$ export NAMESPACE_SA=test-ns
⑤ Execute kubectl commands in the cluster
as the ServiceAccount
Pod Security Policy (PSP)

-> Set of conditions/rules a Pod must follow in order to
be accepted by the cluster
-> A policy can contain rules on:
* privileged Pods
* types of volumes
* read-only filesystem
* authorized kernel capabilities
* privilege elevation (= sudo)
* hostPort
* UID/GID used by the Pod
* Seccomp/AppArmor profiles
-> By default, a Kubernetes cluster accepts all Pods;
you need to enable the PodSecurityPolicy admission
controller to enforce policies
-> A Pod is linked to a PSP thanks to its ServiceAccount:
a (Cluster)Role grants permission to "use" the PSP,
and a RoleBinding (per namespace) or ClusterRoleBinding
(cluster-wide) associates those permissions to the desired SA
-> Only one PSP can be applied to a Pod
-> Non-mutating policies are preferred: a non-mutating
policy validates the Pod without changing it, whereas a
mutating policy changes the Pod before it runs on the Node
-> Order when several policies exist:
① non-mutating policies are applied first
② if several mutating policies match, choose the first
one ordered by name (alphabetically)
How to?
① Define a policy named "my-psp" that prevents the
creation of privileged Pods

apiVersion: policy/v1beta1
kind: PodSecurityPolicy
metadata:
  name: my-psp
spec:
  privileged: false  # don't allow the creation of privileged Pods
  seLinux:
    rule: RunAsAny
  supplementalGroups:
    rule: RunAsAny   # no restriction on the other control aspects
  runAsUser:
    rule: RunAsAny
  fsGroup:
    rule: RunAsAny
  volumes:
  - '*'              # allow access to all available volume types
② Create a ClusterRole
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
name: my-cluster-role
rules:
- apiGroups:
  - policy
  resources:
  - podsecuritypolicies  # which is a PSP
  verbs:
  - use                  # grant access to use...
  resourceNames:
  - my-psp               # ...the my-psp policy
③ Create a RoleBinding linked to a desired ServiceAccount (SA)

# Bind the ClusterRole to the desired set of service accounts.
# Policies should typically be bound to service accounts in a namespace.
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
name: my-rolebinding
namespace: my-namespace
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: ClusterRole
name: my-cluster-role
subjects:
- kind: ServiceAccount     # bind my-cluster-role...
  name: default            # ...to the "default" SA
  namespace: my-namespace  # in my-namespace
OR
③bis) Create a RoleBinding linked to all ServiceAccounts
in my-namespace
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
name: my-rolebinding
namespace: my-namespace
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: ClusterRole
name: my-cluster-role
subjects:
- apiGroup: rbac.authorization.k8s.io
kind: Group
  name: system:serviceaccounts
④ Enable the PodSecurityPolicy admission controller
in an existing cluster (for example on GKE)
Node/Pod Affinity
↳

-> By default, when you deploy a Pod, Kubernetes schedules
it on any Node with enough free resources

-> Several ways exist to handle Node and Pod affinities:
o nodeSelector
o Node Affinity
o Pod Affinity
o Pod Anti-Affinity
o nodeSelector
-> By default, each Node has labels, but you can add
your own labels to force a Pod to run on a chosen Node
o Node Affinity
-> Similar to nodeSelector, but with a more expressive
syntax (match criterias)
-> Allowed operators: In, NotIn, Exists, DoesNotExist, Gt, Lt
-> If the label on a Node changes, nothing happens: the
Pod still runs on the Node
o Pod Affinity
-> Allows to run a Pod on the same Node as another Pod
(useful when Pod A and Pod B need to be together)
o Pod Anti-Affinity
-> Allows to run the replicas of a Deployment each on a
different Node
(useful to distribute your Pods on Nodes: if one Node
dies, the app still runs)
How to?
① List all Nodes with their labels
$ kubectl get nodes --show-labels

② Create a Pod that will be scheduled on a Node with a
given label

apiVersion: v1
kind: Pod
metadata:
  name: my-pod
spec:
  containers:
  - name: nginx
    image: nginx
  nodeSelector:
    node: my-node
③ Create a Pod with Node Affinity, that will run on a
Node that has a specific label, if possible
apiVersion: v1
kind: Pod
metadata:
name: my-pod-with-node-affinity
spec:
affinity:
nodeAffinity:
preferredDuringSchedulingIgnoredDuringExecution:
- weight: 1
preference:
matchExpressions:
- key: app
operator: In
values:
- my-app
containers:
- name: my-container
image: busybox
④ Create a Pod that will run next to Pods with the
label "friend=true"
apiVersion: v1
kind: Pod
metadata:
name: my-pod-with-pod-affinity
spec:
affinity:
podAffinity:
requiredDuringSchedulingIgnoredDuringExecution:
- labelSelector:
matchExpressions:
- key: friend
operator: In
values:
          - "true"
topologyKey: topology.kubernetes.io/zone
containers:
- name: my-container
image: busybox
⑤ Create a Deployment with 3 replicas;
each Pod must not run on the same Node.
apiVersion: apps/v1
kind: Deployment
metadata:
name: my-db
spec:
replicas: 3
selector:
matchLabels:
app: my-app
template:
metadata:
labels:
app: my-app
spec:
affinity:
podAntiAffinity:
requiredDuringSchedulingIgnoredDuringExecution:
- labelSelector:
matchExpressions:
- key: app
operator: In
values:
- my-app
topologyKey: kubernetes.io/hostname
containers:
- image: my-image:1.0
name: my-container
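As an alternative to hard anti-affinity, recent Kubernetes versions can spread replicas with topologySpreadConstraints. This is a sketch under that assumption, not taken from the book:

```yaml
# Hypothetical example: spread my-app replicas evenly across
# Nodes, tolerating at most 1 Pod of skew between any two Nodes.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-db
spec:
  replicas: 3
  selector:
    matchLabels:
      app: my-app
  template:
    metadata:
      labels:
        app: my-app
    spec:
      topologySpreadConstraints:
      - maxSkew: 1
        topologyKey: kubernetes.io/hostname
        whenUnsatisfiable: DoNotSchedule
        labelSelector:
          matchLabels:
            app: my-app
      containers:
      - image: my-image:1.0
        name: my-container
```

Unlike podAntiAffinity, this still balances Pods when there are more replicas than Nodes.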
Node
↳
-> Pods run on a Node

-> Overview: each Node runs a kubelet, a container
runtime and kube-proxy

↳ Container runtime:
① pulls the container image
② unpacks the image into a container
③ runs the app
Node lifecycle
↳
-> The Node Controller checks the state of each Node:
* Healthy & ready to accept Pods?
* Memory pressure?
* Disk pressure?
* Ready to schedule Pods?

-> A Node can also have been "cordoned":
scheduling disabled
How to?
) Show more information about a Node
(conditions, capacity...)
$ kubectl describe node my-node
Node operations
↳
Why?
-> Debug issues on a Node
-> Upgrade a Node
-> Scale down the cluster
-> Reboot the Node
...
Operations: Drain / Cordon / Taint

Drain
-> Evicts all Pods from a Node & marks the Node
as unschedulable

Cordon
-> Marks the Node unschedulable (without evicting Pods)
Make the Node unschedulable:
$ kubectl cordon my-node
Taints
-> Node taints are key:value pairs associated with an effect:

* NoSchedule:
Pods that don't tolerate the taint can't be scheduled
on the Node

* PreferNoSchedule:
Kubernetes avoids scheduling Pods that don't tolerate
the taint

* NoExecute:
Pods are evicted from the Node if they are already
running, else they can't be scheduled

Add a taint to a Node:
$ kubectl taint nodes my-node my-key=my-value:NoSchedule
How to?
) Display taint information for all Nodes
$ kubectl describe nodes | grep -i taint

) Uncordon a Node
$ kubectl uncordon my-node

) Tolerate a taint in a Pod
...
spec:
tolerations:
- key: "my-key"
operator: "Equal"
value: "my-value"
effect: "NoExecute"
    tolerationSeconds: 6000  # delay Pod eviction

-> Kubernetes automatically adds taints to a Node when:
* the Node is not ready
* the Node is unreachable
* out of disk
* network unavailable
...
Debugging / Troubleshooting
↳
When errors occur (Pods stuck in Pending, NotReady,
CrashLoopBackOff... status), debugging Kubernetes
resources can be hard. Some tips:

① Get information about your Pods
$ kubectl describe pod my-pod
$ kubectl logs my-pod
② Pod stuck in Pending status
-> The Pod can't be scheduled on a Node.

Solutions:
* Resource requests/limits are maybe bigger than the
cluster capacities
* Delete Pods / clean unused resources in the cluster
* Add Nodes with more capacities
* Add more Nodes
...
③ Check cluster events & error messages
$ kubectl get events
④ Pod in Waiting / ImagePullBackOff status

Questions to ask:
* Are the image name, tag & URL correct?
* Does the image exist in the Docker registry?
* Can you pull it?
* Does Kubernetes have permissions to pull the image?
* Is an imagePullSecret configured for your private
registry?
⑤ kubectl debug (since 1.18)
-> Runs an Ephemeral Container near the Pod we want
to debug.
-> The debug container can see all the processes
created by my-pod.
⑥ Pod has been restarted multiple times / OOMKilled
-> The container exceeded its memory limit and was killed.

Solution:
* Define requests and limits for memory
⑦ Pod can't start
-> When you link a Pod to a ConfigMap and/or a Secret
that doesn't exist, the Pod is not Running and you may
not know why. You can simply watch the Pod events and
logs in order to try to understand:
$ kubectl describe pod my-pod
⑧ Pod restarting over and over, so considered as unhealthy

Possible issues:
* Is the probe target accessible?
* The application takes a long time to start
⑨ Access my Pod without an external load balancer
-> Forward a port of your Pod to your computer:
$ kubectl port-forward my-pod 8080
-> And mount a tunnel to a Service directly:
$ kubectl port-forward svc/my-svc 5050:5555
Then request the port on localhost:
$ curl localhost:8080/my-api
kubectl convert
↳ (since 1.21)

-> A kubectl plugin that converts manifests from a
deprecated API version to a supported one.
How to?
) Convert my-ingress with an old API to v1
$ kubectl convert -f my-ingress.yaml --output-version networking.k8s.io/v1

) Convert my-pod to the latest version
$ kubectl convert -f my-pod.yaml
Tools
↳

kubectx
-> Manage & switch between kubectl contexts
https://github.com/ahmetb/kubectx
kubens
-> Manage & switch between namespaces
https://github.com/ahmetb/kubectx
List the namespaces:
$ kubens

Switch to a namespace:
$ kubens my-namespace
stern
-> kubectl logs on steroids: tail logs from multiple
Pods & containers at once
https://github.com/wercker/stern
krew
-> Package manager for kubectl plugins
https://github.com/kubernetes-sigs/krew

After installation, plugins are available as kubectl
subcommands, e.g.:
$ kubectl ctx
$ kubectl ns

Display installed plugins:
$ kubectl krew list
Add the "scraly" private index:
$ kubectl krew index add scraly <index-url>

Install the "season" plugin:
$ kubectl krew install season

Upgrade the "season" plugin:
$ kubectl krew upgrade season
"
Et
le 9s
ˌȥƥËᵉËÆ*ËÊÆ
LE
https://github.com/derailed/k9s
Launch fr95
$ k9s
Run k9s in a namespace:
$ k9s -n my-namespace
k3s
-> Lightweight Kubernetes clusters
https://k3s.io
skaffold
-> Handles the workflow for building, pushing and
deploying your application
https://skaffold.dev

Init your project (and create skaffold.yaml):
$ skaffold init

Deploy the image:
$ skaffold deploy
Kustomize
-> Customize Kubernetes manifests without templates
-> Built into kubectl since v1.14
-> Overlays (env, replicas... mix-ins) are applied
on top of a Base
How to?
① Define a deployment.yaml file
apiVersion: apps/v1
kind: Deployment
metadata:
name: kustomize-app
labels:
app: kustomize
spec:
replicas: 1
selector:
matchLabels:
app: kustomize
template:
metadata:
labels:
app: kustomize
spec:
containers:
- name: app
image: gcr.io/foo/kustomize:latest
ports:
- containerPort: 8080
name: http
protocol: TCP
② Create a file called custom-env.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
name: kustomize-app
labels:
app: kustomize
spec:
replicas: 1
selector:
matchLabels:
app: kustomize
template:
metadata:
labels:
app: kustomize
spec:
containers:
- name: app
        env:  # what we want to add to our base
        - name: MESSAGE_BODY
          value: by Kustomize
        - name: MESSAGE_FROM
          value: overlay 'custom-env'
③ Create a kustomization.yaml file
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
- ../../base          # our base deployment
patchesStrategicMerge:
- custom-env.yaml     # patches to apply
④ Apply
$ kubectl apply -k /src/main/k8s/overlay/prod
Kustomize tips

① Set/change the image tag:
$ kustomize edit set image my-image=my-repo/project/my-image:tag
② Create a secret:
$ kustomize edit add secret my-secret --from-literal=password=toto

The generated kustomization.yaml:
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
bases:
- ../../base
patchesStrategicMerge:
- custom-env.yaml
- replica.yaml
secretGenerator:
- literals:
- password=toto
name: my-secret
type: kubernetes.io/dockerconfigjson
③ Add a prefix and a suffix in resources' names
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
bases:
- ../../base
namePrefix: my-
nameSuffix: -v1
④ Create a kustomization.yaml which creates a ConfigMap
(from a file, or from literals):
configMapGenerator: configMapGenerator:
- name: my-configmap - name: my-configmap2
files: literals:
- application.properties - key=value
⑤ Merge several files in one ConfigMap
configMapGenerator:
- name: my-configmap
behavior: merge
files:
- application.properties
- secret.properties
generatorOptions:
disableNameSuffixHash: true
generatorOptions changes the behavior of all generators
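generatorOptions is not the only cross-cutting field: a kustomization.yaml can also force a namespace and common labels onto every resource. This is a minimal sketch assuming standard kustomize fields, not an example from the book:

```yaml
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
- ../../base
namespace: my-namespace  # force all resources into this namespace
commonLabels:
  team: my-team          # added to every resource (and its selectors)
```

Note that commonLabels is also injected into selectors, so it should be set before the first deployment of the resources.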
Kubeseal
-> Encrypt your Secrets

How to?
① Create a Secret & seal it with kubeseal
② When a new SealedSecret appears in the cluster, the
controller unseals it and creates the my-secret Secret.
Trivy
-> A vulnerability scanner
https://github.com/aquasecurity/trivy

-> Scans:
* OS packages
* application dependencies
(npm, yarn, cargo, pipenv, composer, bundler...)

-> Simple and fast scanner
-> Easy integration in CI
-> A Kubernetes operator exists
Scan an image:
$ trivy image python:3.4-alpine

Scan a cluster namespace & display a summary report:
$ trivy k8s -n default --report summary
Velero
-> Backup & restore Kubernetes resources and
persistent volumes
https://velero.io

Create a full backup:
$ velero backup create my-backup

Schedule a daily backup:
$ velero schedule create daily-backup --schedule="0 10 * * *"
Tips:
-> Create a backup before a cluster upgrade
-> It's a good practice to backup daily/weekly your
resources in sensitive clusters
-> Add a revisions feature in the backup location bucket
Popeye
-> A Kubernetes cluster sanitizer
https://github.com/derailed/popeye

-> Scans the cluster and outputs a report (with a score)
-> Several ways to install it (locally, Docker,
in the cluster...)

-> Can scan only a list of specified resources

Scan with a config file:
$ popeye -f spinach.yaml
Kyverno
-> A policy engine designed for Kubernetes
https://kyverno.io

-> Policies can:
o validate
o mutate
o generate
resources

-> Kyverno installs several webhooks:
NAME                                                                                                 WEBHOOKS  AGE
validatingwebhookconfiguration.admissionregistration.k8s.io/kyverno-policy-validating-webhook-cfg    1         52s
validatingwebhookconfiguration.admissionregistration.k8s.io/kyverno-resource-validating-webhook-cfg  2         52s
-> Kyverno runs as an admission controller inside your
Kubernetes cluster and intercepts resource requests.
How to?
① Create a policy that disallows deployment of Pods in
the "default" namespace
apiVersion: kyverno.io/v1
kind: ClusterPolicy
metadata:
name: disallow-default-namespace
spec:
validationFailureAction: enforce
rules:
- name: validate-namespace
match:
resources:
kinds:
- Pod
validate:
message: "Using \"default\" namespace is not
allowed."
pattern:
metadata:
namespace: "!default"
- name: require-namespace
match:
resources:
kinds:
- Pod
validate:
message: "A namespace is required."
pattern:
metadata:
namespace: "?*"
② Create a policy that generates a ConfigMap in all
new namespaces

③ Create a policy that adds the label app=my-awesome-app
to Pods, Services, ConfigMaps & Secrets in the team-a
namespace:
apiVersion: kyverno.io/v1
kind: ClusterPolicy
metadata:
name: add-label
spec:
rules:
- name: add-label
match:
resources:
kinds:
- Pod
- Service
- ConfigMap
- Secret
namespaces:
- team-a
mutate:
patchStrategicMerge:
metadata:
labels:
app: my-awesome-app
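The "generate" rule type mentioned earlier could look like the following sketch (hypothetical resource names, assuming Kyverno's generate rule syntax; the book's own example was lost in the scan):

```yaml
# Hypothetical example: when a Namespace is created,
# generate a ConfigMap named default-config inside it.
apiVersion: kyverno.io/v1
kind: ClusterPolicy
metadata:
  name: generate-default-configmap
spec:
  rules:
  - name: generate-configmap
    match:
      resources:
        kinds:
        - Namespace
    generate:
      kind: ConfigMap
      name: default-config
      namespace: "{{request.object.metadata.name}}"
      data:
        data:
          my-key: my-value
```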
④ Display existing policies for all namespaces:
$ kubectl get cpol -A
NAME                         BACKGROUND   ACTION    READY
add-label                    true         audit     true
disallow-default-namespace   true         enforce   true
-> A graphical policy reporter also exists:
https://github.com/Kyverno/policy-reporter
Kubernetes 1.18
↳

GENERAL
o First release of 2020
o Its own logo

INGRESS CLASS
New IngressClass resource and ingressClassName field,
replacing the deprecated ingress class annotation.
KUBECTL DEBUG (Alpha)
Aim: run an ephemeral container near the Pod we want
to debug.

How? The ephemeral container shares the process
namespace with a container inside the Pod, so the
debug container can see all processes created by my-pod.
HORIZONTAL POD AUTOSCALER BEHAVIORS
Possibility to define scale-up & scale-down behaviors
in an HPA:
apiVersion: autoscaling/v2beta2
kind: HorizontalPodAutoscaler
metadata:
name: my-hpa
spec:
scaleTargetRef:
apiVersion: apps/v1
kind: Deployment
name: my-deploy
minReplicas: 3
maxReplicas: 10
targetCPUUtilizationPercentage: 80
behavior:
scaleUp:
policies:
- type: Percent
value: 90
periodSeconds: 15
scaleDown:
policies:
      # scale down 1 Pod every 10 min
      - type: Pods
        value: 1
        periodSeconds: 600
IMMUTABLE SECRET & CONFIGMAP (Alpha)
Pre-requisites: activate the feature gate
ImmutableEphemeralVolumes=true

How? Set:
immutable: true
KUBECTL RUN
The kubectl run command now creates only a Pod:
one command = one usage.
IMMUTABLE SECRET & CONFIGMAP
Protects against accidental updates that can cause
downtime & optimizes performance, because the control
plane doesn't have to check for updates.
apiVersion: v1
kind: ConfigMap
metadata:
name: my-cm
immutable: true
data:
my-key: my-value
Kubernetes 1.19
↳

GENERAL
SECCOMP (GA)
Seccomp (secure computing mode):
o Provides sandboxing
o Reduces the actions that a process can perform
-> Keeps your Pods secure
apiVersion: v1
kind: Pod
metadata:
name: my-audit-pod
labels:
app: my-audit-pod
spec:
securityContext:
seccompProfile:
type: Localhost
localhostProfile: profiles/audit.json
containers:
- name: test-container
image: hashicorp/http-echo:0.2.3
args:
- "-text=just made some syscalls!"
securityContext:
allowPrivilegeEscalation: false
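Instead of a custom local profile, a Pod can request the container runtime's default seccomp profile. A minimal sketch using the same securityContext field as above (not an example from the book):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: my-default-seccomp-pod
spec:
  securityContext:
    seccompProfile:
      type: RuntimeDefault  # use the runtime's default profile
  containers:
  - name: test-container
    image: hashicorp/http-echo:0.2.3
    args: ["-text=hello"]
```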
Kubernetes 1.20
↳

GENERAL
o 42 enhancements
o Docker support in Kubelet is now deprecated
EXEC PROBE TIMEOUT HANDLING
Fixed a bug in liveness/readiness exec probe
timeouts: timeoutSeconds is now honored.
containers:
- name: my-container
image: my-image:1.0
livenessProbe:
exec:
command:
      - cat
- /tmp/healthy
initialDelaySeconds: 5
periodSeconds: 5
timeoutSeconds: 5
STARTUP PROBE (GA)
Holds off all the other probes until the Pod has
finished its startup: a solution for slow-starting Pods.
apiVersion: v1
kind: Pod
metadata:
name: my-pod
labels:
app: my-app
spec:
containers:
- name: my-container
image: my-image:1.0
ports:
- name: liveness-port
containerPort: 8080
livenessProbe:
httpGet:
path: /healthz
port: liveness-port
    failureThreshold: 1
    periodSeconds: 10
  startupProbe:
httpGet:
path: /healthz
port: liveness-port
failureThreshold: 30
periodSeconds: 10
Kubernetes 1.21
↳

GENERAL

PSP DEPRECATION
PodSecurityPolicy is deprecated, but remains available
and fully functional until 1.25.
https://kubernetes.io/blog/2021/04/06/podsecuritypolicy-deprecation-past-present-and-future/
CRONJOB (GA)
Introduced in 1.4, CronJobs are finally Generally
Available (GA), thanks to the new CronJob controller.
KUBECTL-
/ ↑
-
I
$ kubectl get <resource-type> <your-resource>
-o yaml
I
will no longer display managed fields
which makes reading and
parsing the output
much easier
en
250
DEFAULT CONTAINER ANNOTATION
The kubectl.kubernetes.io/default-container annotation
defines which container kubectl commands target by default:
apiVersion: v1
kind: Pod
metadata:
name: my-pod
annotations:
kubectl.kubernetes.io/default-container:
my-container-2
spec:
containers:
- name: my-container
image: my-image
- name: my-container-2
image: my-image-2
command: ["/bin/sh", "-c"]
args:
- while true; do
date >> /html/index.html;
sleep 1;
done
Kubernetes 1.22
↳

GENERAL
o 3 features deprecated
o Several APIs have been removed

MIGRATION PLUGIN
The kubectl convert plugin helps you migrate older
manifests to supported API versions.
WARNING MECHANISM
Kubernetes returns warning messages when you use a
deprecated API.
NAMESPACE LABEL
By default, Kubernetes now adds an immutable label
kubernetes.io/metadata.name to all namespaces, set to
the namespace name. It can be used with any label
selector.
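This automatic label makes namespace selection reliable, for example in a NetworkPolicy. A sketch with hypothetical names, not taken from the book:

```yaml
# Hypothetical example: allow ingress only from Pods living in
# the namespace named "trusted", matched via the automatic label.
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-from-trusted
  namespace: my-namespace
spec:
  podSelector: {}
  policyTypes:
  - Ingress
  ingress:
  - from:
    - namespaceSelector:
        matchLabels:
          kubernetes.io/metadata.name: trusted
```

Before this label existed, such selectors required manually labelling every namespace.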
Kubernetes 1.23
↳

GENERAL
o 47 enhancements

HORIZONTAL POD AUTOSCALER (GA)
The autoscaling/v2 API moves to GA
and autoscaling/v2beta1 has been deprecated.
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
name: my-hpa
spec:
  scaleTargetRef:
    apiVersion: apps/v1
kind: Deployment
name: my-deploy
minReplicas: 3
maxReplicas: 10
targetCPUUtilizationPercentage: 80
behavior:
    scaleUp:
policies:
- type: Percent
value: 90
        periodSeconds: 15
KUBECTL DEBUG (Beta)
Aim: run an ephemeral container near the Pod we want
to debug; the debug container can see all its processes.
TTL FOR JOBS (GA)
Welcome to the new field ttlSecondsAfterFinished in the
Job spec, which deletes the Job once it has finished.
No need to schedule cleanup of Jobs anymore!
apiVersion: batch/v1
kind: Job
metadata:
name: my-job-with-ttl
spec:
ttlSecondsAfterFinished: 100
template:
metadata:
name: my-job-with-ttl
spec:
containers:
- name: busybox
image: busybox
POD SECURITY (Beta)
The replacement for PSP (which will be removed in 1.25):
prevents Pods from ever having dangerous capabilities.
DUAL-STACK IPv4/IPv6 (GA)
All clusters now support IPv4 and IPv6 at the same time
(dual-stack). To use this feature, Nodes must have IPv4
& IPv6 network interfaces, so you must use a dual-stack
capable network plugin.
Kubernetes 1.24
↳

GENERAL
o 46 enhancements
o Docker support in Kubelet is now removed
(you can still build your images with Docker, which
produces OCI-compliant images)
GRPC PROBES (Beta)
Aim: configure startup/liveness/readiness probes for
gRPC applications:
apiVersion: v1
kind: Pod
metadata:
name: etcd-with-grpc
spec:
containers:
  - name: etcd
    image: k8s.gcr.io/etcd:3.5.1-0
command: [ "/usr/local/bin/etcd", "--data-dir",
"/var/lib/etcd", "--listen-client-urls", "http://
0.0.0.0:2379", "--advertise-client-urls", "http://
127.0.0.1:2379", "--log-level", "debug"]
ports:
- containerPort: 2379
livenessProbe:
grpc:
port: 2379
initialDelaySeconds: 10
LOADBALANCER CLASS (GA)
New field loadBalancerClass: choose which kind of
load balancer you want.
KUBELET METRICS
Welcome to a new metric in Kubelet to monitor your
cluster resources.
BETA APIs OFF BY DEFAULT
New beta APIs are now disabled by default and can be
enabled or disabled with feature gates.
CREATE SA TOKEN
You can now create additional API tokens for a
ServiceAccount (SA):
apiVersion: v1
kind: Secret
metadata:
name: my-sa-secret
annotations:
kubernetes.io/service-account.name: mysa
type: kubernetes.io/service-account-token
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: read-only
  namespace: default
secrets:
- name: my-sa-secret
Kubernetes 1.25
↳

GENERAL
o Several removals and novelties

PSP REMOVAL
PodSecurityPolicy is removed, due to its complexity,
in favor of Pod Security admission:
https://kubernetes.io/docs/tasks/configure-pod-container/migrate-from-psp/
USER NAMESPACES (Alpha)
Reduces vulnerabilities due to excessive privileges
given to a Pod. Welcome to a long-awaited feature:
support for user namespaces in Kubernetes!

Pre-requisites: activate the feature gate
UserNamespacesSupport=true

How to:
apiVersion: v1
kind: Pod
metadata:
name: my-pod
spec:
hostUsers: false
containers:
- name: my-container
image: my-image
POD SECURITY ADMISSION (GA)
The PSP replacement, Pod Security admission,
graduates to stable.
CONTAINER CHECKPOINTING (Alpha)
This feature allows taking a snapshot of a running
container, which can then be transferred to another
Node (e.g. for forensic analysis).
RETRIABLE JOBS (Alpha)
A new Pod failure policy helps determine if a Job
should be retried in case of failure.
KUBECTL DEBUG
Aim: run an ephemeral container near the Pod we want
to debug.
Kubernetes 1.26
↳

POD DISRUPTION BUDGET & UNHEALTHY PODS
PDBs only take Running Pods into account, but a Pod can
be Running and not Ready. A new field allows evicting
such unhealthy Pods.
Pre-requisites: activate the feature gate
PDBUnhealthyPodEvictionPolicy=true
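A PDB using this feature could look like the following sketch (the unhealthyPodEvictionPolicy field name is assumed from the upstream API, not shown on the scanned page):

```yaml
apiVersion: policy/v1
kind: PodDisruptionBudget
metadata:
  name: my-pdb
spec:
  minAvailable: 2
  selector:
    matchLabels:
      app: my-app
  # Allow eviction of Pods that are Running but not Ready:
  unhealthyPodEvictionPolicy: AlwaysAllow
```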
PROVISION FROM VOLUME SNAPSHOT ACROSS NAMESPACES (Alpha)
Aim: create a PersistentVolumeClaim in one namespace
from a VolumeSnapshot that lives in another namespace.
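Such a cross-namespace PVC could be sketched as follows (field names assumed from the alpha CrossNamespaceVolumeDataSource feature; hypothetical resource names, not from the book):

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: my-restored-pvc
  namespace: namespace-2
spec:
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
  dataSourceRef:
    apiGroup: snapshot.storage.k8s.io
    kind: VolumeSnapshot
    name: my-snapshot
    namespace: namespace-1  # the snapshot lives in another namespace
```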
MIXED PROTOCOLS IN LOADBALANCER SERVICE (GA)
Aim: create a Service of type LoadBalancer that has
different port definitions with different protocols:
apiVersion: v1
kind: Service
metadata:
name: ingress-nginx
namespace: ingress-nginx
labels:
app.kubernetes.io/name: ingress-nginx
app.kubernetes.io/part-of: ingress-nginx
spec:
type: LoadBalancer
ports:
- name: tcp
port: 5555
targetPort: 5555
protocol: TCP
- name: udp
port: 5556
targetPort: 5556
protocol: UDP
selector:
app.kubernetes.io/name: ingress-nginx
app.kubernetes.io/part-of: ingress-nginx
WINDOWS PRIVILEGED CONTAINERS (GA)
Ability to run privileged (HostProcess) containers
on Windows Nodes.

CRI v1alpha2 REMOVAL
The kubelet now requires a container runtime that
supports CRI v1 (containerd 1.6+).
Kubernetes 1.27
↳

MOVING THE REGISTRY
registry.k8s.io replaces k8s.gcr.io: images are served
from the registry closest to you, no matter which Cloud
provider you use.
VOLUME GROUP SNAPSHOT (Alpha)
Aim: create snapshots across multiple volumes in one
atomic operation.
Pre-requisites: activate the flag
CREATE_DELETE_GET_VOLUME_GROUP_SNAPSHOT
READWRITEONCEPOD (Beta)
Access for a single Pod: allows restricting volume
access to a single Pod.
Pre-requisites: ReadWriteOncePod=true

How to:
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
name: my-pvc
spec:
accessModes:
- ReadWriteOncePod
resources:
requests:
storage: 1Gi
GRACE PERIOD CONFIGURATION FOR PROBES
The new terminationGracePeriodSeconds field in the
livenessProbe asks the Kubelet to wait before killing
the container.

How to:
apiVersion: v1
kind: Pod
…
spec:
containers:
- name: my-container
image: my-image:1.0
livenessProbe:
httpGet:
path: /healthz
port: 8080
initialDelaySeconds: 60
periodSeconds: 20
terminationGracePeriodSeconds: 60
START ORDINAL FOR STATEFULSET
Now you can define a start ordinal for the Pods of
a StatefulSet.
Pre-requisites: activate the feature gate
StatefulSetStartOrdinal=true
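A StatefulSet using a custom start ordinal could look like this sketch (the spec.ordinals.start field is assumed from the upstream feature; the book's own example did not survive the scan):

```yaml
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: my-statefulset
spec:
  serviceName: my-service
  replicas: 3
  # Pods are named my-statefulset-2, -3, -4 instead of -0, -1, -2
  ordinals:
    start: 2
  selector:
    matchLabels:
      app: my-app
  template:
    metadata:
      labels:
        app: my-app
    spec:
      containers:
      - name: my-container
        image: my-image:1.0
```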
Docker deprecation
↳ (since 1.20)

-> Docker support in Kubelet is now deprecated & will be
removed in a future release
-> Images can still be built locally with Docker &
through CI/CD, and pulled by Kubernetes: Docker produces
OCI (Open Container Initiative) images
-> Kubernetes will use containerd as CRI runtime

Previous architecture:
kubelet -> dockershim -> Docker -> containerd
Problem: Docker includes so many components (networking,
volumes, UX enhancements...)

New architecture:
kubelet -> containerd, the CRI runtime responsible for
pulling images and running containers
Glossary
↳
> CJ - CronJob
In the same collection:

-> Available on paperback on Amazon
-> YouTube: https://www.youtube.com/AurelieVache
Who am I?

Dev Rel & DevOps, 16+ years of experience
* Google Developer Expert in Cloud technologies
* CNCF Ambassador & Docker Captain
* Duchess France / Women in tech association
* Technical writer - Speaker - Sketchnoter

Contact me!
Aurélie Vache - Abstract

Understanding Kubernetes can be difficult or
time-consuming. This book tries to explain the
technology in a visual way.

Inside:
* Kubernetes components
* Resources
* Concrete examples
* Tips & Tools