AT Computing
part of the Vijfhart Group
v1a – h01 – 1
Introduction containers (1)

Conventional production environment
● all applications use same ‘ecosystem’
  ◗ filesystem
  ◗ additional storage
  ◗ network
  ◗ process numbers (PIDs)
  ◗ user identities
  ◗ hardware resources

[diagram: applications running side by side on the host system, sharing its filesystem]

● disadvantages
  ◗ laborious (de)installation
  ◗ compromised application might
    access all data
    manipulate entire network stack
    compromise other applications
  ◗ application might overload resources
Introduction containers (2)

Container: isolated ecosystem for application
● image with own mini-filesystem containing
  ◗ programs
  ◗ configuration files
  ◗ data files
  ◗ ...
● own network
● own PID numbers
● cpu/memory/disk limitations

[diagram: containers with private mini-filesystems running on top of the host system's filesystem]
Introduction containers (3)

Image
● contains mini-filesystem and metadata
● developer (‘dev’)
  ◗ builds image for application
  ◗ ships image to registry
● operations (‘ops’)
  ◗ pulls image from registry
  ◗ uses image to activate container
    in any environment: test, acceptance, production
    on any operating system: Linux, Windows, macOS
    on any platform: physical host, virtual machine, cloud
Workshop “Kubernetes”
Docker summary
Docker summary
Docker container
● run application in isolated environment
◗ own mini-filesystem
◗ own network
◗ own private PID numbers
◗ own mounted filesystems
◗ own users
◗ separate root privileges
◗ cpu/memory limitation
● lightweight
● simple
● large community (lots of images)
More info: www.docker.com
Docker summary – virtual machines versus containers

Virtual machine                          Docker container
● application                            ● application
● libraries                              ● libraries
● full command set                       ● limited command set
● full operating system (kernel)         ● no private kernel
                                         ● container process is
                                           native process for host!

[diagram: a hypervisor hosting VMs (each with apps, local libraries and a guest kernel) versus the Docker daemon running containers, whose apps use local libraries but share the host kernel]
$ cat index.html
<h1> Message from container! </h1>

● list images

$ docker images
REPOSITORY          TAG      IMAGE ID       CREATED             SIZE
atcomp/apachetest   latest   5d3b567581df   About an hour ago   268.1 MB
docker.io/ubuntu    latest   104bec311bcd   3 months ago        128.9 MB
Docker summary – run from custom image

Start container from custom image
● run custom container
  ◗ publish port
  ◗ detached: run in background
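The annotations above (publish port, detached) correspond to flags of docker run. A sketch, using the atcomp/apachetest image from the earlier slides; the host port 8080 is an assumption:

```shell
# -d: detached, run container in background
# -p 8080:80: publish container port 80 on host port 8080 (host port assumed)
$ docker run -d -p 8080:80 atcomp/apachetest

# the web server is now reachable via the published port on the host
$ curl localhost:8080
```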
Docker summary – run container

Run Docker image
● add new filesystem layer on top of image
  ◗ read-write
  ◗ stores all modifications during execution
● architecture aims at stateless applications
  ◗ no need to preserve data between sessions
  ◗ example: static website with Apache
● container
  ◗ preferably runs one application
  ◗ might run small script (minutes) or
    server task (months)

[diagram: docker run adds a read-write container layer on top of the read-only custom image]
Docker summary – registries

Docker registry: image store
● server(s) containing images
  ◗ multiple versions
● possibilities
  ◗ Docker Hub (100,000+)
  ◗ in-company registry
● images can be
  ◗ pushed
    stored with docker push
  ◗ pulled
    explicitly with docker pull
    implicitly on initial use with docker run
    implicitly on initial use with FROM in Dockerfile

[diagram: docker push uploads the custom image atcomp/apachetest to the registry; docker pull downloads it]
Workshop “Kubernetes”
Introduction
What is Kubernetes?
Kubernetes (‘helmsman’) a.k.a. K8s
● combines various hosts into cluster
◗ scalability
◗ reliability (failover)
● orchestrates containers
◗ activate
◗ monitor
◗ terminate
Kubernetes orchestration

Kubernetes orchestration
● introduced in 2014, inspired by Google's Borg
● maintained by
  Cloud Native Computing Foundation (CNCF)
● open source
● concept
  ◗ cluster needs at least one head node,
    formerly called master node
  ◗ other hosts in cluster are worker nodes
    only run container instances
    formerly called minions

[diagram: hosts A, B and C combined into a cluster with one head node]
User interface
User interfaces
● command line interface: kubectl subcommand object [options]
◗ subcommands
create, modify and delete objects: create, apply, delete
query current state of objects: get, describe
other actions, like: scale, logs, exec, .....
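The subcommands above combine with object names as in this sketch; the manifest apa.yml and pod name apatest are taken from a later slide:

```shell
$ kubectl create -f apa.yml        # create object from manifest file
$ kubectl get pods                 # query current state of all pods
$ kubectl describe pod apatest     # detailed state of one object
$ kubectl delete pod apatest       # delete object
```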
Kubernetes architecture

[diagram: the control plane on the head node (API server with RESTful API, scheduler, controller manager, etcd) and worker nodes (kubelet, kube-proxy, container runtime); kubectl clients talk to the API server]

● API server: controls entire cluster
● scheduler: schedules containers (via pods) on nodes
● controller manager: controls required number of replicas
● etcd: distributed key-value store to maintain current cluster state
● kubelet:
  ◗ pod startup & monitoring
  ◗ resource management
● kube-proxy: container exposure to network & load balancing
Pods

Pod
● conceptually smallest unit, running one application

● activate pod via manifest file apa.yml

apa.yml:
apiVersion: v1
kind: Pod
metadata:
  name: apatest
spec:
  containers:
  - name: apacont
    image: atcomp/apachetest
    ports:
    - containerPort: 80
  restartPolicy: Never

$ kubectl create -f apa.yml
pod/apatest created

● verify pod status and delete pod

$ kubectl get pods
NAME      READY   STATUS    RESTARTS   AGE
apatest   1/1     Running   0          82s

$ kubectl get pods -o wide
NAME      READY   STATUS    RESTARTS   AGE   IP             NODE ....
apatest   1/1     Running   0          83s   10.244.1.254   hostb
Kubernetes objects (2)
Kubernetes object
● has desired state and actual state (current state)
◗ control plane continuously tries to match actual state
to desired state
Kubernetes objects (3)
Kubernetes object
● can be created by using
◗ command line arguments
or
◗ manifest file
kubectl create -f manifestfile (create)
kubectl apply -f manifestfile (create/change)
Labels

Object labels
● every object has unique name
● additionally, labels can be assigned, to be used for
  ◗ selection on command line with -l flag
    # kubectl get pods -l app=webserver
    # kubectl delete all -l app=webserver
  ◗ refer from one object to another object
    e.g. to assign Service object to Pod object
● view object labels
  # kubectl get pods --show-labels
  NAME      READY   STATUS    ....   LABELS
  apatest   1/1     Running          app=webserver

apa.yml:
apiVersion: v1
kind: Pod
metadata:
  name: apatest
  labels:
    app: webserver
spec:
  ....

apa-svc.yml:
apiVersion: v1
kind: Service
metadata:
  name: webservice
spec:
  ports:
  - port: 80
  selector:
    app: webserver
Namespaces
Namespaces: subdivide cluster into various virtual clusters
● per project, per application, per developer team, per department, per ....
$ kubectl create ns devel
namespace/devel created
$ kubectl get ns
NAME STATUS AGE
default Active 278d
devel Active 14s
....
● allow
◗ certain actions on certain objects only – defined per user
◗ separate scope for object names
◗ limitation on resource utilization (cpu, memory, storage, number of pods, ...)
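Objects are then created and queried per namespace with the -n flag; a short sketch using the devel namespace created above and the apa.yml manifest from an earlier slide:

```shell
$ kubectl create -f apa.yml -n devel     # create pod in namespace devel
$ kubectl get pods -n devel              # list pods in devel only
$ kubectl get pods --all-namespaces      # list pods across all namespaces
```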
Workshop “Kubernetes”
Controllers
Controllers – an introduction

Controllers
● pod ('naked pod', 'bare pod') is not self-healing

● part of controller manager in control plane

● manifest file of controller requires
  ◗ controller-specific definitions
  ◗ template of pod (to be controlled)

apiVersion: apps/v1
kind: somecontroller
metadata:
  name: sleeper
  ....
spec:
  controller-specific stuff...
  template:
    metadata:
      name: snorepod
    spec:
      containers:
      - name: snorecont
        image: ubuntu
        command: ["sleep", "60"]

[diagram: scheduler, controller manager and API server in the control plane]
Controller types
Controller types
● ReplicaSet (rs) – preferably combine with Deployment
◗ pod replicas and restart of failing pods
● Deployment – recommended
◗ pod replicas by using ReplicaSet
◗ rolling updates and rollbacks
Deployment – create

Deployment object
● definition implies ReplicaSet and Pod

● create deployment

$ kubectl create -f apa-deploy.yml
deployment.apps/apadep created

$ kubectl get deployment
NAME     DESIRED   CURRENT   UP-TO-DATE   AVAILABLE ...
apadep   3         3         3            3

$ kubectl get rs
NAME                DESIRED   CURRENT   READY   AGE
apadep-7b6fc56c77   3         3         3       18s

$ kubectl get pods
NAME                      READY   STATUS    RESTARTS ...
apadep-7b6fc56c77-6ldcv   1/1     Running   0
apadep-7b6fc56c77-bqxhr   1/1     Running   0
apadep-7b6fc56c77-hk7qp   1/1     Running   0

apa-deploy.yml:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: apadep
  labels:
    app: webserver
spec:
  replicas: 3
  selector:
    matchLabels:
      app: apapod
  template:
    metadata:
      labels:
        app: apapod
    spec:
      containers:
      - name: apacont
        image: atcomp/apachetest:1.4
        ports:
        - containerPort: 80

[diagram: Deployment (image: ubuntu:16.10) manages a ReplicaSet (replicas: 3, image: ubuntu:16.10), which manages the pods]
Deployment – rolling update
Rolling update
● executed when pod specification changes, like new image
$ kubectl set image deployment/apadep apacont=atcomp/apachetest:1.5
deployment.apps/apadep image updated
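The progress of a rolling update can be followed, and a rollback performed, with the kubectl rollout subcommand:

```shell
$ kubectl rollout status deployment/apadep     # follow update progress
$ kubectl rollout history deployment/apadep    # list revisions
$ kubectl rollout undo deployment/apadep       # roll back to previous revision
```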
Deployment – scaling
Deployment – scaling
● modify number of replicas
# kubectl scale --replicas=2 deployment/apadep
deployment.extensions/apadep scaled
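Besides manual scaling, a deployment can be scaled automatically on cpu load; a sketch, assuming resource metrics are available in the cluster:

```shell
# keep between 2 and 5 replicas, targeting 80% cpu utilization
# kubectl autoscale deployment/apadep --min=2 --max=5 --cpu-percent=80
```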
Storage
Kubernetes Volumes

Storage
● stateless pod – preferred!
  ◗ no need to preserve data
  ◗ container storage (filesystem) on local disk
● stateful pod

[diagram: hosts A, B and C, each with local storage]
Volumes

Volumes
● have specific type
● have explicit lifetime
  (life span of pod, permanent, ...)
● in pod manifest
  spec.volumes:
    provided volumes of certain type
  spec.containers.volumeMounts:
    mount point in container

apiVersion: v1
kind: Pod
metadata:
  name: ....
spec:
  containers:
  - name: ....
    image: ....
    volumeMounts:
    - name: myvol
      mountPath: /mnt
  volumes:
  - name: myvol
    volumetype
Volumes – shared between containers in pod

Shared volume per pod
● pod may consist of various containers
● every container uses its private filesystem
  – when container in pod crashes: data loss
  – containers do not share private filesystem
  solution: emptyDir volume
  ◗ volume within pod for all its containers
  ◗ initially empty
  ◗ life span of volume is life span of pod
  ◗ example:

apadyn-pod.yml:
apiVersion: v1
kind: Pod
metadata:
  name: apadyn
  labels:
    app: webserver
spec:
  initContainers:
  - name: apaprep
    image: alpine
    command:
    - sh
    - -c
    - echo Apache started at $(date) > /mnt/index.html
    volumeMounts:
    - name: sharedhtml
      mountPath: /mnt
  containers:
  - name: apacont
    image: atcomp/apachetest
    ports:
    - containerPort: 80
    volumeMounts:
    - name: sharedhtml
      mountPath: /var/www/html
  volumes:
  - name: sharedhtml
    emptyDir: {}

$ kubectl create -f apadyn-pod.yml

$ kubectl get pod/apadyn -o wide
NAME     READY   STATUS    ...   IP             NODE
apadyn   1/1     Running         10.244.2.239   hostc

$ curl 10.244.2.239
Apache started at Wed Aug 14 09:45:30 UTC ...

[diagram: two containers in one pod, both mounting the shared volume]
Volumes – shared between pods on same node

Persistent shared volume on node
● persistent storage in filesystem of host
  ◗ notice: only host on which pod is created!
  ◗ read/write access
  solution: hostPath volume

● example: run pod on specific host

$ kubectl get nodes --show-labels
NAME    STATUS   ROLES    ...   LABELS
hosta   Ready    master         kubernetes.io/hostname=hosta
hostb   Ready    <none>         kubernetes.io/hostname=hostb
hostc   Ready    <none>         kubernetes.io/hostname=hostc

$ kubectl create -f hostpath-pod.yml

$ kubectl get pod/apahost -o wide
NAME      READY   STATUS    RESTARTS   AGE   IP             NODE
apahost   1/1     Running   0          19m   10.244.2.240   hostc

hostpath-pod.yml:
apiVersion: v1
kind: Pod
metadata:
  name: apahost
  labels:
    app: webserver
spec:
  nodeSelector:
    kubernetes.io/hostname: hostc
  containers:
  - name: apatest
    image: atcomp/apachetest
    ports:
    - containerPort: 80
    volumeMounts:
    - name: hosthtml
      mountPath: /var/www/html
  volumes:
  - name: hosthtml
    hostPath:
      path: /var/www/html
      type: Directory
Volumes – shared between nodes

External storage
● can be
  ◗ network-based (e.g. NFS)
  ◗ cloud-based (e.g. Google, Amazon or Azure)

● example: run pod on GCE Persistent Disk
  ◗ create Persistent Disk

# gcloud compute disks create \
    --size=100GB --zone=europe-west4-a my-pd

cloud-pod.yml:
apiVersion: v1
kind: Pod
metadata:
  name: datapod
spec:
  containers:
  - image: alpine
    name: mycont
    volumeMounts:
    - name: allmine
      mountPath: /mydisk
  volumes:
  - name: allmine
    gcePersistentDisk:
      pdName: my-pd
      fsType: ext4
Persistent Volumes – static provisioning (1)

Persistent volumes – static allocation
● managed by
  ◗ PersistentVolume (PV)
    piece of storage provisioned by administrator
    example types: NFS, iSCSI
  ◗ PersistentVolumeClaim (PVC)
    request for storage by user
    to be mounted in pod
    specific properties can be defined, like
    size, access mode, performance, ....
● binding of PVC to PV
  ◗ 1-to-1 mapping
  ◗ PVC state 'Pending' if no suitable PV available

[diagram: a pod mounts a volume through a PersistentVolumeClaim, which binds to a PersistentVolume backed by NFS, iSCSI or cloud storage]
Persistent Volumes – static provisioning (2)

Example persistent volume
● create PV of 1GiB based on NFS
● request PVC of 500MiB

pub-pv.yml:
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pub-pv
spec:
  storageClassName: nfspool
  accessModes:
  - ReadWriteMany
  capacity:
    storage: 1Gi
  nfs:
    server: nasi
    path: /nfs/Public
    readOnly: false

$ kubectl create -f pub-pv.yml
persistentvolume/pub-pv created

$ kubectl get pv
NAME     CAPACITY   ACCESS ...   STATUS      CLAIM
pub-pv   1Gi        RWX          Available

pub-pvc.yml:
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: pub-pvc
spec:
  storageClassName: nfspool
  accessModes:
  - ReadWriteMany
  resources:
    requests:
      storage: 500Mi

$ kubectl create -f pub-pvc.yml
persistentvolumeclaim/pub-pvc created

$ kubectl get pvc
NAME      STATUS   VOLUME   CAPACITY   ACCESS ...
pub-pvc   Bound    pub-pv   1Gi        RWX

$ kubectl get pv
NAME     CAPACITY   ACCESS ...   STATUS   CLAIM
pub-pv   1Gi        RWX          Bound    default/pub-pvc
Persistent Volumes – static provisioning (3)

Example persistent volume – cont'd
● create deployment with pod using PVC as volume
● pod pending as long as PVC pending

pub-deploy.yml:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: pubdep
  labels:
    app: pubdep
spec:
  replicas: 1
  selector:
    matchLabels:
      app: pubpod
  template:
    metadata:
      labels:
        app: pubpod
    spec:
      containers:
      - name: sleepcont
        image: ubuntu
        command: ["sleep", "3600"]
        volumeMounts:
        - name: pubstore
          mountPath: /public
      volumes:
      - name: pubstore
        persistentVolumeClaim:
          claimName: pub-pvc

$ kubectl create -f pub-deploy.yml
deployment/pubdep created

$ kubectl get pod
NAME                      READY   STATUS ...
pubdep-5874f6fbd6-qvndg   1/1     Running

$ kubectl exec -it pubdep-..-qvndg bash
root@pubdep-5874f6fbd6-qvndg:/# ls -l /public
....
drwxrwxrwx+ ... 4096 Jun 22  2018 Documents
drwxrwxrwx+ ... 4096 Feb 11  2016 Music
drwxrwxrwx+ ... 4096 Dec 17  2016 Photos
Persistent Volumes – dynamic provisioning

Persistent volumes – dynamic provisioning
● managed by StorageClass (SC)
  ◗ dynamically allocates storage
    when claim (PVC) issued
  ◗ uses provisioner (volume plugin) for allocation

[diagram: a pod mounts a volume through a PersistentVolumeClaim, which binds to dynamically provisioned storage]
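A minimal StorageClass sketch; the class name fastpool is hypothetical and the provisioner depends on the environment (here the GCE Persistent Disk plugin, matching the cloud example earlier):

```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: fastpool
provisioner: kubernetes.io/gce-pd     # volume plugin performing the allocation
```

A PVC then selects this class with storageClassName: fastpool; a matching PV is created on demand instead of being provisioned by the administrator up front.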
Networking
Kubernetes networking

Communication possibilities
● container-to-container in same pod
  ◗ via localhost (loopback interface)
  ◗ port numbers may conflict between applications
● pod-to-external
  ◗ via Network Address Translation (NAT)
● external-to-pod
  ◗ what is IP address of destination pod?
● pod-to-pod
  ◗ what is IP address of destination pod?

[diagram: a pod on the cluster network, reaching external networks via NAT]
Services – IP address of pod

Example: access via dynamic IP address
● create deployment with 2 pod replicas

$ kubectl create -f apa-deploy.yml
deployment.apps/webserver created

$ kubectl get pods -o wide
NAME                         READY ...   IP             NODE
webserver-8654d77c94-928tw   1/1         10.244.1.130   hostb
webserver-8654d77c94-ggv8c   1/1         10.244.2.64    hostc

$ curl 10.244.1.130
<H1> Welcome from container! </H1>

● disadvantages
  ◗ no load balancing
  ◗ when pod terminates, after restart it probably
    gets other IP address

$ kubectl delete pod/webserver-....-928tw

$ kubectl get pods -o wide
NAME                         READY   IP             NODE
webserver-8654d77c94-7br4b   1/1     10.244.1.131   hostb
webserver-8654d77c94-ggv8c   1/1     10.244.2.64    hostc

apa-deploy.yml:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: webserver
  labels:
    app: apa
spec:
  replicas: 2
  selector:
    matchLabels:
      app: apa
  template:
    metadata:
      labels:
        app: apa
    spec:
      containers:
      - name: apa
        image: atcomp/apachetest
        ports:
        - containerPort: 80
Services – internal access

Service object
● gets static virtual IP (VIP) address,
  though dynamically assigned
  ◗ stable IP address to reach mortal pods
  ◗ load balancing
● contains selector
  ◗ refers to label of pod to attach to

[diagram: service 10.97.15.1 load-balancing over pods 10.244.2.64, 10.244.1.130 and 10.244.1.131; a client pod reaches the service port, the service forwards to the targetport of the pods matched by the selector]
Services – external access

External access to pod
● requires service type NodePort
  (implies service type ClusterIP)

● port range 30000-32767
  specify with keyword nodePort
  dynamically assigned when keyword nodePort omitted

[diagram: external clients reach the nodeport on any host; the NodePort service forwards via the ClusterIP service 10.97.15.1 (port) to the targetport of the pods 10.244.2.64 and 10.244.1.131 matched by the selector]

apa-svcn.yml:
apiVersion: v1
kind: Service
metadata:
  name: webserv
  labels:
    app: apa
spec:
  type: NodePort
  ports:
  - port: 80
    nodePort: 32123
    protocol: TCP
  selector:
    app: apa

● access from outside cluster

anyhost$ curl hosta:32123        (or hostb or hostc)
<H1> Welcome from container! </H1>

● access from inside cluster (similar to ClusterIP)

$ kubectl run mypod -it --restart=Never --image=alpine -- bash
root@mypod:/# curl webserv
<H1> Welcome from container! </H1>
Workshop “Kubernetes”
Questions?