
Workshop “Kubernetes”

AT Computing
part of the Vijfhart Groep

Introduction containers (1)
Conventional production environment
● all applications use same ‘ecosystem’
◗ filesystem
◗ additional storage
◗ network
◗ process numbers (PIDs)
◗ user identities
◗ hardware resources

[diagram: host system with several applications sharing one filesystem and network]

● disadvantages
◗ laborious (de)installation
◗ compromised application might
access all data
manipulate entire network stack
compromise other applications
◗ application might overload resources
Introduction containers (2)
Container: isolated ecosystem for application
● image with own mini-filesystem containing
◗ programs
◗ configuration files
◗ data files
◗ ...
● own additional storage
● own network
● own PID numbers
● own user identities
● cpu/memory/disk limitations

[diagram: host system running several applications, each isolated in a container with its own image, filesystem and network]
Introduction containers (3)
Image
● contains mini-filesystem and metadata

● needed to start container on destination host

● developer (‘dev’)
◗ builds image for application
◗ ships image to registry

● operations (‘ops’)
◗ pulls image from registry
◗ uses image to activate container
in any environment: test, acceptance, production
on any operating system: Linux, Windows, macOS
on any platform: physical host, virtual machine, cloud

Workshop “Kubernetes”

Docker summary

Docker summary
Docker container
● run application in isolated environment
◗ own mini-filesystem
◗ own network
◗ own private PID numbers
◗ own mounted filesystems
◗ own users
◗ separate root privileges
◗ cpu/memory limitation

● lightweight
● simple
● large community (lots of images)

More info: www.docker.com
Docker summary – virtual machines versus containers
Virtual machine                      Docker container
● application                        ● application
● libraries                          ● libraries
● full command set                   ● limited command set
● full operating system (kernel)     ● no private kernel
                                     ● container process is native process for host!

[diagram: left – VMs (app, local libraries, guest kernel) on a hypervisor above the host kernel and hardware; right – containers (app, local libraries) running as native processes above the host kernel, managed by the Docker daemon via the docker command]
Docker summary – run from base image
Start container from base image
● example: run command in base image ubuntu from Docker registry

$ docker run ubuntu ps -f            (ubuntu: image; ps -f: overruling command)
UID        PID  PPID  C STIME TTY          TIME CMD
root         1     0  0 08:59 ?        00:00:00 ps -f
$
◗ container terminates when process in container terminates

● example: run command in base image ubuntu interactively

$ docker run -it ubuntu bash         (-it: interactive tty; bash: also default for this image)
root@f8d91cbaca03:/# ps -f
UID        PID  PPID  C STIME TTY          TIME CMD
root         1     0  1 09:12 ?        00:00:00 bash
root        11     1  0 09:12 ?        00:00:00 ps -f
root@f8d91cbaca03:/# exit
$
Docker summary – build custom image
Build custom image
● specify own modifications in file Dockerfile

$ cat Dockerfile
FROM ubuntu
RUN apt-get update && apt-get install -y apache2
COPY index.html /var/www/html/index.html
CMD ["/usr/sbin/apache2ctl", "-D", "FOREGROUND"]

$ cat index.html
<h1> Message from container! </h1>

● build custom image

$ docker build -t atcomp/apachetest .    (-t: tag of new image; .: directory containing Dockerfile and other files needed in image)
Successfully built 5d3b567581df

● list images

$ docker images
REPOSITORY           TAG      IMAGE ID       CREATED             SIZE
atcomp/apachetest    latest   5d3b567581df   About an hour ago   268.1 MB
docker.io/ubuntu     latest   104bec311bcd   3 months ago        128.9 MB
Docker summary – run from custom image
Start container from custom image
● run custom container    (-p: publish port; -d: detached, run in background)

$ docker run -p 8080:80 -d atcomp/apachetest
c8eda1f3a6734304195ab3e47280ee77c719fa5c365ba428bc2037e250754c2e
(container ID: SHA256, often abbreviated to its first 48 bits)

● contact webserver via URL http://localhost:8080 (web browser or curl)

$ curl http://localhost:8080
<h1> Message from container! </h1>

● show running containers

$ docker ps
CONTAINER ID   IMAGE               COMMAND                  ....   PORTS
c8eda1f3a673   atcomp/apachetest   "/usr/sbin/apache2ctl"          0.0.0.0:8080->80/tcp

● terminate running container

$ docker stop c8eda1f3a673
c8eda1f3a673
Docker summary – run container
Run Docker image
● adds new filesystem layer on top of image
◗ read-write
◗ stores all modifications during execution

● starts initial application in container on target infrastructure

● architecture aims at stateless applications
◗ no need to preserve data between sessions
◗ example: static website with Apache

● container
◗ preferably runs one application
◗ might run small script (minutes) or server task (months)

[diagram: docker run adds a read-write container layer for the application on top of the read-only custom image]
Docker summary – registries
Docker registry: image store
● server(s) containing images
◗ multiple versions

● possibilities
◗ Docker Hub (100,000+)
◗ in-company registry

● images can be
◗ pushed
stored with docker push
◗ pulled
explicitly with docker pull
implicitly on initial use with docker run
implicitly on initial use with FROM in Dockerfile

[diagram: custom image atcomp/apachetest pushed to and pulled from a registry]
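● illustrative push to an in-company registry (registry.example.com is a hypothetical name, not from the original slide):

$ docker tag atcomp/apachetest registry.example.com/atcomp/apachetest
$ docker push registry.example.com/atcomp/apachetest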
Workshop “Kubernetes”

Introduction

What is Kubernetes?
Kubernetes (‘helmsman’) a.k.a. K8s
● combines various hosts into cluster
◗ scalability
◗ reliability (failover)

● orchestrates containers
◗ activate
◗ monitor
◗ terminate

● can manage various container implementations,
like Docker, CRI-O, Rocket, ....
Kubernetes orchestration
Kubernetes orchestration
● introduced in 2014, inspired by Google's Borg

● maintained by
Cloud Native Computing Foundation (CNCF)

● open source

● concept
◗ cluster needs at least one head node,
formerly called master node
◗ other hosts in cluster are worker nodes
only run container instances
formerly called minions

[diagram: hosts A, B and C joined into a cluster with one head node]
User interface
User interfaces
● command line interface: kubectl subcommand object [options]
◗ subcommands
create, modify and delete objects: create, apply, delete
query current state of objects: get, describe
other actions, like: scale, logs, exec, .....
◗ overview of subcommands: kubectl
more info about subcommand: kubectl subcommand --help

● graphical user interface
◗ also valid for cloud implementations, like
Google Kubernetes Engine (GKE)
Amazon Elastic Kubernetes Service (EKS)
Azure Kubernetes Service (AKS)
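● illustrative command lines (pod and deployment names taken from later slides):

$ kubectl get pods -o wide
$ kubectl describe pod apatest
$ kubectl logs apatest
$ kubectl scale --replicas=3 deployment/apadep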
Kubernetes architecture
[diagram: control plane on the head node – API server, scheduler, controller manager and etcd – reached by kubectl clients via the RESTful API; each worker node runs kubelet, kube-proxy and a container runtime]

● API server: controls entire cluster
● scheduler: schedules containers (via pods) on nodes
● controller manager: controls required number of replicas
● etcd: distributed key-value store to maintain current cluster state
● kubelet:
◗ pod startup & monitoring
◗ resource management
● kube-proxy: container exposure to network & load balancing
Pods
Pod
● conceptually smallest unit, running one application

● consists of one or more closely related containers
◗ typically one container running primary application
◗ additional containers supporting primary application (if needed)

● containers in same pod
◗ run on same worker node
◗ share storage volumes
allows persistent data
allows containers to access same storage
◗ share network
same IP address on pod network, same ports
allows containers to communicate via localhost

[diagram: hosts A, B and C, each running a pod (10.0.0.3, 10.0.0.4, 10.0.0.5) on the pod network]
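● a minimal sketch of a pod with two closely related containers (names are illustrative, not from the original slide):

apiVersion: v1
kind: Pod
metadata:
  name: twocont
spec:
  containers:
  - name: web
    image: atcomp/apachetest      # primary application, listens on port 80
  - name: helper
    image: alpine                 # supporting container; same pod network,
    command: ["sleep", "3600"]    # so it can reach the webserver via localhost:80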
Pods – startup, status and termination
Pod lifetime
● activate pod specifying command line parameters

$ kubectl run apatest --restart=Never --image=atcomp/apachetest --port=80
pod/apatest created

or
● activate pod via manifest file apa.yml

$ kubectl create -f apa.yml
pod/apatest created

apa.yml:
apiVersion: v1
kind: Pod
metadata:
  name: apatest
spec:
  containers:
  - name: apacont
    image: atcomp/apachetest
    ports:
    - containerPort: 80
  restartPolicy: Never

● verify pod status and delete pod

$ kubectl get pods
NAME      READY   STATUS    RESTARTS   AGE
apatest   1/1     Running   0          82s

$ kubectl get pods -o wide
NAME      READY   STATUS    RESTARTS   AGE   IP             NODE ....
apatest   1/1     Running   0          83s   10.244.1.254   hostb

$ kubectl delete pod/apatest
pod "apatest" deleted
Pod completion
After container termination
● pod is ‘completed’ but not removed

$ kubectl run testpod --restart=Never --image=ubuntu -- sleep 10
pod/testpod created

$ kubectl get pods
NAME      READY   STATUS    RESTARTS   AGE
testpod   1/1     Running   0          6s

$ kubectl get pods                 (after a while...)
NAME      READY   STATUS      RESTARTS   AGE
testpod   0/1     Completed   0          17s

◗ logs can still be watched
◗ events can still be verified
(illustrative commands below)

● pods are not ‘self-healing’
◗ controller needed to restart completed pod (covered later)

● interactive pod (-it) with implicit removal (--rm)

$ kubectl run testpod --restart=Never -it --rm --image=ubuntu -- bash
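● illustrative inspection of the completed pod (commands not on the original slide):

$ kubectl logs testpod
$ kubectl describe pod testpod     (events are listed at the end of the output)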
Kubernetes objects (1)
Kubernetes concept
● numerous object types, e.g.
◗ controllers: deployment, replicaset, daemonset, job, cronjob, hpa
◗ networking: service, ingress
◗ storage: configmap, secret, persistentvolumeclaim, persistentvolume, storageclass
◗ others: pod, node, namespace, role, rolebinding

● every object has
◗ type (‘kind’)
◗ unique name

● object refers to other objects by using
◗ labels
(most references to pods)
or
◗ object names
(most references from pods)
Kubernetes objects (2)
Kubernetes object
● has desired state and actual state (current state)
◗ control plane continuously tries to match actual state to desired state
◗ desired state defined by spec in manifest file

● manifest consists of
apiVersion: version of API
kind: type of object
metadata: object name (must be unique), optional labels for selection
spec: specification of desired state, depending on type

apa.yml:
apiVersion: v1
kind: Pod
metadata:
  name: apatest
  labels:
    app: webserver
spec:
  containers:
  - name: apacont
    image: atcomp/apachetest
    ports:
    - containerPort: 80
  restartPolicy: Never
Kubernetes objects (3)
Kubernetes object
● can be created by using
◗ command line arguments
or
◗ manifest file
kubectl create -f manifestfile (create)
kubectl apply -f manifestfile (create/change)

● can be deleted by using


◗ command line arguments
kubectl delete objecttype/objectname
or
◗ manifest file
kubectl delete -f manifestfile

Labels
Object labels
● every object has unique name
● additionally, labels can be assigned, to be used for
◗ selection on command line with -l flag
# kubectl get pods -l app=webserver
# kubectl delete all -l app=webserver
◗ refer from one object to another object
e.g. to assign Service object to Pod object

● view object labels
# kubectl get pods --show-labels
NAME      READY   STATUS    ....   LABELS
apatest   1/1     Running          app=webserver

apa.yml:
apiVersion: v1
kind: Pod
metadata:
  name: apatest
  labels:
    app: webserver
spec:
....

apa-svc.yml:
apiVersion: v1
kind: Service
metadata:
  name: webservice
spec:
  ports:
  - port: 80
  selector:
    app: webserver
Namespaces
Namespaces: subdivide cluster into various virtual clusters
● per project, per application, per developer team, per department, per ....

$ kubectl create ns devel
namespace/devel created

$ kubectl get ns
NAME      STATUS   AGE
default   Active   278d
devel     Active   14s
....

$ kubectl create -f apa.yml -n devel       (create pod in namespace devel)

$ kubectl get pods -n devel                (list pods in namespace devel)
NAME      READY   STATUS    RESTARTS   AGE
apatest   1/1     Running   0          48s

● allow
◗ certain actions on certain objects only – defined per user
◗ separate scope for object names
◗ limitation on resource utilization (cpu, memory, storage, number of pods, ...) – see the sketch below
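● a minimal sketch of such a resource limitation (ResourceQuota; name and values are illustrative, not from the original slide):

apiVersion: v1
kind: ResourceQuota
metadata:
  name: devel-quota
  namespace: devel
spec:
  hard:
    pods: "10"              # at most 10 pods in namespace devel
    requests.cpu: "2"       # total cpu requested by all pods
    requests.memory: 4Gi    # total memory requested by all pods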
Workshop “Kubernetes”

Controllers

Controllers – an introduction
Controllers
● pod ('naked pod', 'bare pod') is not self-healing

● pods usually started under supervision of controller
◗ create, manage, monitor and restart pods
◗ provide rolling updates

● part of controller manager in control plane
[diagram: controller manager highlighted within the control plane]

● manifest file of controller requires
◗ controller-specific definitions
◗ template of pod (to be controlled)

example manifest:
apiVersion: apps/v1
kind: somecontroller
metadata:
  name: sleeper
  ....
spec:
  controller-specific stuff...
  template:
    metadata:
      name: snorepod
    spec:
      containers:
      - name: snorecont
        image: ubuntu
        command: ["sleep", "60"]
Controller types
Controller types
● ReplicaSet (rs) – preferably combine with Deployment
◗ pod replicas and restart of failing pods

● Deployment – recommended
◗ pod replicas by using ReplicaSet
◗ rolling updates and rollbacks

Deployment – create
Deployment object
● definition implies ReplicaSet and Pod
● create deployment

$ kubectl create -f apa-deploy.yml
deployment.apps/apadep created

$ kubectl get deployment
NAME     DESIRED   CURRENT   UP-TO-DATE   AVAILABLE ...
apadep   3         3         3            3

$ kubectl get rs
NAME                DESIRED   CURRENT   READY   AGE
apadep-7b6fc56c77   3         3         3       18s

$ kubectl get pods
NAME                      READY   STATUS    RESTARTS ...
apadep-7b6fc56c77-6ldcv   1/1     Running   0
apadep-7b6fc56c77-bqxhr   1/1     Running   0
apadep-7b6fc56c77-hk7qp   1/1     Running   0

apa-deploy.yml:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: apadep
  labels:
    app: webserver
spec:
  replicas: 3
  selector:
    matchLabels:
      app: apapod
  template:
    metadata:
      labels:
        app: apapod
    spec:
      containers:
      - name: apacont
        image: atcomp/apachetest:1.4
        ports:
        - containerPort: 80

[diagram: Deployment managing a ReplicaSet (replicas: 3), which manages the pods]
Deployment – rolling update
Rolling update
● executed when pod specification changes, like new image

$ kubectl set image deployment/apadep apacont=atcomp/apachetest:1.5
deployment.apps/apadep image updated

$ kubectl get all -o wide
NAME                     READY   UP-TO-DATE   AVAILABLE   CONTAINERS   IMAGES ...
deployment.apps/apadep   3/3     3            3           apacont      atcomp/apachetest:1.5

NAME                           DESIRED   CURRENT   READY   CONTAINERS   IMAGES ...
replicaset/apadep-5b4f756b5c   3         3         3       apacont      atcomp/apachetest:1.5
replicaset/apadep-74f4dc967b   0         0         0       apacont      atcomp/apachetest:1.4

NAME                          READY   STATUS    IP ...
pod/apadep-5b4f756b5c-5kvrx   1/1     Running   10.244.2.213
pod/apadep-5b4f756b5c-rflpn   1/1     Running   10.244.2.212
pod/apadep-5b4f756b5c-z5zbj   1/1     Running   10.244.1.89

● creates new replicaset within deployment
◗ original replicaset preserved
◗ pod replicas replaced one-by-one

[diagram: Deployment moving from apachetest:1.4 to apachetest:1.5 – old ReplicaSet (replicas: 0, image: apachetest:1.4) next to new ReplicaSet (replicas: 3, image: apachetest:1.5)]
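● illustrative rollback commands (not on the original slide):

$ kubectl rollout history deployment/apadep
$ kubectl rollout undo deployment/apadep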
Deployment – scaling
● modify number of replicas

# kubectl scale --replicas=2 deployment/apadep
deployment.extensions/apadep scaled

# kubectl get all -o wide
NAME                          READY   STATUS    RESTARTS   AGE     IP            NODE
pod/apadep-66fcc9665b-7sjvr   1/1     Running   3          4m20s   10.244.2.55   hostc
pod/apadep-66fcc9665b-pj6sf   1/1     Running   3          4m19s   10.244.1.60   hostb

NAME                     DESIRED   CURRENT   UP-TO-DATE   AVAILABLE   CONTAINERS   IMAGES
deployment.apps/apadep   2         2         2            2           apacont      atcomp/apachetest:1.5

NAME                                DESIRED   CURRENT   READY   CONTAINERS   IMAGES
replicaset.apps/apadep-66fcc9665b   2         2         2       apacont      atcomp/apachetest:1.5
replicaset.apps/apadep-7b6fc56c77   0         0         0       apacont      atcomp/apachetest:1.4

◗ replicas can even be scaled to 0 (temporarily no pods)

◗ autoscaling possible with HPA controller (see the sketch below)
minimum and maximum number of replicas
target CPU time utilization per replica

[diagram: Deployment (image: apachetest:1.5) with its ReplicaSet scaled to replicas: 2]
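● illustrative autoscaling command (values are examples, not from the original slide):

# kubectl autoscale deployment/apadep --min=2 --max=5 --cpu-percent=80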
Workshop “Kubernetes”

Storage

Kubernetes Volumes
Storage
● stateless pod – preferred!
◗ no need to preserve data
◗ container storage (filesystem) on local disk

● stateful pod
◗ requires persistent volume
◗ volume can be mapped on
network-based storage
local disk – restricted use
– pod terminated on hostB might be restarted on hostC
and/or
– pods running on different hosts might share same storage

[diagram: hosts A, B and C with local disks, plus shared network storage and cloud-based storage]
Volumes
● have specific type

● have explicit lifetime
(life span of pod, permanent, ...)

● in pod manifest
spec.volumes: provided volumes of certain type
spec.containers.volumeMounts: mount point in container

template:
apiVersion: v1
kind: Pod
metadata:
  name: ....
spec:
  containers:
  - name: ....
    image: ....
    volumeMounts:
    - name: myvol
      mountPath: /mnt
  volumes:
  - name: myvol
    volumetype
Volumes – shared between containers in pod
Shared volume per pod
● pod may consist of various containers
● every container uses its private filesystem
– when container in pod crashes: data loss
– containers do not share private filesystem

● solution: emptyDir volume
◗ volume within pod for all its containers
◗ initially empty
◗ life span of volume is life span of pod
◗ example:

$ kubectl create -f apadyn-pod.yml

$ kubectl get pod/apadyn -o wide
NAME     READY   STATUS    ...   IP             NODE
apadyn   1/1     Running         10.244.2.239   hostc

$ curl 10.244.2.239
Apache started at Wed Aug 14 09:45:30 UTC ...

apadyn-pod.yml:
apiVersion: v1
kind: Pod
metadata:
  name: apadyn
  labels:
    app: webserver
spec:
  initContainers:
  - name: apaprep
    image: alpine
    command:
    - sh
    - -c
    - echo Apache started at $(date) > /mnt/index.html
    volumeMounts:
    - name: sharedhtml
      mountPath: /mnt
  containers:
  - name: apacont
    image: atcomp/apachetest
    ports:
    - containerPort: 80
    volumeMounts:
    - name: sharedhtml
      mountPath: /var/www/html
  volumes:
  - name: sharedhtml
    emptyDir: {}
Volumes – shared between pods on same node
Persistent shared volume on node
● persistent storage in filesystem of host
◗ notice: only host on which pod is created!
◗ read/write access

● solution: hostPath volume

● example: run pod on specific host

$ kubectl get nodes --show-labels
NAME    STATUS   ROLES    ...   LABELS
hosta   Ready    master         kubernetes.io/hostname=hosta
hostb   Ready    <none>         kubernetes.io/hostname=hostb
hostc   Ready    <none>         kubernetes.io/hostname=hostc

$ kubectl create -f hostpath-pod.yml

$ kubectl get pod/apahost -o wide
NAME      READY   STATUS    RESTARTS   AGE   IP             NODE
apahost   1/1     Running   0          19m   10.244.2.240   hostc

$ curl 10.244.2.240
<h1> Welcome to hostc! </h1>

hostpath-pod.yml:
apiVersion: v1
kind: Pod
metadata:
  name: apahost
  labels:
    app: webserver
spec:
  nodeSelector:
    kubernetes.io/hostname: hostc
  containers:
  - name: apatest
    image: atcomp/apachetest
    ports:
    - containerPort: 80
    volumeMounts:
    - name: hosthtml
      mountPath: /var/www/html
  volumes:
  - name: hosthtml
    hostPath:
      path: /var/www/html
      type: Directory
Volumes – shared between nodes
External storage
● can be
◗ network-based (e.g. NFS)
◗ cloud-based (e.g. Google, Amazon or Azure)

● example: run pod on GCE Persistent Disk
◗ create Persistent Disk

# gcloud compute disks create \
      --size=100GB --zone=europe-west4-a my-pd

◗ create pod mounting PD

$ kubectl create -f cloud-pod.yml

● concept of Persistent Volumes is preferred!

cloud-pod.yml:
apiVersion: v1
kind: Pod
metadata:
  name: datapod
spec:
  containers:
  - image: alpine
    name: mycont
    volumeMounts:
    - name: allmine
      mountPath: /mydisk
  volumes:
  - name: allmine
    gcePersistentDisk:
      pdName: my-pd
      fsType: ext4

[diagram: pod on a GCE VM mounting the persistent disk]
Persistent Volumes – static provisioning (1)
Persistent volumes – static allocation
● managed by
◗ PersistentVolume (PV)
piece of storage provisioned by administrator
example types: NFS, iSCSI
◗ PersistentVolumeClaim (PVC)
request for storage by user
to be mounted in pod
specific properties can be defined, like
size, access mode, performance, ....

● binding of PVC to PV
◗ 1-to-1 mapping
◗ PVC state 'Pending' if no suitable PV available

[diagram: pod volume bound via PersistentVolumeClaim to a PersistentVolume backed by NFS, iSCSI or cloud storage]
Persistent Volumes – static provisioning (2)
Example persistent volume
● create PV of 1GiB based on NFS
● request PVC of 500MiB

$ kubectl create -f pub-pv.yml
persistentvolume/pub-pv created

$ kubectl get pv
NAME     CAPACITY   ACCESS ...   STATUS      CLAIM
pub-pv   1Gi        RWX          Available

$ kubectl create -f pub-pvc.yml
persistentvolumeclaim/pub-pvc created

$ kubectl get pvc
NAME      STATUS   VOLUME   CAPACITY   ACCESS...
pub-pvc   Bound    pub-pv   1Gi        RWX

$ kubectl get pv
NAME     CAPACITY   ACCESS...   STATUS   CLAIM
pub-pv   1Gi        RWX         Bound    default/pub-pvc

pub-pv.yml:
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pub-pv
spec:
  storageClassName: nfspool
  accessModes:
  - ReadWriteMany
  capacity:
    storage: 1Gi
  nfs:
    server: nasi
    path: /nfs/Public
    readOnly: false

pub-pvc.yml:
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: pub-pvc
spec:
  storageClassName: nfspool
  accessModes:
  - ReadWriteMany
  resources:
    requests:
      storage: 500Mi
Persistent Volumes – static provisioning (3)
Example persistent volume – cont'd
● create deployment with pod using PVC as volume
● pod pending as long as PVC pending

$ kubectl create -f pub-deploy.yml
deployment/pubdep created

$ kubectl get pod
NAME                      READY   STATUS ...
pubdep-5874f6fbd6-qvndg   1/1     Running

$ kubectl exec -it pubdep-..-qvndg bash
root@pubdep-5874f6fbd6-qvndg:/# ls -l /public
....
drwxrwxrwx+ ... 4096 Jun 22  2018 Documents
drwxrwxrwx+ ... 4096 Feb 11  2016 Music
drwxrwxrwx+ ... 4096 Dec 17  2016 Photos

pub-deploy.yml:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: pubdep
  labels:
    app: pubdep
spec:
  replicas: 1
  selector:
    matchLabels:
      app: pubpod
  template:
    metadata:
      labels:
        app: pubpod
    spec:
      containers:
      - name: sleepcont
        image: ubuntu
        command: ["sleep", "3600"]
        volumeMounts:
        - name: pubstore
          mountPath: /public
      volumes:
      - name: pubstore
        persistentVolumeClaim:
          claimName: pub-pvc
Persistent Volumes – dynamic provisioning
Persistent volumes – dynamic provisioning
● managed by StorageClass (SC)
◗ dynamically allocates storage when claim (PVC) issued
◗ uses provisioner (volume plugin) for allocation
builtin provisioners (kubernetes.io):
gce-pd (GCEPersistentDisk)
aws-ebs (AWSElasticBlockStore)
azure-disk (AzureDisk)
azure-file (AzureFile)
glusterfs (Glusterfs)
and many others....
◗ dynamically creates PV object
◗ volume initially empty

[diagram: pod volume bound via PersistentVolumeClaim to a PV created on demand by a StorageClass]
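● a minimal sketch of dynamic provisioning (object names are illustrative; provisioner taken from the list above):

fast-sc.yml:
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: fast-sc
provisioner: kubernetes.io/gce-pd

dyn-pvc.yml:
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: dyn-pvc
spec:
  storageClassName: fast-sc    # triggers the StorageClass to create a PV
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi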
Workshop “Kubernetes”

Networking

Kubernetes networking
Communication possibilities
● container-to-container in same pod
◗ via localhost (loopback interface)
◗ port numbers may conflict between applications

● pod-to-external
◗ via Network Address Translation (NAT)

● external-to-pod
◗ what is IP address of destination pod?

● pod-to-pod
◗ what is IP address of destination pod?

[diagram: pod with two containers on the pod network; NAT between pod network and external network]
Services – IP address of pod
Example: access via dynamic IP address
● create deployment with 2 pod replicas

$ kubectl create -f apa-deploy.yml
deployment.apps/webserver created

$ kubectl get pods -o wide
NAME                         READY   ...   IP             NODE
webserver-8654d77c94-928tw   1/1           10.244.1.130   hostb
webserver-8654d77c94-ggv8c   1/1           10.244.2.64    hostc

$ curl 10.244.1.130
<H1> Welcome from container! </H1>

● disadvantages
◗ no load balancing
◗ when pod terminates, after restart it probably gets other IP address

$ kubectl delete pod/webserver-....-928tw
$ kubectl get pods -o wide
NAME                         READY   IP             NODE
webserver-8654d77c94-7br4b   1/1     10.244.1.131   hostb
webserver-8654d77c94-ggv8c   1/1     10.244.2.64    hostc

apa-deploy.yml:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: webserver
  labels:
    app: apa
spec:
  replicas: 2
  selector:
    matchLabels:
      app: apa    # not shown on the original slide; required in apps/v1
  template:
    metadata:
      labels:
        app: apa
    spec:
      containers:
      - name: apa
        image: atcomp/apachetest
        ports:
        - containerPort: 80
Services – internal access
Service object
● gets static virtual IP (VIP) address,
though dynamically assigned
◗ stable IP address to reach mortal pods
◗ load balancing

● contains selector
◗ refers to label of pod to attach to
◗ endpoint object created per target pod

● accessibility determined by type
◗ ClusterIP: routable within cluster (default)
for internal access
◗ NodePort: static port on every node in cluster
for external access

[diagram: service 10.97.15.1 (type ClusterIP) selecting pods 10.244.2.64, 10.244.1.130 and 10.244.1.131 via its selector; traffic arriving on port is forwarded to targetport]
Services – internal: setup ClusterIP
Example: add internal service for webserver
● create service referring to label app=apa
● type ClusterIP to provide internal access

$ kubectl create -f apa-svc.yml
service/webserv created

$ kubectl get svc
NAME      TYPE        CLUSTER-IP   ....   PORT(S) ....
webserv   ClusterIP   10.97.15.1          80/TCP

$ kubectl get ep
NAME      ENDPOINTS                        AGE
webserv   10.244.1.131:80,10.244.2.64:80   67m

$ kubectl get all -l app=apa
NAME                        DESIRED   CURRENT   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/webserver   2         2         2            2           10m

NAME                                   DESIRED   CURRENT   READY   AGE
replicaset.apps/webserver-8654d77c94   2         2         2       10m

NAME                             READY   STATUS    RESTARTS   AGE
pod/webserver-8654d77c94-7br4b   1/1     Running   0          10m
pod/webserver-8654d77c94-ggv8c   1/1     Running   0          10m

apa-deploy.yml:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: webserver
  labels:
    app: apa
spec:
  replicas: 2
  selector:
    ....
  template:
    metadata:
      labels:
        app: apa
    spec:
      ....

apa-svc.yml:
apiVersion: v1
kind: Service
metadata:
  name: webserv
  labels:
    app: apa
spec:
  type: ClusterIP
  ports:
  - port: 80
    targetPort: 80
    protocol: TCP
  selector:
    app: apa
Services – internal: discovery
Service discovery by other pods
● via internal DNS

● maintains record for every service:
service[.ns.svc.cluster.local]

● preferred (always available)!

$ kubectl exec -it mypod bash
root@mypod:/# cat /etc/resolv.conf
nameserver 10.96.0.10
search default.svc.cluster.local svc.cluster.local cluster.local

root@mypod:/# host webserv
webserv.default.svc.cluster.local has address 10.97.15.1

root@mypod:/# curl webserv
<H1> Welcome from container! </H1>
Services – external access
External access to pod
● requires service type NodePort
(implies service type ClusterIP)

● port range 30000-32767
◗ specify with keyword nodePort
◗ dynamically assigned when keyword nodePort omitted

[diagram: external client connects to the nodeport (type NodePort) on a node; traffic is forwarded via the service 10.97.15.1 (type ClusterIP, port, selector) to targetport on pods 10.244.2.64 and 10.244.1.131]
Services – external: setup NodePort
Example: add external service for webserver
● create service

$ kubectl create -f apa-svcn.yml
service/webserv created

$ kubectl get svc
NAME      TYPE       CLUSTER-IP   ....   PORT(S) ....
webserv   NodePort   10.97.15.1          80:32123/TCP

$ netstat -tl
Proto Recv-Q Send-Q Local Address   Foreign Address   State
tcp6       0      0 [::]:32123      [::]:*            LISTEN
....

● access from outside cluster

anyhost$ curl hosta:32123          (or hostb or hostc)
<H1> Welcome from container! </H1>

● access from inside cluster (similar to ClusterIP)

$ kubectl run mypod -it --restart=Never --image=alpine -- sh
root@mypod:/# curl webserv
<H1> Welcome from container! </H1>

apa-deploy.yml: unchanged (see earlier slide)

apa-svcn.yml:
apiVersion: v1
kind: Service
metadata:
  name: webserv
  labels:
    app: apa
spec:
  type: NodePort
  ports:
  - port: 80
    nodePort: 32123
    protocol: TCP
  selector:
    app: apa
Workshop “Kubernetes”

Workshop is an extraction from the three-day course
"Kubernetes Fundamentals"

Questions?