Linux processes
[Diagram: Kubernetes architecture. kubectl and the Dashboard talk to the K8s Master (API Server, scheduler, controllers, etcd); nodes pull images from a Container Registry, driven by a config file.]
[Diagram: label selectors on network 10.1.3.0/24. Two Dashboards select Pods by label: one with selector type = FE, the other with selector type = FE, version = v2; only the Pods carrying the matching labels are picked up.]
Behavior:
● Metadata with semantic meaning
● Membership identifier
● The only Grouping Mechanism
Benefits:
● Allow for intent of many users (e.g. dashboards)
● Build higher level systems …
● Queryable by Selectors
presentation to: Kubermatic
Label Expressions
[Diagram: a Dashboard with selector env notin (prod) matches the Pods labeled env = qa and env = test, but not those labeled env = prod.]
Expressions:
● env = prod
● env in (test, qa)
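As a sketch of how such selectors look in a manifest (field placement assumes an apps/v1 workload selector; names are illustrative):

```yaml
# Sketch: combining equality- and set-based label selectors
selector:
  matchLabels:
    type: FE            # equality: type = FE
  matchExpressions:
  - key: env
    operator: NotIn     # set-based: env notin (prod)
    values: ["prod"]
```

The same expressions work on the command line, e.g. kubectl get pods -l 'env in (test,qa)'.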
[Diagram: a Replication Controller / ReplicaSet with #pods = 3 and selector app = demo, color in (blue, grey) manages three frontend Pods: two labeled color = blue and one labeled color = grey, all labeled app = demo.]
Behavior:
● Keeps Pods running
Benefits:
● Recreates Pods, maintains desired state
[Diagram: a Replication Controller / ReplicaSet with #pods = 4 keeps four frontend Pods running, each labeled version = v1, type = FE.]
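A minimal ReplicaSet matching the diagram's labels could look like this (the name and image are illustrative assumptions):

```yaml
apiVersion: apps/v1
kind: ReplicaSet
metadata:
  name: frontend
spec:
  replicas: 4            # "#pods = 4"
  selector:
    matchLabels:
      type: FE
      version: v1
  template:
    metadata:
      labels:
        type: FE         # template labels must satisfy the selector
        version: v1
    spec:
      containers:
      - name: nginx
        image: nginx
```

If a Pod dies, the ReplicaSet recreates one so that four Pods with these labels always exist.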
[Diagram: a Service, name = frontend, selecting its Pods via a label selector.]
[Diagram: a Service, name = backend, with label selector type = BE, spans Pods from two controllers: a ReplicaSet with #pods = 2 running version = v1 Pods and a Replication Controller with #pods = 1 running a version = v2 Pod, all labeled type = BE; the label selector can also restrict on version.]
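A Service sketch based on this diagram (the port numbers are assumptions, not from the slides):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: backend
spec:
  selector:
    type: BE         # routes to every Pod labeled type = BE,
                     # across both controllers (v1 and v2)
  ports:
  - port: 80
    targetPort: 8080
```

Because the selector only names type = BE, traffic is spread over both the v1 and v2 Pods; adding a version label to the selector would pin it to one of them.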
Reliable Deployments
[Diagram: a rolling update with two ReplicaSets, old frontend-7654321 (version = v1) and new frontend-1234567 (version = v2), both labeled type = BE]
● Create frontend-1234567 (#pods = 0)
● Scale frontend-1234567 up to 1
● Scale frontend-7654321 down to 1
● Scale frontend-1234567 up to 2
● Scale frontend-7654321 down to 0
show: version = v2
Deployment
[Diagram: "kubectl edit deployment ..." changes the Pod template from version: v1 to version: v2 through the API; the Deployment rolls the Pods over while the Service be-svc continues to select type = BE.]
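A Deployment automates the ReplicaSet dance above. A sketch for this example (name, image, and step sizes are assumptions):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: frontend
spec:
  replicas: 2
  selector:
    matchLabels:
      type: BE
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 1   # scale the old ReplicaSet down one Pod at a time
      maxSurge: 1         # scale the new ReplicaSet up one Pod at a time
  template:
    metadata:
      labels:
        type: BE
        version: v2       # bump this to roll from v1 to v2
    spec:
      containers:
      - name: app
        image: nginx
```

Editing the template (e.g. via kubectl edit deployment frontend) makes the Deployment create a new ReplicaSet and shift Pods over, just like the manual create/scale steps on the previous slide.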
[Diagram: horizontal pod autoscaling. Pods labeled name = locust, role = worker run at 70% CPU; Heapster reports usage to the autoscaler, which compares it against the CPU Target% = 50 and scales the ReplicaSet from 2 up to 4 Pods, bringing usage down to about 40% CPU.]
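The 50% CPU target from the diagram can be expressed as a HorizontalPodAutoscaler; a sketch (the target name and replica bounds are assumptions):

```yaml
apiVersion: autoscaling/v1
kind: HorizontalPodAutoscaler
metadata:
  name: locust-workers
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: locust
  minReplicas: 1
  maxReplicas: 10
  targetCPUUtilizationPercentage: 50   # "CPU Target% = 50"
```

When observed CPU usage is above the target (70% in the diagram), the autoscaler adds replicas; when it falls below (40%), it can scale back down.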
Graceful Termination
Goal: Give pods time to clean up
• finish in-flight operations
• log state
• flush to disk
• 30 seconds by default
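The 30-second default can be changed per Pod; a sketch (the preStop hook and its command are illustrative assumptions):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: cleanup-demo
spec:
  terminationGracePeriodSeconds: 60     # default is 30
  containers:
  - name: app
    image: nginx
    lifecycle:
      preStop:
        exec:
          command: ["sh", "-c", "sleep 5"]  # time to drain in-flight work
```

On deletion, the Pod gets SIGTERM, the preStop hook runs, and only after the grace period expires is it killed forcefully.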
Scheduling
apiVersion: v1
kind: Pod
metadata:
  name: poddy
spec:
  nodeName: k8s-minion-xyz
  containers:
  - name: nginx
    image: nginx
    ports:
    - containerPort: 80

With nodeName set explicitly, poddy is bound directly to node k8s-minion-xyz, bypassing the scheduler.
apiVersion: v1
kind: Pod
metadata:
  name: poddy
spec:
  containers:
  - name: nginx
    image: nginx
    ports:
    - containerPort: 80

Without nodeName, the scheduler picks a node for poddy.
Request:
• how much of a resource you are asking to use, with a strong guarantee of availability
• the scheduler will not over-commit requests
Limit:
• max amount of a resource you can access
Conclusion:
• Usage > Request: the extra resources might not be available
• Usage > Limit: throttled (CPU) or killed (memory)
my-controller.yaml
...
spec:
  containers:
  - name: locust
    image: gcr.io/rabbit-skateboard/guestbook:gdg-rtv
    resources:
      requests:
        memory: "300Mi"
        cpu: "100m"
      limits:
        memory: "300Mi"
        cpu: "100m"
● Attached Disks
● Specific Hardware
[Diagram: three K8s Nodes, each advertising Disks, Labels, CPU, and Mem per node (e.g. 2 Core, 2 GB); one is labeled disktype = ssd.]
http://kubernetes.github.io/docs/user-guide/node-selection/
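A nodeSelector sketch targeting the labeled node above (Pod name is an illustrative assumption):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: ssd-pod
spec:
  nodeSelector:
    disktype: ssd    # only schedule onto nodes labeled disktype = ssd
  containers:
  - name: nginx
    image: nginx
```

The label itself is attached with kubectl label nodes <node-name> disktype=ssd.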
Pod Scheduling: Ranking Potential Nodes
Prefer the node with the most free resources left after the pod is deployed.
Admission Control
AC is performed in the API Server, after AuthN (AuthenticatioN) checks.
● An admission plugin can result in a request denial
LimitRanger
Observes the incoming request and ensures that it does not violate any of the constraints enumerated in the LimitRange object in a Namespace.
ServiceAccount
Implements automation for serviceAccounts (RBAC).
ResourceQuota
Observes the incoming request and ensures that it does not violate any of the constraints enumerated in the ResourceQuota object in a Namespace.
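Sketches of the two objects these plugins enforce (names and values are illustrative assumptions):

```yaml
# LimitRange: default and max per-container limits in a Namespace
apiVersion: v1
kind: LimitRange
metadata:
  name: container-limits
spec:
  limits:
  - type: Container
    default:
      cpu: "500m"
      memory: "256Mi"
    max:
      cpu: "1"
      memory: "1Gi"
---
# ResourceQuota: caps aggregate usage across the Namespace
apiVersion: v1
kind: ResourceQuota
metadata:
  name: team-quota
spec:
  hard:
    pods: "10"
    requests.cpu: "4"
    requests.memory: "8Gi"
```

A request that would push the Namespace past these constraints is denied at admission time.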
[Diagram: a Pod mounting a Database volume]
• EmptyDir
• HostPath
• nfs (and similar services)
• Cloud Provider Block Storage
http://vitess.io/
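The simplest of these, emptyDir, gives a Pod scratch space that lives exactly as long as the Pod; a sketch (names are illustrative):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: scratch-demo
spec:
  containers:
  - name: app
    image: nginx
    volumeMounts:
    - name: scratch
      mountPath: /cache
  volumes:
  - name: scratch
    emptyDir: {}      # created on Pod start, deleted with the Pod
```

Swapping emptyDir for hostPath, nfs, or a cloud block-storage volume changes durability without changing the mount.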
ConfigMaps
[Diagram: an App Pod reaches Services foo and bar. Externally, api.company.com (24.7.8.9) routes http://api.company.com/foo and http://api.company.com/bar to the Services at 24.1.2.3 and 24.4.5.6; inside the cluster they resolve to 10.0.0.1 and 10.0.0.3.]
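The section title names ConfigMaps; a minimal sketch of one, consumed as environment variables (the names and keys are illustrative assumptions):

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: app-config
data:
  API_HOST: api.company.com
  LOG_LEVEL: info
---
apiVersion: v1
kind: Pod
metadata:
  name: app
spec:
  containers:
  - name: app
    image: nginx
    envFrom:
    - configMapRef:
        name: app-config   # each key becomes an environment variable
```

The same data could instead be mounted as files via a configMap volume, which keeps configuration out of the image.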
Daemonsets
Ensure that a replica of a given pod runs on every node, or on a subset of nodes.
Examples:
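A classic case is a node-level agent; a minimal DaemonSet sketch (the name and image are illustrative assumptions):

```yaml
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: node-agent
spec:
  selector:
    matchLabels:
      app: node-agent
  template:
    metadata:
      labels:
        app: node-agent
    spec:
      containers:
      - name: agent
        image: fluentd    # one copy runs on every matching node
```

Unlike a ReplicaSet, there is no replica count: the node population determines how many Pods run, and new nodes get one automatically.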
Uses CNI (net Plugin):
• Simple exec interface
• Not using Docker libnetwork, but can defer to Docker for networking
The cluster DNS can do:
• DNS(SEC)
• Caching
• Health Checks
Or plug in your own!
https://kubernetes.io
Code: github.com/kubernetes/kubernetes
Chat: slack.k8s.io
Twitter: @kubernetesio
[Diagram: two Replication Controllers manage the frontend Pods: one with #pods = 2 keeping version = v1 Pods, and one with #pods = 1 keeping a version = v2 Pod.]
Behavior:
● Keeps Pods running
● Gives direct control of Pods
Benefits:
● Recreates Pods, maintains desired state
● Fine-grained control for scaling