By Nirmallya Mukherjee
Classical Scaling model
Previous generations of application architectures required
larger and larger servers to handle capacity needs.
New Scaling model = Elasticity
[Diagram: Load Balancer in front of Tier 1 Web Servers and Tier 2 App Servers, backed by Database/Datastore Servers, with growth servers added at each tier]
Cost economics - Classical model
[Chart: pre-planned capacity (stepped line) vs actual utilization, plotted against time/traffic, with regions A-D marking the gaps between the two]
What's happening in areas A, B, C & D? What about OPEX?
Cost economics - Cloud model
[Chart: capacity closely tracking utilization, plotted against time/traffic]
Container vs Virtual machines
Faceoff - Container VM vs Metal VM
Docker - The concepts
Benefits
• Separation of concerns
– Developers build
– Admins focus on deployment
• Rapid deployment cycles
• Portability & predictability
– Build once and deploy anywhere
– No need to follow pages of application installation documents
– No more "Hey, it works on my machine, not sure about yours!"
• Scale
– Spin many containers proportional to application load demand
• Run more applications on the host, effectively using its resources
Containers
Docker workflow & ecosystem
Docker registry (and Images)
• Hosts public official repositories + other 3rd parties
• "Distribution" component of Docker
• A repo is also described as a collection of images
– A Tomcat repo will have multiple images perhaps for different versions of Tomcat
• A 3rd-party private repo can be created as well; the download link will then have the URL to your repo (just like GitHub)
• Any public 3rd-party repo is prefixed by the user name: <3rdparty>/<repo name>
• $ docker search <image name>
• https://hub.docker.com/explore
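The registry can be explored from the command line as well; a quick sketch (requires a Docker installation; the 3rd-party names are hypothetical):

```shell
# Search Docker Hub for Tomcat images (official + 3rd party)
docker search tomcat

# Pull a specific image (tag) from the official Tomcat repo
docker pull tomcat:8.0

# A public 3rd-party repo is prefixed by the user name
docker pull someuser/mytomcat    # hypothetical <3rdparty>/<repo name>
```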
Docker engine
• AKA docker daemon
• Core component and has 2 parts
– Client (what we interact with)
– Server (the actual daemon)
• Client and server can be on different machines
• Instead of using the command-line client we can consider
using Kitematic, which has a UI (only on Mac & Windows)
• Runs as a process on the Host OS
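Because client and server are separate, the client can be pointed at a daemon on another machine; an illustrative sketch (the address is hypothetical):

```shell
# Talk to the local daemon (the default)
docker version

# Point the client at a remote daemon instead
export DOCKER_HOST=tcp://10.0.0.5:2375
docker info    # now answered by the remote engine
```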
Docker container
• An image is like a "class"
• A container is like an "object"
• More than one object can be instantiated from a given class; similarly, multiple containers can be created from a given image
• A container
– has all it takes to run an application
– can be started/stopped/deleted
– is isolated from other containers
– is the "Run" component of Docker
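The class/object analogy in practice; a sketch assuming the official nginx image (requires Docker):

```shell
# Two containers ("objects") from one image ("class")
docker run -d --name web1 -p 8081:80 nginx
docker run -d --name web2 -p 8082:80 nginx

docker ps                            # both running, isolated from each other
docker stop web1 && docker rm web1   # lifecycle: start/stop/delete
```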
Dockering apps and Images
Dockerfile - an introduction
• FROM <base image, perhaps on Docker Hub>
• MAINTAINER <name, email etc.>
• RUN <any command, like apt-get update/install>
– Each RUN command creates 1 layer
– Commands can be "chained" to include all of them in 1 layer, e.g. below
RUN \
apt-get update && \
DEBIAN_FRONTEND=noninteractive apt-get install -y --force-yes oracle-java7-installer oracle-java7-set-default && \
apt-get clean && \
rm -rf /var/lib/apt/lists/*
• ENV <var name> <value> (e.g. INSTALL_PATH /online_app; the path is from the root of the container)
• WORKDIR <path, e.g. INSTALL_PATH> (every command executed will be in the context of the install path)
• COPY <source on the machine, relative to the build context> <target in the docker image> (each COPY/RUN should be a separate command because the layers are built using aufs)
• VOLUME ["data"] (data in this directory will persist beyond the container's life)
• EXPOSE <any port number, like 8080>
• CMD <server -b 0.0.0.0:port> (the "main" method equivalent)
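Putting the instructions together, a minimal illustrative Dockerfile; the base image, paths, packages and port are all hypothetical choices, not a prescribed setup:

```dockerfile
# Base image from Docker Hub
FROM ubuntu:16.04
MAINTAINER Jane Doe <jane@example.com>

# Chain commands so they produce a single layer
RUN apt-get update && \
    apt-get install -y python3 && \
    apt-get clean && \
    rm -rf /var/lib/apt/lists/*

ENV INSTALL_PATH /online_app
WORKDIR $INSTALL_PATH

# Copy application code from the build context into the image
COPY . $INSTALL_PATH

VOLUME ["/data"]
EXPOSE 8080

# "main" method equivalent: runs when the container starts
CMD ["python3", "-m", "http.server", "8080", "--bind", "0.0.0.0"]
```

`docker build -t online_app .` would turn this file into an image.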
Image layers & optimizations
• Bundle up multiple RUN commands
• Notice that each layer is written in a different intermediate container; internally Docker uses commit
• Bunching more than one command under a single RUN is a best practice
• Best practices + Dockerfile command reference
– https://docs.docker.com/engine/userguide/eng-image/dockerfile_best-practices
• To check the actual and virtual size of the image
– $ docker ps -a -s
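Layer sizes can also be inspected per image; a brief sketch (requires Docker; the image name is an example):

```shell
# Actual and virtual size of each container
docker ps -a -s

# The layers of an image, with the size each one adds
docker history ubuntu:16.04
```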
Docker Hub and CI
Operations & Orchestration
Orchestration
• These components play an important
part
– Compose
– Machine
– Swarm
• Managing containers
– A few can be managed manually
– What if we have 100s or 1000s?
– Google uses containers for everything!
Docker compose
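Compose describes a multi-container application in one YAML file; a minimal sketch (service names, images, ports and volume names are hypothetical):

```yaml
# docker-compose.yml - a web app plus its database
version: '2'
services:
  web:
    build: .          # build the image from the local Dockerfile
    ports:
      - "8080:8080"
    depends_on:
      - db
  db:
    image: postgres:9.6
    volumes:
      - dbdata:/var/lib/postgresql/data
volumes:
  dbdata:
```

`docker-compose up` then starts both services together.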
Docker machine
• https://www.docker.com/products/docker-machine
– To get started with Docker, first you need to set up a Docker Engine
– But that is a problem if you need to do the setup manually on a large number of machines
– Docker Machine automatically sets up Docker on your computer, on cloud providers, and inside your data center
– Docker Machine provisions the hosts, installs Docker Engine on them, and then configures the Docker client to talk to the Docker Engines
Managing containers
• It is easy to manage a handful of containers, but in a data center there can be 1000s of containers!
– Swarm
– Kubernetes
• Swarm
– Clusters many Engines
– Turns individual hosts into a single virtual host (from the client's point of view)
– Schedules containers
– Decides which host in the cluster the container will run on
• https://docs.docker.com/swarm/overview/
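On Docker 1.12+ Swarm is built into the Engine as "swarm mode", which replaces the standalone Swarm setup described in the docs above; a hypothetical sketch (addresses and names are invented):

```shell
# On the chosen manager node
docker swarm init --advertise-addr 192.168.1.10

# On each worker, join with the token printed by `swarm init`
# docker swarm join --token <token> 192.168.1.10:2377

# The manager schedules containers across the cluster
docker service create --name web --replicas 3 -p 8080:80 nginx
docker service ls
```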
Managing containers
• Kubernetes
– Kubernetes (the Greek term for the "helmsman" of a ship) is a lighter-weight version of Google's own internal technology, called Omega
– It is an open-source orchestration system for Docker containers
– It handles scheduling onto nodes in a compute cluster and actively manages workloads to ensure that their state matches the user's declared intentions
– Using the concepts of "labels" and "pods", it groups the containers which make up an application into logical units for easy management and discovery
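To illustrate "labels" and "pods", a minimal pod manifest (all names and images are hypothetical):

```yaml
# pod.yaml - a single pod, grouped and discovered via its labels
apiVersion: v1
kind: Pod
metadata:
  name: online-app-pod
  labels:
    app: online-app    # services and controllers select pods by label
    tier: frontend
spec:
  containers:
    - name: web
      image: nginx:1.11
      ports:
        - containerPort: 80
```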
Setting up Machine
• It provisions Docker hosts and installs the Docker Engine on them
• Hosts can be created anywhere
– Locally (perhaps via VirtualBox)
– On the cloud (AWS, Google etc.)
• Does two things
– Obtains a host and installs Docker on it
– Configures the client to talk to it
• Then we can use Machine to manage all these hosts across different environments (locally, on the cloud etc.) very easily
• Complete host/container management is done by interacting with Machine alone, instead of going to each environment and doing it manually
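The two steps look roughly like this from the command line (requires Docker Machine; the driver and host name are illustrative):

```shell
# Obtain a host (locally via VirtualBox) and install Docker Engine on it
docker-machine create --driver virtualbox dev

# Configure the client to talk to the new host
eval "$(docker-machine env dev)"

# Manage all hosts, across environments, from one place
docker-machine ls
```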
Swarm!
• Turns a group of Docker nodes into a single virtual host
• Instead of interacting with each host, just work with the "Swarm Manager"
• The manager can be on any node - local or remote
• The client, instead of talking to the local daemon, needs to point to the Swarm Manager. Multiple clients can point to the same manager from different hosts
• The Swarm Manager is connected to the docker daemons via the discovery method
– In the diagram below, the "host" is the manager and the discovery backend
– An example is "Hosted discovery" (there are others as well, see docs)
– This installs the agent on the hosts and registers the daemon running on each
– It monitors and sends details to the discovery backend
Docker is changing
• https://github.com/docker/docker/blob/master/CHANGELOG.md
• Some interesting capabilities
– Plugins, the user-defined HEALTHCHECK option, the SHELL instruction
Cloud providers
Docker datacenter
• Docker Datacenter delivers container management and deployment services to the enterprise via a Containers-as-a-Service solution
• Supported by Docker and hosted locally behind the enterprise firewall
• https://store.docker.com/bundles/docker-datacenter/purchase?plan=free-trial
Cloud providers
• https://cloud.google.com/container-registry/docs/#before_you_begin
• http://docs.aws.amazon.com/AmazonECS/latest/developerguide/Welcome.html
• https://azure.microsoft.com/en-in/services/container-service/
About me
That's it!
www.linkedin.com/in/nirmallya
pointer.next@gmail.com