
Container Engine

Hussein Galal
• Senior Software Engineer / Rancher Labs
• Remote worker
• @galal-hussein on github
Day 1
Content
● Why Containers?
● Containers Under the hood
○ Process Groups
○ Sessions
○ Cgroups
○ Namespaces
● Getting started with Docker
● LAB #1
Why Containers?
- Let’s install WordPress →
https://www.digitalocean.com/community/tutorials/how-to-install-wordpress-with-lamp-on-ubuntu-18-04

- Alternative:

$ docker run --name some-wordpress -p 8080:80 -d wordpress


Why Containers?
- Consistent Environment

- Run Anywhere

- Isolation

- Low overhead

- Small size on disk


Application Transformation
Containers under the hood
- Technologies used for containers:
- Cgroups

- Namespaces

- Union Filesystems

- Container Format
Process Group

A process group is a collection of processes that can be signalled as
one unit; the PGID of the group equals the PID of the group leader
(p1 in a group of p1, p2, p3, p4).

$ cat /dev/random | wc &

$ ps x -o "%c %p %r %y %x " | grep 'cat\|wc'
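The group ids can be inspected from any shell; a minimal sketch using only standard procps options (no Docker required):

```shell
# Every process belongs to a process group, and the PGID of the
# group equals the PID of its leader. Print the pid, pgid and
# session id of the current shell itself.
ps -o pid,pgid,sid,comm -p $$
```

In an interactive shell with job control, both members of a backgrounded pipeline report the same PGID.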
Session

A session is a collection of process groups established for job
control purposes. All the process groups below belong to one session
on ONE TERMINAL; a new session is created with setsid().

$ cat /dev/random | wc & cat /dev/random | sort &
$ ps -j
  PID  PGID   SID TTY      TIME CMD
15726 15726 15726 pts/4 00:00:00 bash
18114 18114 15726 pts/4 00:00:00 cat
18115 18114 15726 pts/4 00:00:00 wc
18116 18116 15726 pts/4 00:00:00 cat
18117 18116 15726 pts/4 00:00:00 sort
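setsid() detaches a process into a fresh session; the setsid(1) utility from util-linux exposes the same call on the command line. A minimal sketch (assumes setsid(1) is installed):

```shell
# A session leader's SID equals its own PID. setsid(1) starts the
# command in a new session, so the child sees sid == pid.
setsid sh -c 'echo "pid=$$ sid=$(ps -o sid= -p $$ | tr -d " ")"'
```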
Cgroups
Control groups (cgroups) is a Linux kernel feature that
limits or allocates the resources of the controlling
host (CPU, memory, disk I/O, etc.) to process groups

READ: Mairin's post

Cgroups
● A cgroup is a collection of processes that are bound to a set of limits or parameters
defined via the cgroup filesystem.
● Cgroups are governed by subsystems
● Each subsystem is responsible for governing a certain resource, e.g.:
○ blkio
○ cpuset
○ devices
○ memory
● Cgroups are grouped into hierarchies
● Hierarchies can have one or more subsystems attached to them
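The cgroup filesystem mentioned above can be explored directly; a minimal sketch (Linux only, valid for both cgroup v1 and the unified v2 hierarchy):

```shell
# Each line of /proc/self/cgroup names a hierarchy (on v1, with its
# attached subsystems) and this process's path inside it.
cat /proc/self/cgroup

# The hierarchies themselves are mounted under /sys/fs/cgroup.
ls /sys/fs/cgroup
```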
Namespaces
Namespace isolation is a Linux kernel feature that
isolates the operating system view for each group of
processes, so one group can't see the resources
allocated to other groups.

fork() vs clone(): clone() can place the child process
directly into new namespaces.

The isolation namespaces include the PID, IPC, mount,
UTS, and network namespaces.
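The namespaces a process belongs to are visible as symlinks under /proc/<pid>/ns; a minimal sketch:

```shell
# One symlink per namespace type (pid, ipc, mnt, uts, net, ...).
# Two processes in the same namespace see the same link target.
ls -l /proc/self/ns
readlink /proc/self/ns/uts
```

The readlink target has the form uts:[<inode>]; matching inodes mean matching namespaces.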
Namespaces

Demo..

UTS NS
OCI
● OCI is an industry-standard set of specifications for creating containers in the cloud:

○ Runtime-spec
○ Image-spec

● Tools have been developed to implement these industry-standard specs

● Each container engine runs containers and builds images according to the OCI
standards
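To make the runtime-spec concrete: the core of an OCI bundle is a config.json consumed by an OCI runtime such as runc. A trimmed, hypothetical sketch (field values are illustrative, not a complete spec-valid file):

```json
{
  "ociVersion": "1.0.2",
  "root":    { "path": "rootfs" },
  "process": { "args": ["sh"], "cwd": "/" },
  "hostname": "oci-demo"
}
```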
Container Engines
● A container engine is a piece of software that accepts user requests, including
command line options, pulls images, and from the end user’s perspective runs the
container

● Major Players:
○ Docker
○ LXD
○ RKT
○ CRI-O

● Going one layer deeper, most container engines don’t actually run the containers,
they rely on an OCI compliant runtime like runc
Enter Docker

Solomon Hykes
- Founder and CTO of Docker
Docker
● An open source project to pack, ship and run any application as a
lightweight container

● Docker Engine is a client-server application with these major
components:
○ A server which is a type of long-running program called a daemon process (the dockerd command).
○ A REST API which specifies interfaces that programs can use to talk to the daemon and instruct it
what to do.
○ A command line interface (CLI) client (the docker command).
Docker
Docker - Images
● An image is a read-only template with instructions for creating a
Docker container.

● You might create your own images or you might only use those created
by others and published in a registry.

● To build your own image, you create a Dockerfile with a simple syntax
for defining the steps needed to create the image and run it.
Docker - Images

(Diagram: read-only image layers plus a writable container layer,
shared via copy-on-write (COW); layers are stored under
/var/lib/docker)
Docker - Images

Sharing Images
Docker - Containers

Hello World Container...

(Diagram: the Docker client asks the Docker daemon to load the
hello-world image and to create and run it as a container)
Docker - Containers
● A container is a runnable instance of an image

● You can create, start, stop, move, or delete a container using the Docker
API or CLI

● You can connect a container to one or more networks, attach storage
to it, or even create a new image based on its current state
Docker - Containers

Hello World Container...

- Checked to see if you had the hello-world software image

- Downloaded the image from the Docker Hub

- Loaded the image into the container and “ran” it


$ docker run docker/whalesay cowsay ITI_SYSADMIN
Unable to find image 'docker/whalesay:latest' locally
latest: Pulling from docker/whalesay
e190868d63f8: Pull complete   <- image layer 1
909cd34c6fd7: Pull complete   <- image layer 2
0b9bfabab7c1: Pull complete
a3ed95caeb02: Pull complete   <- image layer 3
00bf65475aba: Pull complete
...
c57b6bcc83e3: Pull complete
8978f6879e2f: Pull complete
8eed3712d2cf: Pull complete   <- image layer 8
Digest: sha256:178598e51a26abbc958b8a2e48825c90bc22e641de3d31e18aaf55f3258ba93b
Status: Downloaded newer image for docker/whalesay:latest
 ______________
< ITI_SYSADMIN >
 --------------
    \
     \
      \
                    ##        .
              ## ## ##       ==
           ## ## ## ##      ===
       /""""""""""""""""___/ ===
  ~~~ {~~ ~~~~ ~~~ ~~~~ ~~ ~ /  ===- ~~~
       \______ o          __/
        \    \        __/
          \____\______/
LAB #1
● Download and install docker on your machine
● Pull nginx:alpine image on your machine
● Explore the “docker run” command and run the nginx:alpine image on your machine in the
background, setting the container name to name-ITI-<NO>
● After creating the container make sure to get the path of the cgroups created for this
container
Day 2
Docker - Images

Build your first image

Dockerfile
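A hypothetical Dockerfile for such a first image (the base image and command are illustrative; they match the myfirstimage behaviour shown in the run examples):

```dockerfile
# Build with: docker build -t myfirstimage .
FROM ubuntu:18.04
CMD ["echo", "hello"]
```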
Docker - Images

Run your first image


$ docker run myfirstimage
hello

$ docker run myfirstimage echo world
world

$ docker run -it myfirstimage bash
root@fd46782a3233:/#
Docker - Images

More Dockerfile commands...


FROM EXPOSE ENV ADD COPY

RUN ENTRYPOINT ARG VOLUME USER

CMD WORKDIR ONBUILD STOPSIGNAL HEALTHCHECK

LABEL SHELL
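A hedged sketch combining several of these instructions in one Dockerfile (all names, paths and ports are illustrative assumptions):

```dockerfile
FROM alpine:latest
LABEL maintainer="student@example.com"
ARG APP_DIR=/app
ENV APP_ENV=production
WORKDIR ${APP_DIR}
COPY ./app ${APP_DIR}
RUN apk add --no-cache curl
EXPOSE 8080
VOLUME ["/data"]
HEALTHCHECK CMD curl -f http://localhost:8080/ || exit 1
ENTRYPOINT ["/app/start.sh"]
CMD ["--port", "8080"]
```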
Docker - Images

Build More Complex Image


Docker - Images

Docker HUB
Docker Hub is a cloud-based registry service which allows you to link to code
repositories, build and test your images, store manually pushed images, and
link to Docker Cloud so you can deploy images to your hosts.
Docker - Images

Docker HUB
- Create Account
- Build your Image
- Push Image
Docker - Networking
● Docker’s networking subsystem is pluggable, using drivers. Several
drivers exist by default, and provide core networking functionality:

○ bridge: The default network driver. If you don’t specify a driver, this is the type of network you are creating. Bridge
networks are usually used when your applications run in standalone containers that need to communicate.

○ host: For standalone containers, remove network isolation between the container and the Docker host, and use the
host’s networking directly.

○ overlay: Overlay networks connect multiple Docker daemons together and enable swarm services to communicate
with each other.

○ none: For this container, disable all networking. Usually used in conjunction with a custom network driver.
Docker - Networking
● Users can define their own networks

● Containers can be attached to any network after that

● You can define different subnets for these networks

$ docker network ls

$ docker run --network=<network-name> <image>

$ docker network inspect bridge

$ docker network create --driver bridge custom-net


Docker - Networking
● By default, when you create a container, it does not publish any of its ports to the
outside world.

● To make a port available to services outside of Docker, or to Docker containers which
are not connected to the container’s network, use the --publish or -p flag.

● This creates a firewall rule which maps a container port to a port on the Docker host.

-p 8080:80
-p 192.168.1.100:8080:80
-p 8080:80/udp
-p 8080:80/tcp -p 8080:80/udp
Docker - Volumes
● Docker has two options for containers to store files in the host machine: volumes and
bind mounts

● Volumes are stored in a part of the host filesystem which is managed by Docker
(/var/lib/docker/volumes/ on Linux). Non-Docker processes should not modify this
part of the filesystem. Volumes are the best way to persist data in Docker.

● Bind mounts may be stored anywhere on the host system. They may even be
important system files or directories.

● tmpfs mounts are stored in the host system’s memory only, and are never written to
the host system’s filesystem.
Docker - Volumes
Docker - Volumes
$ docker volume create my-vol

$ docker volume ls

$ docker volume inspect my-vol

$ docker run -d --name devtest -v my-vol:/app nginx:latest


LAB #2
PART - 1

● Build Nginx image with the name “ITI-nginx-lab2”


● The Image should be based on “alpine:latest”
● Make sure that the image runs successfully on your machine
● Create a Docker hub account
● Push the image to your docker hub

PART - 2

● Create a new network and name it “iti-network”


● The new network should be a bridge driver and uses a subnet 10.0.0.0/8
● Run the image that you created in Part - 1, and the container should:
○ Have the name “nginx-alpine-iti”
○ Publish the port 80 from within the container to port 8080
○ The index page should have the text in <h1>Lab 2 ITI - (your name)</h1>
● You should use volumes for the index page
Day 3
Docker - Storage Drivers
● Storage drivers allow you to create data in the writable layer of your container.

● The files won’t be persisted after the container is deleted, and both read and write speeds
are lower than native file system performance.

● Copy-on-write is a strategy of sharing and copying files for maximum efficiency: if a file or
directory exists in a lower layer within the image, and a container needs to modify it, the file is
first copied up into the container’s writable layer.

● COW is different for each storage driver


Docker - Storage Drivers
● For AUFS, Overlayfs, Overlay2:

○ Search through the image layers for the file to update. The process starts at the
newest layer and works down to the base layer one layer at a time. When results are
found, they are added to a cache to speed future operations.

○ Perform a copy_up operation on the first copy of the file that is found, to copy the file
to the container’s writable layer.

○ Any modifications are made to this copy of the file, and the container cannot see the
read-only copy of the file that exists in the lower layer.
Docker - Compose
● Compose is a tool for defining and running multi-container Docker applications

● With Compose, you use a YAML file to configure your application’s services. Then, with a
single command, you create and start all the services from your configuration

● Example for docker-compose.yml
version: '3'
services:
  web:
    build: .
    ports:
      - "5000:5000"
    volumes:
      - .:/code
      - logvolume01:/var/log
    links:
      - redis
  redis:
    image: redis
volumes:
  logvolume01: {}
Docker - Compose
● YAML Reference Version (3):
○ build
○ command
○ configs
○ container_name
○ depends_on
○ entrypoint
○ env_file
○ environment
○ ports
○ image
○ volumes
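A hedged docker-compose.yml sketch exercising several of these keys (service names, images and values are illustrative):

```yaml
version: '3'
services:
  web:
    image: nginx:alpine
    container_name: web-demo
    depends_on:
      - cache
    environment:
      - APP_ENV=production
    ports:
      - "8080:80"
    volumes:
      - ./html:/usr/share/nginx/html:ro
  cache:
    image: redis
    command: ["redis-server", "--appendonly", "yes"]
```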
Docker - Registries
● The Registry is a stateless, highly scalable server side application that stores and lets you
distribute Docker images.

● The Registry is compatible with Docker engine version 1.6.0 or higher.

● The simplest way to achieve access restriction is through basic authentication (this is very
similar to other web servers’ basic authentication mechanism).
Containerd
● containerd is an industry-standard core container runtime with an emphasis on simplicity,
robustness and portability.

● containerd can manage the complete container lifecycle of its host system: image transfer
and storage, container execution and supervision, low-level storage and network
attachments.

● Consists of the following components:
○ ctr (the containerd CLI)
○ A daemon exposing a gRPC API over a local UNIX socket
○ Protobuf specs between components

● To understand containerd you need to know about OCI:


○ Image spec
○ Runtime spec (runc)
Containerd
● containerd runs the core container runtime:
○ Container execution and supervision
○ Image distribution
○ Network interfaces
○ Local storage
