
Dr. D. Y. Patil Pratishthan’s
D. Y. Patil Institute of Master of Computer Applications and Management
(Approved by AICTE, New Delhi & Affiliated to Savitribai Phule Pune University)
Dr. D. Y. Patil Educational Complex, Sector 29, Pradhikaran, Akurdi, Pune – 411 044
Tel No: (020)27640998, Website: www.dypimca.ac.in, E-mail : director@dypimca.ac.in
----------------------------------------------------------------------------------------------------------------------

MONOGRAPH

Subject Code: IT-41 DevOps


Unit 4. Docker – Containers

4.1. Introduction
• What is Docker?
• Docker is an open platform for developing, shipping, and running applications.
• Docker enables you to separate your applications from your infrastructure so you can deliver
software quickly.
• With Docker’s methodologies for shipping, testing, and deploying code quickly, you can
significantly reduce the delay between writing code and running it in production.
• Docker is a containerization platform that packages your application and all its dependencies
together in the form of Containers to ensure that your application works seamlessly in any
environment.
Container
• Docker provides the ability to package and run an application in a loosely isolated environment called a
container.
• The isolation and security allow you to run many containers simultaneously on a given host.
• Docker provides tooling and a platform to manage the lifecycle of your containers:
• Develop your application and its supporting components using containers.
• The container becomes the unit for distributing and testing your application.
• When you’re ready, deploy your application into your production environment, as a container
or an orchestrated service. This works the same whether your production environment is a local
data center, a cloud provider, or a hybrid of the two.
• Use cases of Docker
• Fast, consistent delivery of your applications
• Docker streamlines the development lifecycle by allowing developers to work in standardized
environments using local containers which provide your applications and services.
• Containers are great for continuous integration and continuous delivery (CI/CD) workflows.

• Responsive deployment and scaling


• Docker’s container-based platform allows for highly portable workloads.
• Docker containers can run on a developer’s local laptop, on physical or virtual machines in a
data center, on cloud providers, or in a mixture of environments.
• Docker’s portability and lightweight nature also make it easy to dynamically manage
workloads, scaling up or tearing down applications and services as business needs dictate, in
near real time.

• Running more workloads on the same hardware


• Docker is lightweight and fast. It provides a viable, cost-effective alternative to hypervisor-
based virtual machines, so you can use more of your compute capacity to achieve your
business goals.
• Docker is perfect for high density environments and for small and medium deployments
where you need to do more with fewer resources.

• Platforms for Docker


• Docker runs an application inside a container, uses minimum resources, can be deployed
faster, and it can scale quickly.
• It is difficult to manage many docker containers running in multiple clusters in a live
production environment.
• Kubernetes and Docker Swarm are a few solutions for managing large Docker clusters, but
these solutions add a lot of complexity and require specialized skills.
• Docker hosting platforms can manage multiple docker containers.

 Jelastic
o Jelastic is a multi-cloud platform that can host multiple tools/frameworks/applications
such as Docker, Kubernetes, Java, Ruby, Python, JavaScript, Go, etc. It combines
Platform as a Service (PaaS) and Container as a Service (CaaS) model.
o Multi-cloud availability is the most important feature of the Jelastic platform.
 Kamatera
o Kamatera lets you create servers quickly and deploy your cloud infrastructure right away. It
offers unlimited scale-up and scale-out along with a simple management console and an API.
o In addition to Docker hosting, you can add load balancers, private networks,
and firewalls, and run any edition of Linux or Windows.
 A2 Hosting
o It offers blazing-fast SwiftServers to host Docker, giving the best performance
possible.
o It gives you complete access to the environment; you get root access, so you can even
edit server files according to your needs. You can even change the operating system and
start/stop/reboot the system.
 StackPath
o StackPath is known for its CDN and cloud-based security platform.
o Edge computing provides distributed computing; it brings computation and storage
closer to the user’s location, which saves bandwidth and improves response time. The
StackPath platform supports Open Container Initiative (OCI) images.
 Google Cloud Run
o Google Cloud Platform (GCP) is one of the most popular cloud service providers and
has been growing rapidly across several geographies. Kubernetes, a popular container
orchestration tool, was originally developed by Google, so Docker hosting on GCP is
well supported.
o In GCP, Cloud Run is a serverless managed compute platform where you can host
and run Docker containers. It is built on top of the Knative project, which makes
workloads easily portable across different platforms.
 Amazon Elastic Container Service (Amazon ECS)
o Amazon Elastic Container Service (Amazon ECS) is a highly scalable container
service with Docker support. It is used to containerize your applications on AWS.
It provides Windows compatibility and supports the management of Windows
containers.
o It can use the AWS Fargate service to deploy and manage Docker containers. AWS
Fargate takes care of server provisioning, cluster management, and orchestration,
so you do not have to worry about these and can focus on resource management.
• Docker vs. Virtualization
Virtualization
• Virtualization is the technique of running a guest operating system on top of a host
operating system.
• The advantages of Virtual Machines or Virtualization are:
• Multiple operating systems can run on the same machine
• Maintenance and recovery are easy in case of failure
• The total cost of ownership is lower due to the reduced need for infrastructure

Disadvantages of Virtualization:
• Running multiple Virtual Machines leads to unstable performance
• Hypervisors are not as efficient as the host operating system
• Boot up process is long and takes time
Containerization

• Containerization is the technique of bringing virtualization to the operating system level.


• While Virtualization brings abstraction to the hardware, Containerization brings abstraction to
the operating system.
• Containerization is also a type of Virtualization.
• Containerization is, however, more efficient because there is no guest OS: containers use the host’s
operating system and share its relevant libraries and resources as needed, unlike virtual
machines.
• Advantages of Containerization over Virtualization:
• Containers on the same OS kernel are lighter and smaller
• Better resource utilization compared to VMs
• The boot-up process is short and takes only a few seconds

Virtual Machines Vs. Docker

4.2. Architecture
• Docker Architecture
• Docker uses a client-server architecture.
• The Docker client talks to the Docker daemon, which does the heavy lifting of building, running,
and distributing your Docker containers.
• The Docker client and daemon can run on the same system, or you can connect a Docker client
to a remote Docker daemon.
• The Docker client and daemon communicate using a REST API, over UNIX sockets or a network
interface.
• Another Docker client is Docker Compose, which lets you work with applications consisting of a
set of containers.
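As a quick illustration of this client-daemon communication (a minimal sketch, assuming the daemon is listening on its default UNIX socket /var/run/docker.sock and that you have permission to access it), you can query the Engine’s REST API directly with curl:
$ curl --unix-socket /var/run/docker.sock http://localhost/version
$ curl --unix-socket /var/run/docker.sock http://localhost/containers/json
These calls return the same information that docker version and docker ps display, because the Docker CLI is simply another client of this API.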
• Understanding the Docker components
• Docker Container
• Docker Containers are the ready-to-run applications created from Docker Images.
• They are running instances of those images and hold the entire package needed to run the
application.

• Docker Engine
• Docker Engine is simply the application that is installed on the host machine.
• It works like a client-server application which uses:
• A server which is a type of long-running program called a daemon process
• A command line interface (CLI) client
• REST API is used for communication between the CLI client and Docker Daemon
4.3. Installation
Installing Docker on Linux
Step 1) To install Docker, we need to use the Docker team’s DEB packages.
$ sudo apt-get install \
apt-transport-https \
ca-certificates curl \
software-properties-common
*The “\” character is not required; it is only a line-continuation marker, so if you prefer you can write
the command on a single line without it.
Step 2) Add the official Docker GPG key with the fingerprint.
Use the below Docker command to enter the GPG key
$ curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo apt-key add -
Step 3) Next, Add the Docker APT repository.
Use the below Docker command to add the repository
$ sudo add-apt-repository \
"deb [arch=amd64] https://download.docker.com/linux/ubuntu \
$(lsb_release -cs) \
stable"
You may be prompted to confirm that you wish to add the repository and have the GPG key
automatically added to your host.
The lsb_release -cs command fills in the codename of your host’s Ubuntu release.
Step 4) After adding the GPG key,
Update APT sources using the below Docker command
$ sudo apt-get update
We can now install the Docker package itself.
Step 5) Once the APT sources are updated,
Start installing the Docker packages on Ubuntu using the below Docker command
$ sudo apt-get install docker-ce
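To verify that the installation succeeded (a quick check; hello-world is Docker’s standard test image), you can run:
$ sudo docker version
$ sudo docker run hello-world
If everything is installed correctly, the hello-world container prints a short confirmation message and exits.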

Installation of Docker on windows


1. Go to the website https://docs.docker.com/docker-for-windows/install/ and download the Docker Desktop installer.
2. Then, double-click on the Docker Desktop Installer.exe to run the installer.
3. Once you start the installation process, always enable Hyper-V Windows Feature on the
Configuration page.
4. Then, follow the installation process to allow the installer and wait till the process is done.
5. After completion of the installation process, click Close and restart.
Start Docker Desktop Tool
• After the installation process is complete, the tool does not start automatically. To start the
Docker tool, search for the tool, and select Docker Desktop in your desktop search results.
• Before starting the application, Docker offers an onboarding tutorial. The tutorial explains how
to build a Docker image and run a container.
• You are now successfully running Docker Desktop on Windows.
• Next, follow the instruction below to install the Docker engine on your system.
• Go to the Docker CLI and run docker version to verify the version of Docker installed on
the system.

• Docker commands
1. docker --version
This command is used to get the currently installed version of docker
2. docker pull
Usage: docker pull <image name>
This command is used to pull images from the Docker repository (hub.docker.com)
3. docker run
Usage: docker run -it -d <image name>
This command is used to create and start a container from an image
4. docker ps
This command is used to list the running containers
5. docker ps -a
This command is used to show all the running and exited containers
6. docker exec
Usage: docker exec -it <container id> bash
This command is used to access the running container
7. docker stop
Usage: docker stop <container id>
This command stops a running container
8. docker kill
Usage: docker kill <container id>
This command kills the container by stopping its execution immediately. The difference between
‘docker kill’ and ‘docker stop’ is that ‘docker stop’ gives the container time to shut down
gracefully; when a container is taking too long to stop, you can opt to kill it instead
9. docker commit
Usage: docker commit <container id> <username/imagename>
This command creates a new image of an edited container on the local system
10. docker login
This command is used to login to the docker hub repository
11. docker push
Usage: docker push <username/image name>
This command is used to push an image to the docker hub repository
12. docker images
This command lists all the locally stored docker images
13. docker rm
Usage: docker rm <container id>
This command is used to delete a stopped container
14. docker rmi
Usage: docker rmi <image-id>
This command is used to delete an image from local storage
15. docker build
Usage: docker build <path to the build context (the directory containing the Dockerfile)>
This command is used to build an image from a specified Dockerfile
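As an illustration of how these commands fit together, here is a sketch of a typical workflow (the ubuntu image and the myuser username are placeholders; substitute your own image and Docker Hub account):
docker pull ubuntu
docker run -it -d --name demo ubuntu
docker ps
docker exec -it demo bash      # make changes inside the container, then exit
docker stop demo
docker commit demo myuser/ubuntu-custom
docker login
docker push myuser/ubuntu-custom
docker rm demo
docker rmi myuser/ubuntu-custom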

• Provisioning
 Docker-based provisioning involves creating a Microgateway Dockerfile from an
existing installation, building the image, and running it multiple times in a container
environment.
• Microgateway Docker image
o For the Docker-based provisioning the Microgateway CLI provides
the createDockerFile command. The command creates a Docker file that can be
consumed by docker build for creating a Docker image. The Microgateway Docker
image contains an unzipped Microgateway package.
o The command takes the command line options detailed in the Command Line
Reference.
o You can run the created Docker image to spawn a Docker container.
o The Docker images resulting from Docker files created using
the createDockerFile command feature the following:
• Docker logging
o Microgateway Docker containers log to stdout and stderr. The Microgateway logs can
be fetched with the Docker logs command.
• Docker health check
o Microgateway Docker containers perform health checks.
HEALTHCHECK CMD ${MICROGW_DIR}/microgateway.sh status 2>&1 | grep 'Server active'
o The status command checks the Microgateway availability. If the status command
confirms an active Microgateway the container is considered healthy.
• Graceful shutdown
o When the docker stop command is used on a Microgateway container it performs a
graceful shutdown.
• Entrypoint support
o Microgateway Dockerfile exposes an ENTRYPOINT. The options provided to the
createDockerFile command are supplied to the ENTRYPOINT through a CMD
specification.
• JRE support
o The createDockerFile command adds a Microgateway JRE to the Docker file so
that the Microgateway Docker image can be self-contained. If the custom base image
already provides a JRE, the createDockerFile command supports the jre=none option to
reuse the existing JRE and not copy the Microgateway JRE.
o Microgateway provides a musl libc compatible JRE to support Alpine Docker-based
images. The Microgateway installation provides the musl libc compatible JRE in
the microgateway-jre-linux-musl folder. You have to specify the jre=linux-musl option
in the createDockerFile command to copy the musl libc compatible JRE. If there is no
base image specified the musl libc compatible JRE is copied. The available JRE options
are linux, linux-musl, and none. The default value for the jre option depends on the
docker_from value:
• If no docker_from value is specified, then the JRE used is linux-musl, as the default base
image is Alpine.
• If you specify a docker_from value, then the JRE used is linux.

4.4. Docker Hub


• Docker Hub is the cloud-hosted version of Docker Registry, an open-source, scalable, stateless
server-side application.
• Docker Hub is a registry service on the cloud that allows you to download Docker images
that are built by other communities.
• It can manage the sharing and storage of Docker Images.
Docker Hub Features
• Repositories: Let you push and pull container images.
• Teams and Organizations: Let you manage developer/user access to private repositories of
container images.
• Docker Official Images: High-quality container images curated and published by Docker
that you can pull and use.
• Docker Verified Publisher Images: High-quality container images published by verified
outside vendors.
• Builds: Provide the mechanisms that automatically build container images from
Bitbucket and GitHub and push them to Docker Hub.
• Webhooks: Trigger actions after a successful push to a repository, so that Docker Hub
can be combined with additional services.
• Docker Hub CLI: Docker provides a Docker Hub CLI tool, which is presently
experimental, and an API (micro-service) that enable you to communicate with Docker
Hub.
Docker Registry
• Docker Registry is where the Docker Images are stored.
• The Registry can be either a user’s local repository or a public repository like Docker
Hub, allowing multiple users to collaborate in building an application.
• Multiple teams within the same organization can also exchange or share containers
by uploading them to Docker Hub, which is a cloud repository similar to GitHub.
• Pushing (uploading) and pulling (downloading) images are two of the most common
Container Registry tasks.

Downloading Docker images
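For example, to download an image from Docker Hub onto your local machine (the official nginx image is used here only as a placeholder):
docker pull nginx:latest
docker images
The second command confirms that the pulled image is now stored locally.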

Dockerfile:
• A Dockerfile is a text document which contains all the commands that a user can call on
the command line to assemble an image.
• Docker can build images automatically by reading the instructions from a Dockerfile.
• You can use docker build to create an automated build that executes several command-line
instructions in succession.
Docker Image:
• Docker Image can be compared to a template which is used to create Docker Containers.
• Read-only templates are the building blocks of a Container. You can use docker run to run
the image and create a container.
• Docker Images are stored in the Docker Registry. It can be either a user’s local repository
or a public repository like a Docker Hub which allows multiple users to collaborate in
building an application.
Docker Container:
• A Docker Container is a running instance of a Docker Image and holds the entire package needed to run the
application.
• These ready applications created from Docker Images are the ultimate utility of Docker.

• Uploading the images in Docker Registry and AWS ECS
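A sketch of uploading a local image (the image name myapp and the username myuser are placeholders). Images used by Amazon ECS are normally stored in Amazon Elastic Container Registry (ECR); the account ID and region below are placeholders, and the AWS CLI must be installed and configured:
docker tag myapp:latest myuser/myapp:latest
docker login
docker push myuser/myapp:latest
aws ecr get-login-password --region us-east-1 | docker login --username AWS --password-stdin 123456789012.dkr.ecr.us-east-1.amazonaws.com
docker tag myapp:latest 123456789012.dkr.ecr.us-east-1.amazonaws.com/myapp:latest
docker push 123456789012.dkr.ecr.us-east-1.amazonaws.com/myapp:latest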


• Understanding the containers
• Running commands in container
• The docker exec command runs a new command in a running container.
• The command started using docker exec only runs while the container’s primary process
(PID 1) is running, and it is not restarted if the container is restarted.
• COMMAND will run in the default directory of the container. If the underlying image has
a custom directory specified with the WORKDIR directive in its Dockerfile, this will be
used instead.
Options
Name, shorthand Description

--detach , -d Detached mode: run command in the background

--detach-keys Override the key sequence for detaching a container

--env , -e Set environment variables

--env-file Read in a file of environment variables

--interactive , -i Keep STDIN open even if not attached

--privileged Give extended privileges to the command

--tty , -t Allocate a pseudo-TTY

--user , -u Username or UID (format: <name|uid>[:<group|gid>])

--workdir , -w Working directory inside the container
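For instance, a few of these options combined (a sketch assuming a running container named web based on a typical Linux image):
docker exec -d web touch /tmp/ready          # run a command in the background
docker exec -it -e DEBUG=1 -w /tmp web sh    # interactive shell with an environment variable and working directory
docker exec -u root web whoami               # run the command as the root user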

• Running multiple containers


Docker Compose

 Docker Compose is a tool that helps us handle multiple containers at once efficiently. It is
used to manage several containers at the same time for the same application.
 It uses a YAML file which contains details about the services, networks, and volumes for setting
up the application. So, you can use Docker Compose to create separate containers, host
them, and get them to communicate with each other.
 Each container exposes a port for communicating with the other containers.

Compose Installation:
 Install the Compose plugin: if you have Docker Desktop installed, then you already have the Compose
plugin installed.
 Create a docker-compose.yaml file that defines the services (containers) that make up your
application so that they can be run together in an isolated environment. In this compose file, we
define all the configuration needed to build and run the services as Docker containers. There
are several steps to follow to use docker-compose.
1. Split your app into services
The first thing to do is to think about how you’re going to divide the components of your
application into different services (containers).
In a simple client-server web application, there could be three main layers (frontend, backend,
and the database), so we can split the app in that way. Likewise, you will have to identify the
services of your application.

2. Pull or build images


For some of your services, you may not need to build from a custom Dockerfile, and a public
image on Docker Hub will suffice.
For example, if you have a MySQL database in your application, you can pull the MySQL image from
the hub instead of building it. For the others, you will have to create a Dockerfile and build them.

3. Configure environment variables, declare dependencies


Most applications use environment variables for initialization and startup. Also, after we
divide the application into services, the services have dependencies on each other. So we need to
identify these things before we write the compose file.

4. Configure networking
Docker containers communicate with each other through an internal network that is created by
Compose (e.g. service_name:port). If you want to connect from your host machine, you will have to
expose the service on a host port.

5. Set up volumes
In most cases, we would not want our database contents to be lost each time the database service is
brought down. A simple way to persist our DB data is to mount a volume.

6. Build & Run


Now you are ready to create the compose file, build the images for your services, and
generate containers from those images.
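A minimal docker-compose.yaml illustrating the steps above (a sketch for a two-service app; the service names, build path, image tags, ports, and credentials are placeholders):

services:
  backend:
    build: ./backend            # built from a custom Dockerfile (step 2)
    ports:
      - "8080:8080"             # expose the service on a host port (step 4)
    environment:
      - DB_HOST=db              # services reach each other by service name (steps 3 and 4)
      - DB_PASSWORD=example
    depends_on:
      - db                      # declare the dependency (step 3)
  db:
    image: mysql:8.0            # pulled from Docker Hub instead of being built (step 2)
    environment:
      - MYSQL_ROOT_PASSWORD=example
    volumes:
      - db_data:/var/lib/mysql  # persist database contents (step 5)
volumes:
  db_data:

With this file in place, docker compose up -d --build builds the images and starts both containers on a shared network, and docker compose down stops and removes them (step 6).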

Docker Swarm
It is a technique to create and maintain a cluster of Docker Engines. The Docker Engines can be
hosted on different nodes, and these nodes, which may be in remote locations, form a cluster
when connected in Swarm mode.
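A minimal sketch of creating a swarm and running a replicated service on it (the nginx image, service name, and replica count are placeholders):
docker swarm init                      # turn the current engine into a swarm manager
docker swarm join-token worker         # prints the command that other nodes run to join the swarm
docker service create --name web --replicas 3 -p 80:80 nginx
docker service ls
docker node ls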
4.5. Custom images
Follow the below steps to create a Dockerfile, Image & Container.
Step 1: First you have to install Docker (see the installation steps in Section 4.3).
Step 2: Once installation is complete use the below command to check the version.
docker -v
Step 3: Now create a folder in which you will create the Dockerfile, and change the current
working directory to that folder.
mkdir images
cd images
Step 4.1: Now create a Dockerfile by using an editor. In this case, I have used the nano
editor.
nano Dockerfile
Step 4.2: After you open a Dockerfile, you have to write it as follows.
FROM: Specifies the base image to be used
MAINTAINER: Metadata identifying the owner of the image
RUN: Specifies the commands to be executed while building the image
ENTRYPOINT: Specifies the command that is executed when the container starts
EXPOSE: Specifies the port on which the container listens
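For example, a minimal Dockerfile using these instructions might look like this (the base image, maintainer address, package, and port are placeholders):
FROM ubuntu:22.04
MAINTAINER student@example.com
RUN apt-get update && apt-get install -y nginx
ENTRYPOINT ["nginx", "-g", "daemon off;"]
EXPOSE 80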
Step 4.3: Once you are done with that, just save the file.
Step 5: Build the Dockerfile using the below command (the trailing “.” tells Docker to use the
current directory as the build context).
docker build -t image_name .
Step 6: Once the above command has been executed the respective docker image will be
created. To check whether Docker Image is created or not, use the following command.
docker images
Step 7: Now to create a container based on this image, you have to run the following
command:
docker run -it -p host_port:container_port -d image_id
Where -it makes the container interactive, -p publishes (maps) the container port to a port on
the Docker host, and -d runs the container in the background (detached mode).
Step 8: Now you can check the created container by using the following command:
docker ps
Creating Repository
Step 1: Create Your Docker ID
To share images on Docker Hub you need a Docker ID. It gives you access to Docker
Hub repositories and allows you to search for images from the Docker community and
verified publishers.
Step 2: Create Your First Repository
To generate a repository:
Please Sign in to Docker Hub.
Click on Create a Repository option on the Docker Hub welcome page.
Name it <your-username>/my-testprivate-repo.
Set the visibility as Private
Click on Create option.
Step 3: Download and Install Docker Desktop
To build a container image and push it to Docker Hub:
Download and install Docker Desktop. If you are working on Linux, download Docker
Engine instead.
Then sign in to the Docker Desktop application with your Docker ID (as created in
Step 1).

• Creating a custom image


 Go to the command line on the machine where Docker is installed and log in to Docker Hub.
 Run the following commands:
$ docker login
$ docker run
$ docker ps

• Running a container from the custom image


The docker container create (or docker create) command creates a new container from the
specified image without starting it. When creating a container, the Docker daemon creates a
writeable container layer over the specified image and prepares it for running the specified
command.
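For example (the nginx image and the container name webserver are placeholders):
$ docker create --name webserver -p 8080:80 nginx
$ docker start webserver
$ docker ps
The first command only prepares the container; it starts running once docker start is issued.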
• Publishing the custom image
Pushing an Image
• $ docker container commit
• $ docker image tag
• $ docker image push
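Put together, a sketch of publishing a modified container as a custom image (the container name, image name, tag, and Docker Hub username are placeholders):
$ docker container commit webserver my-custom-nginx
$ docker image tag my-custom-nginx myuser/my-custom-nginx:v1
$ docker login
$ docker image push myuser/my-custom-nginx:v1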

4.6. Docker Networking


• Docker takes care of the networking aspects so that containers can communicate with other
containers and also with the Docker host.
• Docker networking is the communication passage through which all the isolated containers
communicate with each other in various situations to perform the required actions.

 A developer writes code that stipulates the application requirements or dependencies in an
easy-to-write Dockerfile, and this Dockerfile produces a Docker image. The dependencies required
for a particular application are present in this image.
 Docker containers are runtime instances of a Docker image. These images are uploaded to
Docker Hub (a repository for Docker images, similar to a Git repository), which contains public
and private repositories.
 From public repositories you can pull images, and you can also upload your own images
to Docker Hub.
 From Docker Hub, various teams such as Quality Assurance or Production teams pull that
image and prepare their own containers.
 These individual containers communicate with each other through a network to perform the
required actions, and this is nothing but Docker networking.
Container Network Model (CNM)

• The Container Network Model (CNM) standardizes the steps required to provide networking for
containers using multiple network drivers. CNM requires a distributed key-value store, such as
Consul, to store the network configuration.
• CNM is mainly built on five objects:
• Network Controller,
• Driver
• Network
• Endpoint
• Sandbox
• CNM Objects
• Network Controller: Provides the entry-point into Libnetwork that exposes simple APIs for
Docker Engine to allocate and manage networks. Since Libnetwork supports multiple inbuilt
and remote drivers, Network Controller enables users to attach a particular driver to a given
network.
• Driver: Owns the network and is responsible for managing it. Multiple drivers can
participate to satisfy various use cases and deployment scenarios.
• Network: Provides connectivity between a group of endpoints that belong to the same
network and isolates them from the rest. Whenever a network is created or updated, the
corresponding Driver is notified of the event.
• Endpoint: Provides connectivity for the services exposed by a container in a network to
the services provided by the other containers in the network. An endpoint represents a service
and not necessarily a particular container; an endpoint also has global scope within a
cluster.
• Sandbox: Created when a user requests the creation of an endpoint on a network. A Sandbox can
have multiple endpoints attached to different networks, and it represents a container’s network
configuration, such as IP address, MAC address, routes, and DNS settings.

Docker networking commands


1. docker network ls
• The command will output all the networks on the Docker Host.
2. docker network inspect networkname
• The command shows detailed information about the specified network
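For example, you can create a user-defined network and attach containers to it (the network name mynet and the nginx and alpine images are placeholders; containers on the same user-defined network can reach each other by name):
docker network create mynet
docker run -d --name web --network mynet nginx
docker run -it --rm --network mynet alpine ping -c 3 web
docker network inspect mynet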

• Accessing containers
 Obtain the container ID by running the following command: docker ps
 Access the Docker container by running the following command: docker exec -it
<container_id> bash

• Linking containers
Container Linking allows multiple containers to link with each other. It is a better
option than exposing ports.
Step 1 − Download the Jenkins image, if it is not already present, using the
docker pull command.
Step 2 − Once the image is available, run the container, but this time you can specify a
name for the container by using the --name option. This will be our source
container.
Step 3 − Next, it is time to launch the destination container, but this time, we will link it
with our source container. For our destination container, we will use the standard
Ubuntu image.
When you do a docker ps, you will see both the containers running.
Step 4 − Now, attach to the receiving container.
Then run the env command. You will notice new variables for linking with the
source container.
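A sketch of these steps as commands (the jenkins/jenkins image tag and the container names are placeholders; note that --link is a legacy feature, and user-defined networks are the recommended alternative today):
docker pull jenkins/jenkins
docker run -d --name jenkins-src jenkins/jenkins
docker run -it --name receiver --link jenkins-src:jenkins ubuntu /bin/bash
# inside the receiving container
env | grep JENKINS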
• Exposing container ports
 In Docker, the containers themselves can have applications running on ports.
 When you run a container, if you want to access the application in the container via
a port number, you need to map the port number of the container to the port number
of the Docker host.
 To understand what ports are exposed by the container, you should use the
Docker inspect command to inspect the image.
docker inspect Container/Image
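For example (the nginx image, which exposes port 80, is used as a placeholder):
docker run -d --name web -p 8080:80 nginx
docker inspect web
curl http://localhost:8080
Here -p 8080:80 maps port 8080 on the Docker host to port 80 inside the container, and docker inspect shows, among other details, the exposed and published ports.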

• Container Routing
In Docker, container routing determines how network traffic reaches a container and how containers
reach one another: containers on the same Docker network are routed to each other over that network,
while traffic from outside the host reaches a container through ports published on the Docker host.
