
Docker -- DevOps

Tuesday, May 23, 2023 2:30 PM

• Connect with me on LinkedIn for any queries/suggestions regarding the notes --> https://www.linkedin.com/in/mohammad-zayd-986210235/

• LECTURE 1

• Polyglot application/environment -- using different programming languages in a single server
• Docker is a set of platform as a service (PaaS) products that use OS-level virtualization to deliver software in
packages called containers-- platform as a Service -- like OS
• Docker Architecture
1. Server --> Docker server
2. Client --> Docker client
3. Registry --> storage/Directory --> File store

• Whenever you install docker -- you'll have to run docker server -- It will run
as a daemon/process
• For example:- in CentOS
• Daemon -- background running process --> long-running on the machine, it loads at OS boot --- the OS will take care of the daemon
• Docker client -- will talk to the docker server -- whatever commands you run will be sent to the docker server
• Whatever is running above is called docker
• Docker server is a daemon and docker client communicates with the docker
server
• What is registry?
• Docker/dotCloud introduced docker files -- from a docker file an image is built (comparable to a .iso)
• It will be stored in dotCloud's/Docker's directory -- it is publicly available to use
• We can also take images from the docker registry and use them
• Registry = public directory
• It is recommended to have docker client and server in a single machine
• But it is possible to store them in different servers
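• A quick way to see the client/server split in practice (a sketch; the remote host name and SSH access are assumptions):
docker version                                   # shows separate "Client" and "Server" sections
DOCKER_HOST=ssh://user@remote-host docker ps     # client on this machine, daemon on another server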

• Docker Terminology:
image -- made from docker file
• A Docker image is made from a docker file -- we can also create our own docker files -- and from those we can create docker images
• Docker image is always in read only

• That means we cannot edit docker images
• But we can recreate it
• Image can be executed
• Condition -- if we can't edit the file -- that means we're not allowed to write
• We can use readymade images from the docker registry(publicly available
directory) -- free of cost

• How can we use docker images?
We can take them from the docker registry -- download them and use them

CONTAINER
• It uses docker images(made from docker file or downloaded from registry)
• It will run as an application
• As soon as we run the docker image then a container will start running
• Running the docker image is called a process or a container


• Docker file --- is a set of instructions --- it has
application/configuration/library/dependencies -- when the application
runs -- then docker file is running
• When the container runs --- then all the configuration in docker file will run
as a process
• Docker image can run on any OS -- container is OS independent

• What are the benefits of docker?
Docker containers are standard, lightweight and secure

• Download docker using -- link in course
• How to check docker version?
docker --version ---> will show docker client information
• More detailed cmd -- docker version
• Docker was developed in go programming -- because it is very lightweight
prog. Lang.
• Git commit -- where the code is kept -- it has a version -- because we are
using community version
• Built: when your docker version was built

• Whatever command you run in the docker CLI --> it is sent to the docker server, executes there, and you get back the output coming from the docker server
• Docker CE -- open source -- CE = Community Edition
• Docker EE -- Enterprise Edition -- you'll have to buy a licence
• Cmd -- docker images or docker image ls -- gives docker server information -- like how many images are there

• There is a docker server --daemon without docker cli in it

• Docker container ps -- containers running


• Docker container ps -a --all containers

• Container: a package of software, because it contains the application / library / dependencies / package installation
• 3 tier architecture: server ,client , registry
• Questions
1. Is the docker run command used to pull the image?
Yes, it will pull the image from the docker registry -- docker run hello-world
2. How to list docker images locally?
sudo docker images or # sudo docker image ls
3. Steps to create docker images?
From a docker file --> an image is made from a docker file --> we can write our own docker files also -->
If we don't want to write docker files --> we can pull/download the images from the docker registry
4. OS flavours in linux?
debian --> ubuntu, mint, suse, kali, fedora, arch, alma, aws linux, solaris, rocky, centos, ubuntu, ATX, backtrack, puppet, oracle, etc.
5. AVOID using the above distros in industry/production -- because they are heavyweight
• Where docker is installed -- that is called docker host -- it could be any OS
• When we make docker files -- we have to specify our OS name -- also we have to mention our application configuration

• If we pull image from the docker registry -- they are also created from
docker file -- in those files also
OS name is mentioned in the first line of those docker files
• In the docker world , we have 2 more images --> especially for docker

1. BusyBOX image : used for statements , print or any message -- also for
program/application execution.
Eg:- docker run hello-world
2. Alpine image: used for making web servers with -- python, java, mysql, Prometheus, Nagios, etc.
• Both the images are lightweight and secure compared to the above
mentioned OS's


• We can create a container from docker images
• If we install nginx, Keycloak, RabbitMQ, mysql individually --> it will take a lot of time
• Instead if we use Docker image with nginx,mysql,etc in it --> then we will have
to only install once
• Now we will only have to run the image and it has all the applications,etc


• After getting the docker image --> what you will do?
• Run it
• If you made the docker files with apache webserver info and then image was
made in centos
• Then if you run the image in ubuntu -- apache server will be setup

• We have to register on the Docker registry: DockerHub


• When we run docker run -- cmnd-- it will assign a random name
• If you want to have a specific name:-
• Docker run --name (name you want) hello-world
• To check --> docker ps -a
• We will have to clean up the containers left behind by the docker run commands we ran -- they went into the exited state
• We want the exited docker containers to be removed automatically after the container runs
• Cmd --> docker run --rm --name myhellow hello-world

• Sudo docker run busybox echo "Hello from busybox…"


• When this command runs, it first looks locally for the image, then goes to the internet
• Busybox is an OS -- it has no task of its own -- so we have to give it the task we want it to run
• Docker run busybox --> you'll not get an output --> but the container ran
• You can check with docker ps -a
• Task -- delete the image after running
• Docker rmi (repository name)
• You cannot delete the images used by stopped containers
• To delete container -- rm
• To delete image -- rmi

• Eg:- if we run busybox 5 times
• Then there will be 5 used containers -- docker ps -a
• But you'll see a single image -- docker images
• Then if you want to remove the image -- you'll have to delete all the used busybox containers first -- after this you can remove the busybox image
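• A short sketch of that cleanup (the ancestor filter is one convenient way; container names/counts assumed):
docker ps -a -q --filter ancestor=busybox                 # list the exited busybox containers
docker rm $(docker ps -a -q --filter ancestor=busybox)    # delete them
docker rmi busybox                                        # now the image can be removed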


• It will create container and execute it with user defined name mycontainer

• This command will create a container and execute it with the user defined name
and then delete the container from the stopped container list

• Can we go inside the busybox container?
Yes we can ---> because the container, besides being a process, also carries an OS (filesystem)
• -it ---> stands for Interactive Terminal(IT) ----> by default shell is /bin/sh
• Docker run -it --name CONTAINER1 busybox
• After this try -- docker ps --in another terminal
• It will show the running container
• In the container --to check shell --> echo $SHELL -->to check about --> uname -a
• Docker start -- this command starts the container without printing any message
• It doesn't have the ability to start and print messages like docker run does

• Docker exec -it (running container name) -- it wont run


• Docker exec -it (running container name) /bin/sh -- it will work
• /bin/bash is too heavy for docker
• The file we touched before will be there in the busybox

• When you docker run --> then container starts --> when you exit container stops
• When you docker exec running container --> then you will go to IT --> when you
exit then the container won't stop
• How to stop then?
docker stop (container ID)
• Docker run --> creates a container and goes inside the container when you exit ,
the container will die
• Docker exec --> takes you inside the container and when you exit the container ,
the container doesn't die
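• A small sketch of the run-vs-exec difference described above (container names are mine):
docker run -it --name cont_run busybox        # exit here -> the container stops
docker run -dit --name cont_bg busybox        # started in the background
docker exec -it cont_bg /bin/sh               # exit here -> cont_bg keeps running
docker stop cont_bg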
• Docker images are read only and we can not edit it

List Concept
1. Docker image
--> Is it read only? Yes, we cannot edit an image
2. From the image a container is made --> a process + we can go inside the container and edit files if we want
Reason:-- when we create a container it adds one editable (writable) layer over the image

• Docker images are readable and executable -- and we can download ready made
images

We will make our own docker files:

• Before, we were pulling images with:
• docker image pull, docker pull <image>, docker run hello-world, docker run alpine
• (Steps --> it looks on the local machine, then looks on a site like DockerHub, downloads the image and then creates the container)
• Now we will create our own docker file
• Recipe:-
• Docker file --> docker image
• We should know what we have to write in the docker file - how to write
instruction
• Then we will create image from the docker file

• You will make a docker file with contents/instructions in it and then you will have to run commands to make the docker image
• Those commands are called build commands
• When the image is made
• Then we can use run cmnd to make the docker image to docker container
• Then our container is live

• Docker file syntax:

• To build file:
• Docker build -t
• -t : to tag the image -->to give name to the image
• Build: to create image from docker file

• Cd /
• mkdir Doc_dir/
• Cd Doc_dir/

• Ls -ltrh
• Vim Dockerfile

• Yum if you are not using debian


• We just want to print some message instead of installing packages -- possible? YES
• FROM -- should be in capitals
We have to specify the OS here
• we did the same before with busybox
• Our docker file is now created (a sketch is below)
• Now run --> docker build -t firstimage .
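• A minimal sketch of the kind of docker file described above (the exact RUN line is an assumption):
cd /Doc_dir
cat > Dockerfile <<'EOF'
FROM alpine
RUN echo "hello from my first docker file"
EOF
docker build -t firstimage .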

• Container is made in every step -- firstimage is the name of docker image and
above it is the image ID
• When we build from a docker file --- it makes docker image in multiple layers
• It keeps updating the layers frequently
• Circled in red is called the intermediate id --- created when alpine is pulled
• An id added one on top of another is called a layer -- after each id, further ids will keep getting added
• All these layers are read only
• Before running the steps , it reads and counts the total lines
• The commands in docker file is in read-only mode


• There was nothing in the docker file that keeps running -- so the container made from this image doesn't keep running
• How to kill a running docker container:
• Docker kill (container id)
• When you build, docker reads the docker file line by line
• If you run a docker file with an error --> then it will create a <none> image
• When building a

• Versioning concept:
• Tag will be latest, if you don’t specify any version
• To give version tag
• Docker build -t firstimage:v1 .

• Image id is same with different images and versions

1. Four layers <--- wrong --- correct --> one new layer, the other layers above are taken from cache
• Because we have already built using the commands above
2. No, the image id will change because we added a new command

• Ans) we will use &&(AND operator)


• If we build a new image using the same docker file above -- will image id change?
• Ans -- NO
• When making changes in images/docker container we can change the version to
show there is a change


• time docker run --rm alpine sh -c "apk update && apk add curl" -- (sh -c means: which commands you want to run inside the container)
• Output:
• OK: 12 MiB in 22 packages

real 0m5.166s
user 0m0.033s
sys 0m0.024s

• time docker run --rm debian sh -c "apt-get update && apt-get install curl"
• Output:
Need to get 2993 kB of archives.
After this operation, 6256 kB of additional disk space will be used.
Do you want to continue? [Y/n] Abort.

real 0m24.802s
user 0m0.044s
sys 0m0.037s

• Docker image pull alpine == docker pull alpine


It will look locally first; if not found it will pull/download from the registry (DockerHub)
• Docker images == docker image ls
To see the docker images
• docker container run alpine ls -l == docker run alpine ls -l
It is used to make container from the image and run ls -l in the container --
container will execute command and stop
• docker container run alpine echo “help from alpine”

• Docker container run alpine /bin/sh


• Docker ps == docker container ls
• Docker ps -a == docker container ls -a
• docker start <container_name> or container id
Will start the container but won't show its output
• Docker start --attach <container name or container id>
It will start the container and attach to it in interactive mode -- runs the container in the foreground
• Docker run -dit alpine sh
• -dit -- means detached + interactive + tty --- will start the container and execute it in interactive mode -- in the background -- won't take you inside the container CLI
• Comparison Alpine and Ubuntu/Debian size


• To see the full command when running docker ps or docker ps -a
• --> docker ps --no-trunc

• -d / -dit -- used to run the process in the background
• --rm -- when we use this, the container is removed automatically after it exits
• -p -- port mapping (host port : container port)
• sh -c -- runs a command string as a small script
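• The flags above, combined in two typical commands (a sketch; image choices are mine):
docker run -d -p 8080:80 nginx                    # -d: background, -p: host 8080 -> container 80
docker run --rm alpine sh -c "echo hi && date"    # --rm: container deleted after it exits; sh -c runs a small script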

• Repository or directory or folder same thing


• Now we want to push our image in the Docker hub
• Commands:
docker login
• Docker push == upload
• Docker pull == download
• Docker push ali --- will take the latest image of ali
• You will get "requested access denied" --> because it is trying to push into docker hub's own repository
• We want to push into our own repository
• Syntax to push into our repo:
• Docker tag <OldImageID:tag> <DockerHUBuserID> / <New ImageID:tag>
• docker tag ali:latest zayd123/ali:v1
• Docker push zayd123/ali:v1 --> will push zayd123/ali:v1 into our own repo
• If we make change in zayd123/ali:v1 ---> we can make new version
• Docker tag zayd123/ali:v1 zayd123/ali:v2
• What happens if?
docker tag zayd123/ali:v1 zayd123/ali:v1 --- data will get OVER WRITTEN!!

• What we do is make docker file ---> from that we make docker image ---> from
that it is made a docker container
• Now we will do the reverse container --> images
• From container to image --> we use commit command
• We learnt before that we can't edit a docker image --> but now we can go from a container back to an image and so, in effect, make changes

• Suppose we forgot to download git and we made an alpine os container --> use
apk add git <--- cmnd

• Docker commit: it will create a docker image from a docker container


• Syntax:
Docker commit <old container> <newimage>


• Suppose a junior deletes a container (my_cont) with everything ---> if you made a docker image of the container and pushed it to dockerhub, aws, google cloud, github, gitlab, etc. then you can retrieve that image and hence also the lost data.
• Docker rm my_cont
• Docker run -it --name mysecond_container myfirstimage_cont /bin/sh --> and we got our old data of my_cont
• Important in production
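• Putting that scenario together as commands (a sketch; the image/repo names follow the notes, the push step is indicative):
docker commit my_cont myfirstimage_cont                    # container -> image
docker tag myfirstimage_cont zayd123/myfirstimage_cont:v1
docker push zayd123/myfirstimage_cont:v1                   # keep a copy in DockerHub
# later, after my_cont has been removed by mistake:
docker run -it --name mysecond_container myfirstimage_cont /bin/sh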

• Question
• I have taken an Ubuntu image , then created a container from Ubuntu image and
after that I ran a Ubuntu container then exited it

• Can I run this command? --> docker commit <Ubuntu Container ID>
<Dockerimage>
• Ans) yes, if the container is running simultaneously you can make a docker image
from that

• Will the image id be same? Ubuntu image and the image we made by commit
command
• No, it will be different

• Use cases:
• We can rename a image by -- docker tag oldname newname
• Will the imageID change?
NO, it will be same
• If we change its name again? Will image ID change now?
No

• If we use a docker file named Dockerfile


• If we build a image from that abc:v1

• We build another image xyz:v3
• Will image ID be same?
YES

You made a docker image from a docker file


Then changed docker file and made a new image
• Will imageID be same for both the images?
NO

• Docker container made from an alpine image --> now we run docker commit and
get a new image --> both images will have different imageID's -- docker commit =
new imageID generated

• Docker tag command --> used to rename the image or give name of the image

• From the alpine image we made 2 containers
1) Cont1
2) Cont2
• And we ran the docker commit command
• Now we'll have two images
Image 1 and image 2
• Will the imageID of both the images be the same?
NO

• If we build an image without giving a tag ---> you'll get a dangling image
• Make changes in the docker file --> build an image without giving a tag --> you'll get a dangling image with a different image ID

• If repo and tag name is none but image ID is present in your docker image then it
is called --> dangling image

• How to filter the dangling images?


docker images --filter "dangling=true"
• Docker ps -a -q --> to filter out only the container ID's
• How to delete all unused/stopped containers
Docker rm $(docker ps -a -q)
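• In the same spirit, dangling images can be cleaned up too (a sketch; docker image prune is the built-in equivalent and asks for confirmation):
docker rmi $(docker images --filter "dangling=true" -q)
docker image prune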

• There are 3 components --> docker client, docker host and Registry

• To stop the docker daemon --> systemctl stop docker.socket


• After this commands will not work

• Docker.sock file connects the client to the server


• If docker gets corrupted then we kill and remove the docker.sock file

Docker port mapping

• In our server, apache and mysql are running in container 1 and container 2 respectively
• Now we want to change the port because the security person has told us to change it for security purposes
• The docker host and the containers have their own ports
• Container 1 and 2 have ports 80 and 3306 respectively
• And the host ports that the user will access are 8080 and 6603 respectively

• We map port with -p flag --> on the left is the port of docker host and on the
right it is port of docker container
• Eg:-docker run -it -p 9000:80 --name mywebserver nginx:latest ---> by default it
starts in attach mode

• docker start runs the container in the background (detached) by default -- use the --attach flag if you want it attached (in the foreground)
• Docker start --attach mywebserver --> will display logs and messages in the terminal
>After the docker start command --> when you exit --> the container will keep running --> opposite for docker run
• Docker start webserver --> it will not display logs and messages in the terminal --> it will run in the background (detached)

• Check if the port is open --> netstat -tulnp | grep -i 9000 ---> docker-proxy port
• Check if nginx is running --> curl http://localhost:9000
• When we start nginx in attached mode, exiting with ctrl+c from the terminal will stop the service

• When we run docker run --name webnew nginx --> we didn't do port mapping with -p --> by default it will only use port 80 inside the container (nothing is published on the host)

USE CASES
• If we run --> docker run -it -p 80:80 --name web1 nginx
• Will it run?
• No, it will go into exit status because of a port conflict

• If we run --> docker run -it -p 80:80 --name web2 nginx

• Will it run?
• No, it will go into exit status because of a port conflict

• Now both web1 and web2 will be visible when you run docker ps -a

• Now if we run -->docker run -it -p 8080:80 --name web1 nginx

• Will it run?
• No, it will not run because name is already given --> name conflict

• Now if we run -->docker run -it -p 8080:80 --name web3 nginx


• Will it run?
• Yes , it will run

• Now if we run -->docker run -it -p 8001:80 --name web4 nginx


• Will it run?
• Yes , it will run

• If our host machine has enough resources (cpu, memory, file limits), then we can run multiple containers and no conflict of container ports will happen

• Homework :
• Read about socket
• Read about ip tables/ networking as per linux (IP address / subnet )
• Docker push documents --> refer to all the notes

• You can map multiple host ports to the same container port, but the opposite is impossible (unless they have different IP addresses). Just think about it: how can the postman deliver things to two houses with the same house number?
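• The same rule as commands (a sketch; names are mine):
docker run -d -p 8080:80 --name webA nginx    # host 8080 -> container 80
docker run -d -p 8081:80 --name webB nginx    # same container port, different host port: fine
docker run -d -p 8080:80 --name webC nginx    # fails: host port 8080 is already taken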

Deploy Application
• We will now deploy application from scratch -- we will deploy the code we got
from developer
• Load on docker engine is less compared to VirtualMachine

• Each container is running an application


• Docker is lightweight, secure and easy to use
• In some companies both docker containerization and virtualization is used -- they
have paid service so they want to utilize it (VM + Docker)
• Example:


• Docker files, images and containers


• Docker files are used to build docker images --> a docker file is plain text, a series of instructions telling docker what OS to use and what to do with the application source code
• Docker image: a docker image is a static artifact that is built from a dockerfile and is tagged and published to a registry like DockerHub
• Docker container: it is a running instance of a docker image

Conclusion:
• Docker images combine source code with dependencies required to run
applications.
• Docker images are lightweight , portable and we can easily share with
developers.

• When you join with a docker profile, you will be assigned to a developer team
• The developer will tell you to put the code in production so that users can use it
• The developer tells us (DevOps) to deploy the code on the server
• What you will say to the developer?
• Me:How did you run the code?
• Developer will share with you commands, packages and how did he run it --also
he will tell which port to open after getting approval.
• All requirements will come after approval, if no approval is there then you'll have
to raise change management ticket
• Your task is to run it on production server or UAT server

• Step1: take the code from the developer and run the application on your
machine as locally
• --> install node.js ,dependencies , yum install nodejs ---node version 5 -- take
from developer or download from website
• --> take project from dev.

• Make ---> /docker/lect-10/package.json
• Mkdir /docker/lect-10/src/
• Inside the src directory there will be a JavaScript file, index.js
• Take .rar file from lecture 10 and extract it using unar tool and get index.js and
package.json files -- you'll get this from developer

• For testing --> mkdir /docker/lect-10/testing


• Cp -vrf package.json src/ testing/ --- this will copy the file package.json and
directory src to testing directory

• We have kept package.json and index.js in testing folder instead of lect-10


• We have to check which packages will rpm/npm install and what packages will
express download
• Where we will get these instructions -- from the developer
• Zero step before deploy
• best way to proceed if we want to do some work
• We have to make habit of checking everything in system
• Refer nodejs installation file and run commands
• Then you'll get two new entries in the testing directory --> package-lock.json and node_modules
• Install npm package
• Then --> npm start
• After that you have to open port --> network team will open the port
• Manager will ask you -- did you perform zero step before deploy
• Yes , the thing we did in the backend is called zero step

• Now we have to dockerize/containerize this application
• After that anyone can use it

• Delete testing folder

• We want structure like this /docker/lect-10/..


• In /lect-10/ -- we make a dockerfile
• ALWAYS USE OFFICIAL IMAGE!!
• Vi dockerfile


• ENV -- means we made port 8000 an environment variable
• ARG -- argument
• PORT is a variable with value 8000
• WORKDIR -- we want to make (and switch into) a directory with the name app
• We want to copy the src folder into the container --- into the directory app
• The package.json file will be copied to the current directory
• EXPOSE -- the port we set in the environment variable will be exposed -- the container will listen on it
• EXPOSE is important/compulsory to write if we want the container to listen on the port we set in the environment variable
• RUN --- it is used for the installation of packages and for updating
• CMD -- it is used to run a command or to keep your container in a running state -- eg:- systemctl start …
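• A sketch of the dockerfile those instructions describe (the base image and npm commands are assumptions, not the exact file from the lecture):
cat > dockerfile <<'EOF'
FROM node
ARG PORT=8000
ENV PORT=$PORT
WORKDIR /app
COPY package.json .
COPY src src
RUN npm install
EXPOSE $PORT
CMD ["npm", "start"]
EOF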

• New thing in src/index.js


• We have given 800 value to PORT variable
• And port has value 4000

• Now --> docker build -t my_node_app .


• It is heavyweight -- will take time to build depending on your internet

• Size difference Between alpine and nodejs image
• Docker run --name my_node_app1 -p 8000:8000 -d my_node_app
• Docker exec -it my_node_app1 /bin/sh ---> we will enter into the running
container
• How to exit without stopping the container? --ctrl + p + q ---works even when
you run a container with docker run command
• my_port=5000
• docker run --name my_node_app2 -p 9000:$my_port -d -e PORT=$my_port
my_node_app
• Now, we will access the container from the left side , i.e host port using
command:
• Curl localhost:9000


• When you pass a command like this -- it will overwrite the value that is given in the dockerfile -- in the dockerfile it is written PORT=8000
• But we overwrote it and made the port 5000
• You can't keep your docker image name in capital letters

• Docker run -it -p 90:80 --name mynginx nginx


• This command will run nginx in the terminal
• Then run --> ctrl + p + q ----> this will send the container process in the
background and it will not stop
• When you run this again --> Docker run -it -p 90:80 --name mynginx nginx
• You will get a conflict error -- but you will see it in the stopped containers
• Useful command --> cat > index.html --> it takes what you paste into the terminal and redirects it into the file (finish with ctrl+d)

• Difference between exec and attach?


• Exec is used to get interactive console of the container which is already running
• Attach is used to see the console in the foreground when it is run -- by default
with docker run it will run in attached mode
• docker logs <container name> --> will show the logs
• Zero step ---> for this make a testing folder -- test it -- then use it

• Curl localhost:8080 ---> run this in the container
• Curl localhost:49160 --> run this outside of the container
• Practical phase 2 --> q2


• NOTE: Host '0.0.0.0' --> means the app listens on all interfaces, so anyone can access it


• Curl -i --> this includes the protocol headers in the output

• docker commit -p -a "zayd <zaydansari786@gmail.com>" -m "Changes on index.html file" b6579af02eaa zayd123/myboximage

• -p will pause the running container while committing


• When we inspect busybox container

• Docker inspect <container name> --> use this cmnd on running container

• Then run it
• Then docker inspect <container name/id>


• We have changed CMD


• Couldn't get ip after running docker inspect cmd-- troubleshoot later

• Label -- author
• WORKDIR -- it will go to the directory
• Cmd -- run the command in that directory --current directory
• docker run ubuntu:18.04 /bin/sh -c "while true; do echo hello zayd; sleep 1;
done"
• This will echo "hello zayd" every second in an ubuntu:18.04 container

• Docker container prune -- will delete all stopped containers -- BE CAREFUL when running this in production!
• docker rm -v $(docker ps -aq -f status=exited)
• Docker pull --all-tags <image_name> ---> this command will pull all the versions
of the docker image from the docker registry/dockerHUB/repository
• When you run a container with the Jenkins server image -- you'll be prompted for the initial admin password


• Go inside the container and go to this file


/var/jenkins_home/secrets/initialAdminPassword -- cat the password
• Jenkins is successfully running
• We have to give this link to developer --> http://localhost:7000/

Lecture 13

• MAKE DATA PERSISTENT


• If we make a container and then someone deletes the container -- we'll lose data
inside the container
• We are losing data -- to fix this we have to make the data persistent
• If the container gets stopped, killed, removed or anything --- there should be no data loss, since data in a production environment is important
• So, here come docker volumes --- we have to learn docker volumes so that we can store data permanently for a container on the base machine
• Data can be stored in 3 ways:
• 2 ways are very important and 1 way is not that important
• So ,we'll now make docker volume
• Docker storage types:
>Volumes(imp) -- /var/lib/docker --base machine
>Bind Mount(imp)
>tmpfs (not imp)
• We have our container --- we have a docker area, filesystem , ram inside our
base machine
1. Volume: it is also called docker volume, data will be stored via the docker area on the base machine.
2. Bind mount: it is also called docker bind, container data will be stored into the base machine's filesystem (vfs, zfs, overlay, overlay2, etc.)
3. tmpfs: it is stored in RAM; when you reboot your machine your data will be wiped out.
• If we make the image out of the container then data is stored in the image from
the container --- this is different concept-- this is called containerization -- data is
stored indirectly


• Docker area is the home directory of docker


• Docker home directory is the docker data
• Default Path --> /var/lib/docker
• Good practice is to do stuff in home machine before doing in production
• We can see volumes directory in this path or with this command --> docker
volume ls
• Like we have container id, image id we also have volume id --> volume name

• With -g (in the docker.service file) we can set the docker area path -- and we can also find out the current path there
• Where are our docker images path? ---

• Docker images path in my machine


/var/lib/docker/image/overlay2/imagedb/metadata/sha256

• Bind mount is like we mount our partitions to directory in linux

Lecture 14

• In an IT company, code is developed that we have to containerize/dockerize/run on kubernetes
• You have to create docker files -- this is why they have hired you -- without you they could just download the image :)
• Docker images have a policy --- you'll have to keep them in the company's private registry
• When you have pulled docker images --- then you'll have to run containers with the image
• You'll need to check Container and host ports
• Check env.variable
• Then run docker
• Then you need to make data persistent --- using the three types


• -v -- for docker volume -- data will be stored in /my_storage -- this will be created
in the container --not in the base machine
• How can we see the volume id of the container we made?
we can see using docker inspect command --- in Mounts -- source
• Inside docker inspect container1

• You can see the mapping -- volume id --> /my_storage


• Whatever data you put in /my_storage in the container-- it will be reflected in
/var/lib/docker/volumes/30e086d304a453e264dfd1270ce06f90adcdba7b95e9f0
dc22a5f9f1ba542edb/_data/

• Using docker inspect you got source(docker area) and destination path (path in
container) -- also get volume id
• So, you can access data through container and also from base machine

• Docker volume --help


• If you want to remove volume id then first you'll have to remove the container

• BIND MOUNT --- it is connected with our base machines file system


• exit
• -v -- the left side of the colon is the path on the base machine and the right side is the path in the container
• Both these paths are synced
• Give the developer the path and he'll write whatever he wants and it will be reflected in the container
• If we give -v with a colon -- that means bind mount
• And if we give -v without a colon -- that means docker volume

• If we remove the container, the data will still be in the docker area or in the bind mount path on the base machine
• Boom!, we have made data persistent
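• The -v (colon) bind-mount form described above, as a command (a sketch; the paths follow the --mount example below):
docker run -it --name bindtest -v /home/aadmin/my_data:/opt/data alpine /bin/sh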
• Bind -- method 2
• Docker run -it --mount type=bind,source=/home/aadmin/my_data/:/opt/data
alpine /bin/sh

• Tmpfs --
• First way:

• Second way:
• Docker run -d -it --name tmptest2 --tmpfs /app alpine /bin/sh

• Both do the same thing
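• The first way (from the screenshot) is presumably the --mount form; a sketch:
docker run -d -it --name tmptest1 --mount type=tmpfs,destination=/app alpine /bin/sh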


• With docker inspect

• Nodejs app with persistent storage


• Package.json file and src directory is given by the developer
• We have created dockerfile , with COPY src src --> this will copy the src folder in
the container
• If we make an image from the container then the data, i.e. the src directory and package.json, goes into the image/container
• If container dies then the src folder in the container will be gone along with
whatever changes we made in container
• So, we will use bind mount to make the data persistent

• What we are gonna do is keep src folder and package.json in docker area

Lecture 15

• We know node image is very heavy


• Now use different variants -- to make it lighter
• Docker pull node:15
• Docker pull node:15-slim
• Docker pull node:15-alpine

• All are official images


• No issue will be there if you use alpine version to do your work
• node:15 --> ~936 MB image size
• Experienced devops engineers chooses light weight image
• We know -- two types of images -- alpine, busybox
• As a devops/docker admin we also need to manage image size well, not just make containers
• If you don't manage size properly -- you'll be told you are eating the company's storage

Changing dockers home directory/ default path -- /var/lib/docker


• There is a key.json file in /etc/docker/
• To change home directory of docker -- go to /etc/docker/ -- make file
daemon.json
• Mkdir /home/docker
• Vi daemon.json

• :wq
• Now we need to stop docker service --> systemctl stop docker --- also ---
systemctl stop docker.socket --also -- systemctl stop docker.service
• All our data is in /var/lib/docker/ -- so we will copy all the data to the new home
directory
• Cp -axT /var/lib/docker/ /home/docker/
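• A sketch of the daemon.json created above ("data-root" is the daemon.json equivalent of the -g flag; the path follows the notes):
cat > /etc/docker/daemon.json <<'EOF'
{
  "data-root": "/home/docker"
}
EOF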

• we have to open the file we saw in the output of systemctl status docker.service


• Vi /usr/lib/systemd/system/docker.service
• We have to add -g here and give the path /home/docker


• Now we will start docker.service -- you'll get an error
• We will need to reload using --- systemctl daemon-reload
• Then we can start docker.service and then docker.socket without getting any error
• Then try to run docker commands normally
• Image ID path : /home/docker/image/overlay2/imagedb/content/sha256
• Our path is set but only problem is that it is not shown when running systemctl
status docker.service
• After accessing some content it will start showing the path
• Suppose the manager says we want to change the home directory to some NAS, SAN or somewhere else
• Then you'll have to follow the above procedure

Lecture 16
• We are going to learn multi staging
• suppose we want to make docker file for production,uat,dev environment then
we will have to make different docker files for each environment separately
because every env. requires different dependencies
• Here comes multi-staging to solve this problem
• We will create docker file along with multi staging
• Copy docker file and then make some changes


• AS prod -- is an alias (the stage name)
• Why are we doing multi-staging? -- because we have different environments dev
and prod
• So since there are two environment -- there will come two FROM
• Npm run start and npm start command both are same
• npm start is short form of npm run start
• Last line -- if you don't give a target it will take the last stage, i.e. dev
• FROM prod as dev --- will replicate (inherit) from prod into dev
• After saving -- run npm install where the dockerfile is kept
• You'll see new directory --node_modules
• Now , we will build image
• Before multi stages docker image creation: docker build -t my-node-app:prod .
• After Multi stages docker image creation:
• docker build -t my-node-app:prod --target=prod .
• Commands for the dev environment will not run, only prod commands will run
• docker build -t my-node-app:dev --target=dev .
• It has inherited prod to run dev's commands
• When you see both the images --- prod and dev images has different sizes
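• A sketch of the multi-stage dockerfile being described (base image, npm flags and the dev stage's extra step are assumptions):
cat > Dockerfile <<'EOF'
FROM node AS prod
WORKDIR /app
COPY package.json .
RUN npm install --production      # devDependencies such as nodemon are skipped
COPY src src
EXPOSE 8000
CMD ["npm", "start"]

FROM prod AS dev
RUN npm install                   # full install -- nodemon present only in this stage
EOF
docker build -t my-node-app:prod --target=prod .
docker build -t my-node-app:dev --target=dev .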

• In node_modules there is directory in it -- nodemon/
• It is not run in production -- this directory is used for monitoring
• Mostly debugging is not ON in production -- as it will increase traffic which will
then increase latency
• The prod image should not install the nodemon dependency -- it is a dev dependency, so a production install skips it automatically
• Not recommended to remove it manually -- because you might get held accountable for it
• nodemon monitoring is not there in the prod image -- this is why its size is smaller
• No need for nodemon in prod -- and -- in dev there is nodemon
• Why are you doing multi- staging, if you can just make two images?
• Because when we do multi-staging it will ignore the nodemon in production side,
i.e monitoring is not enabled in production side

• In industry you will rarely make docker files --- but mostly you will change
port ,…etc
• Now will run containers
• docker run --name my_prod -p 8000:8000 -d -v /lect-16/src:/app/src my-node-
app:prod
• Also we can do -v $PWD/src:/app/src -- $PWD is replaced with the current directory we are sitting in, where we built the image
• docker run --name my_dev -p 8001:8000 -d -v /lect-16/src:/app/src my-node-
app:dev

• Docker exec and --- you will see nodemon is there in dev and not there in prod

Lecture 17 (IMP) -- if you understand docker compose then you know 70-80%
docker
• Docker compose
• We will have to install docker compose
• MY_ENV=prod -- assign a variable
• Echo $MY_ENV -- will fetch the variable
• Docker build -t my_node_app:$MY_ENV --target $MY_ENV .
• MY_PORT=8001
• -e --- will export/pass the variable into the container
• Path: /lect-17/src
• Docker run -d --name my_prod -p 8000:$MY_PORT -v /lect-17/src:/app/src
my_node_app:$MY_ENV
• MY_ENV=dev
• Docker build -t my_node_app:$MY_ENV --target $MY_ENV .
• Docker run -d --name my_dev -p 8010:$MY_PORT -v /lect-17/src:/app/src
my_node_app:$MY_ENV
• Now, see how much effort that requires
• Docker files and docker compose do the same job, but docker compose is more advanced
• Docker compose is used to bind many applications together and manage them -- we deploy multiple applications with one docker compose file
• A docker compose file is built as a yaml script
• Mkdir /docker-compose/
• Cat dockerfile

• tree
.
└── src
    └── index.js
• We will create yml file -- install notepad ++


• Write in notepad++

• You can add multiple apps


• Copy and paste in docker-compose.yml
• Make hidden file -- .env
• And write in it
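• A sketch of the compose file and .env being written here (the service name follows the notes; the exact keys and values are assumptions):
cat > docker-compose.yml <<'EOF'
services:
  my_app:
    image: my_node_app:${MY_ENV}
    ports:
      - "${MY_PORT}:8000"
    volumes:
      - ./src:/app/src
EOF

cat > .env <<'EOF'
MY_ENV=prod
MY_PORT=8000
EOF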

• To run docker compose -- run --> docker-compose up --- in the directory where the docker-compose.yml file is
• If we don't give -d it will block the terminal
• Wherever the docker-compose.yml file is kept, there you can run --- docker compose ps
• And you can run docker ps from anywhere

• What is this NAME? we didn't define the name like this


• Docker-compose -- this is the directory
• My_app -- this is the name of the service


• When we stop the docker compose stack which is running in the terminal -- with --> ctrl+c
• This will stop the containers of the docker compose file
• But you will still see them after you run --> docker compose ps -a
>docker compose ps
• To remove this entry
• Docker compose down
• To change the compose project/directory name -- we can move the files out of docker-compose
• Then docker compose down from the docker-compose directory and docker compose up from the new directory we want

• Successfully changed the name
• Note: I kept directory name ABD -- but in the name it is abd

• Using docker compose we can make image and also run it


• For docker image we need dockerfile -- in the start

• take dockerfile with multi-staging and use docker compose to make image and
run it as container
• Make changes in yaml script

• The . in context is the same as the . we put at the end of the docker build command


• Delete my-node-app images of dev and prod we made before
• Docker rmi $(docker images -q)
• We have the dockerfile from lect-16 along with the docker compose yaml file in the same directory
• We want the multi-staging dockerfile


• Make the following changes above


• And then docker compose up
• Then new image will be created and a container will be up and running
• It will take the MY_ENV and MY_PORT from the terminal you ran docker
compose up command
• Where will the name of the image be taken?
• It is written in docker compose yml file --> as my_node_app
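• A sketch of the compose file after those changes, now building from the multi-stage dockerfile (keys and values assumed):
cat > docker-compose.yml <<'EOF'
services:
  my_app:
    image: my_node_app:${MY_ENV}
    build:
      context: .
      target: ${MY_ENV}
    ports:
      - "${MY_PORT}:8000"
    volumes:
      - ./src:/app/src
EOF
MY_ENV=prod MY_PORT=8000 docker compose up -d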

Lecture 18
• DATA CONTAINER

• Suppose
• Container1 --> data1
• Cotainer2 --> data2
• We learnt before --> the same directory gets created, but by default we can't access one container's existing data from another container
• Docker create -v /dbdata --name dbstore training/postgres /bin/true
• This will create dbdata name directory
• We could take alpine -- but there is a problem with it
• It won't stay in a running state, it goes to exit status
• So we have taken training/postgres -- we could also take nginx or any other image
• What is /bin/true -- it simply exits successfully; the container only exists to hold the volume
• After running this command -- you will not see the container when running
docker ps
• You can see by docker ps -a -- with status -- Created -- because we ran /bin/true
command
• Docker run -d --volumes-from dbstore --name db1 training/postgres
• The /dbdata directory is the one we created in dbstore
• When we want to attach the container with the directory then we use --> --
volumes-from
• We have made a container with the name db1
• We can see with docker ps -- you'll find db1 there

• Docker exec -it db1 /bin/sh
• Exit with ctrl+p+q

• We have created /dbdata directory in db1


• We want to mount /dbdata directory in db2
• docker run -d --volumes-from dbstore --name db2 training/postgres
• docker exec -it db2 /bin/sh

• Now both db1 and db2 /dbdata directory is synced

• docker inspect db2

• Docker inspect db1

Lecture 19 -- practical phase 3


• restart ---> in docker compose is very good as it will restart the container in case the container stops for any reason --> important
• Wordpress is a webserver -- wordpress will only work/get installed once a mysql database is connected to it and the database is created
• When the db is created, wordpress connects to it
• The db runs first, then wordpress
• Sometimes the second one runs first -- so we have written "depends_on" -- this makes wordpress run after the db
• At the bottom we have written volumes -- it means we are creating docker volumes, not docker bind mounts
• If we don't give this at the bottom -- by default it will take a docker bind mount
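• A sketch of the wordpress + db compose file being described (image tags, passwords and service names are assumptions):
cat > docker-compose.yml <<'EOF'
services:
  app1-db:
    image: mysql:5.7
    restart: always
    environment:
      MYSQL_ROOT_PASSWORD: rootpass
      MYSQL_DATABASE: wordpress
      MYSQL_USER: wordpress
      MYSQL_PASSWORD: wordpress
    volumes:
      - db_data:/var/lib/mysql
  app1-wordpress:
    image: wordpress:latest
    restart: always
    depends_on:
      - app1-db
    ports:
      - "8040:80"
    environment:
      WORDPRESS_DB_HOST: app1-db
      WORDPRESS_DB_USER: wordpress
      WORDPRESS_DB_PASSWORD: wordpress
      WORDPRESS_DB_NAME: wordpress
volumes:
  db_data:
EOF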
• Now check using localhost:8040
• Go to app1-db.. Container
• Start it
• Exec it
• Then run --> mysql -u wordpress -p
• Use wordpress;
• Show tables;
• Select * from wp_users;

Lecture 19
• In dockerfiles
• RUN --> it just executes the linux command -- for installation(apt,yum,dnf)
• Eg:- RUN apt-get install git
• Eg:- RUN python script.py -- but it is recommended to use CMD and
ENTRYPOINT -- to run script or to keep the container in running condition

• Docker run --> image to container

Q) Is the RUN command replaceable with any other command?


• NO, for installing package we will have to write it as a keyword

• METHOD-1
• RUN apt-get update
• RUN apt-get install -y curl
• Two intermediate layers
• RUN apt-get install httpd curl git update -y

• METHOD-2
• RUN apt-get update && apt-get install -y curl
• One intermediate layer
• RUN apt-get install git && apt-get install httpd && apt-get install curl
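• METHOD-2 as it would look in a dockerfile (a sketch; base image and packages are assumptions):
cat > Dockerfile <<'EOF'
FROM ubuntu
RUN apt-get update && apt-get install -y curl git    # one RUN -> one intermediate layer
EOF
docker build -t onelayer .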

• Which is recommended and why?
• Both do the same thing, but I will use the second method because it reduces intermediate layers

• Method 1 is best, if we want to debug each individual command --

• ENV --> it is used to define a variable, and we can overwrite the variable using the -e flag
• Docker run -e ---> this will overwrite it

• Unset -- meaning the value of ADMIN_USER is now empty/nothing -- we use this if we want to clear the variable or define it as something else
• Docker build -t test .
• docker run --rm test sh -c 'echo $ADMIN_USER' --- it will print mark --- then
unset it
• Docker ps or ps -a -- you wont see test there because of --rm
• If we run without --rm then you can see the container in docker ps -a
• Now, if you start it -- it will run but not show up in docker ps --- because there isn't any command that keeps running continuously
• Export and expose--port --- both are same


• Now, when you run --> docker run --rm test sh -c 'echo $ADMIN_USER' after building (previously it printed mark, then we unset it)
• You won't see any output here -- because we have written it in a single layer
• which unsets it, so you won't see the output
• Whereas in the previous dockerfile we wrote it in different layers --> which gives the echo command output, and then the next line unsets it
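• A reconstruction of the two dockerfiles being compared (a sketch based on the Docker docs ENV example; values assumed):
cat > Dockerfile.env <<'EOF'
FROM alpine
ENV ADMIN_USER="mark"
RUN echo $ADMIN_USER > /tmp/user
RUN unset ADMIN_USER               # separate layer: the ENV value still persists at runtime
EOF

cat > Dockerfile.oneline <<'EOF'
FROM alpine
RUN export ADMIN_USER="mark" && echo $ADMIN_USER > /tmp/user && unset ADMIN_USER
EOF

docker build -t test -f Dockerfile.env .
docker run --rm test sh -c 'echo $ADMIN_USER'      # prints mark

docker build -t test -f Dockerfile.oneline .
docker run --rm test sh -c 'echo $ADMIN_USER'      # prints nothing -- the variable never left that one RUN layer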

• WHEN YOU ARE WRITING IN THE SAME LINE -- SOMETIMES DATA CAN GET OVERWRITTEN

• If we want to create a directory in the container without going into the container --> docker exec -d <container_name> mkdir /tmp/test
• If we want to run top command in the container without entering into it -->
docker top <container_name>

• Docker run -p 8000:80 --restart always nginx ---> --restart always <--- it will restart the container in case the container stops for any reason
• After running this command -- nginx server will be setup
• Now when you exit the container with ctrl+c --- it will restart it because of the flag --- --restart always
• --restart=on-failure:3 <--- It will try to restart it 3 times -- if it doesn't restart
then it will give up and exit :/
• If you don’t give --restart always flag -- it will not restart it

Vi dockerfile

• With ARG we have defined a variable
• def_value will be stored in var_name
• The same thing we did with foo=other
• Now build the image and run the container --> run env inside the container and you will see --> bar=$(var_name) --> we should get the value instead of $(var_name) --> try to fix this


• Done --> I fixed it by removing the brackets from $(var_name)
• A variable we define with ARG is scoped to the docker file/build only
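• A sketch of the ARG/ENV dockerfile described above (names taken from the notes; everything else assumed):
cat > Dockerfile <<'EOF'
FROM alpine
ARG var_name=def_value
ARG foo=other
ENV bar=$var_name          # $var_name or ${var_name} -- $(var_name) is not substituted
EOF
docker build -t argtest .
docker run --rm argtest env | grep bar     # bar=def_value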

• docker run --rm --name test6 my_cmd:v1 ---> this will block your container for 5
seconds
• docker run --rm --name test7 my_cmd:v1 sleep 10 --> this will make it sleep for
10 seconds --> this will overwrite what is written in dockerfile,i.e sleep 5
• docker run --rm --name test7 my_cmd:v1 top --> this will show live running
processes in container
• Also we can write like this in the dockerfile

• We will get an error because we didn't give any value (sleep needs an operand)
• And we can't overwrite the ENTRYPOINT itself when running the docker run command -- only the CMD part gets replaced


• Now, value of entrypoint and cmd will be combined and it will be sleep for 4
seconds
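• A sketch of that ENTRYPOINT + CMD combination (the tag is mine; values follow the notes):
cat > Dockerfile <<'EOF'
FROM alpine
ENTRYPOINT ["sleep"]
CMD ["4"]
EOF
docker build -t my_cmd:v2 .
docker run --rm my_cmd:v2        # ENTRYPOINT + CMD combine -> sleep 4
docker run --rm my_cmd:v2 10     # only CMD is replaced -> sleep 10; ENTRYPOINT stays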

---------------------------------------------------------------------------------------------------------

• Every second it will keep running echo "my daemonized container" in the background
• If we remove the -d flag then it will do the same thing in our terminal (foreground)

Lecture 21
• AWS --> ECR --- registry of aws -- elastic container registry -- where we can store
repository/directory/folder
• There are many docker registry --> in public cloud


• Developer develops code in his localhost ---> but suppose we did changes --> so
we have to keep older version somewhere --kind of like backup --> incase we
want to revert

• In banking environment they don’t store in public cloud --> if they do then they
keep it private by paying cloud providers money
• If you want to use AWS ECR then you'll need to install AWS cli
• Aws cli --> will push and pull from ecr
• when you push image in docker hub -- it will automatically make a repository
• But in aws , you have to make your own repository manually

• Go to repository --> private repository


• AWS testing is done in N. Virginia --> it is cheap so we will select this ---> in case we get charged
• To push and pull in docker Hub
• Docker push <docker_hub_username>/imagename
• DockerHUB --> zayd123/ubuntu

• In aws---> <url given by aws>/repository name


554383714321.dkr.ecr.us-east-1.amazonaws.com/first_project

• Tag immutability --> if enabled-->it will not overwrite already available image in
repo
• Image scan --> paid ---> it will scan for vulnerabilities ---> the security team will give the green signal if the image is safe to install

• Now we will create IAM user


• Secret key is visible only at user creation time
• Only 2 access keys you can keep active or inactive

• export AWS_ACCESS_KEY_ID="<put access key here>"


• export AWS_SECRET_ACCESS_KEY="<put secret access key here>"
• Now from ECR in aws go to the push commands and follow the steps --> copy-paste them in the aws cli and you will get the message "Login Succeeded"
• Now you can push and pull to ECR
• We have to change name before pushing
• Docker tag nginx:latest <repository_URI>/first_project:latest --> first_project is the repository we made in aws ECR
• Unable to push in aws ecr -- try to troubleshoot later
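• The usual ECR push sequence, for reference while troubleshooting (a sketch, not the author's exact steps; registry URI taken from the notes above):
aws ecr get-login-password --region us-east-1 | \
  docker login --username AWS --password-stdin 554383714321.dkr.ecr.us-east-1.amazonaws.com
docker tag nginx:latest 554383714321.dkr.ecr.us-east-1.amazonaws.com/first_project:latest
docker push 554383714321.dkr.ecr.us-east-1.amazonaws.com/first_project:latest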

• Lecture 22
• ECS ---> elastic container services --> only for docker
• Best services for docker in aws --> ECS
• ECS is best services for docker in AWS services and its only service for docker
• In ECS there are clusters, task definitions, services, containers
• In GCP there is GCS, similar to ECS -- it does the same work but the method is different -- in the backend both are the same
• VPC --- takes care of only networking
• IAM role --> you can create user and assign role of specific service or admin priv.
• Security group --> port , expose , access service
• Auto scaling --> VERY VERY IMPORTANT ---> 30 hours needed to learn this
• If you get aws job -- then you gotta learn --> you need 1 month to learn this all

What is cluster?
• More than one node
• If one fails --- other will start working --- as a result we wont get any downtime
• Load balance between nodes

• By default ECS runs in cluster mode

• Where our docker server runs is called our cluster

• The responsibility of the service is how many containers we want to run
• For all the services we are running -- the service is responsible for where they run
• We have multiple containers running some services --- those containers will get requests from users outside
• Only one container is required to reply to a given request
• Where will the request go? -- this is managed by the load balancer
• In ECS the work of Load balancer is manged by "service"
• What is task definition?
• We can define how much cpu,memory we want to give and manage IAM roles
• What is container definition?
where our docker image is coming from is defined in container definition
• We match container definition with image configuration

• Choose custom container definition and give the image URI, port, container name
• Task definition


• Now we have come to service after selecting container and task definitions
• If we give more than 2 task -- it will cost
• If you run 1 service more than 1 hour -- it will be charged
• If we expose a port but didn't open the port in the security group -- we won't be able to access the container
• There are three types of load balancers in aws:
• Network load balancer
• Classic Load balancer
• Application load balancer
• In ECS --> it is application load balancer

• Cluster means group of servers


• We are making an ec2 server in aws -- ECS says if you want to run containers, there are two options
1) EC2 along with ECS --- we have to do it manually from scratch, what we were doing before
2) FARGATE --> the service will run automatically -- no need to make the container on ec2 ourselves
• Fargate drawback --- you can't troubleshoot at the container level --- it is used for running containers without managing the host

• Go to tasks and get publicIP


• Sending internal requests outside and bringing outside requests inside is
managed by 'service'
• Aws has somewhere made VM but we cant access it because of FARGATE
• Take public IP from tasks in cluster we made

• So we can run container using fargate in aws

LECTURE 23
• Gitlab has more features compared to github


• You can see main --- it is branch like we did before multi-staging
• We can use this for staging
• Go to profile --> access token

• We can give the token to a developer or testing team/person -- whenever he wants to push an image or do something else --- he can use the token
• Go to package --> container registry --> we can upload images here --> this is under the dev project inside the container registry
• We can login to gitlab with the command given there
• Then we can change the tag of the image and push it to gitlab --> docker push registry.gitlab.com/zaydansari786/dev

• Now we will pull and push in github


• Go to developer settings --> personal access tokens -->
• Now we can access github with CLI
• GIT_PAT=<token you got from github>
• echo $GIT_PAT | docker login ghcr.io -u MohammadZayd --password-stdin
• We are using github for version control management and for retrieving older versions
Now we will need to tag before pushing
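• The tag step is presumably along these lines (the local image name is an assumption):
docker tag busybox:latest ghcr.io/mohammadzayd/mybusybox:v1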

• docker push ghcr.io/mohammadzayd/mybusybox:v1

LECTURE 24 --(practical phase 4 )


• We can add content to a file like this also
• cat >Dockerfile <<EOF
FROM openjdk:11-jdk
COPY HelloWorld.java .
RUN javac HelloWorld.java
CMD java HelloWorld
EOF
• EOF will not be copied in the file
• We can also build with scripts
• Take help from developer

LECTURE 25
• AWS we studied ECR
• Github, dockerhub and gitlab -- we pulled and pushed images to repos
• GitLab --> we will create workflow --> CI/CD pipeline --> and run it
• Create a blank project

• Important files
• This circled .yml will help to do automation ---> .gitlab-ci.yml
• Similarly in github the important workflow file is --> .github/workflows/<file>.yml
• Make file with name .gitlab-ci.yml
• And also make directory with named public
• Inside public make index.html
• Outside public make dockerfile

• mydevops --is the project name


• In gitlab you can create directory and in github you cant
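• A minimal .gitlab-ci.yml of the kind this lecture sets up (a sketch; job name and steps are assumptions, the CI_REGISTRY_* variables are GitLab's predefined ones):
cat > .gitlab-ci.yml <<'EOF'
stages:
  - build

build-image:
  stage: build
  image: docker:latest
  services:
    - docker:dind
  script:
    - docker login -u "$CI_REGISTRY_USER" -p "$CI_REGISTRY_PASSWORD" "$CI_REGISTRY"
    - docker build -t "$CI_REGISTRY_IMAGE:latest" .
    - docker push "$CI_REGISTRY_IMAGE:latest"
EOF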


LECTURE 26

• Now what will we do?

• Create an IAM user with these policies:
EC2InstanceProfileForImageBuilderECRContainerBuilds
AmazonEC2ContainerRegistryPowerUser
AmazonElasticContainerRegistryPublicPowerUser
AdministratorAccess

• Add variable

• Give in value the access key of your IAM user

• Give another key

• And one more as default region

• These are default variables we added -- our credentials


• We need to change docker registry in .gitlab-ci.yml file

• Make the following changes in .gitlab-ci.yml file


• Change the docker registry -- get it from aws
• APP --- it should be your aws repository name
• And change the region if needed
• Now when we run the pipeline -- we can see the image in the aws ECR repo
• We can now pull the image from the aws repository that we pushed from gitlab using the CI/CD pipeline

• Summary
• -->we created our own docker file using webIDE then pushed into gitlab repo via
automation via cicd pipeline --> created workflow
• we created our own docker file using webIDE then pushed into aws repo via
automation via cicd pipeline --> created workflow


LECTURE 27
• github not as advanced as gitlab ---> we cant create folder
• Upload image to aws ecr using github
• Go to your repositories
• Copy the http url from github in git cli
• Download and install git bash, then initialize the repo --> git init
• Extract code into your local machine
• Now run --> git add .
• Run --> git commit -m "Fresh code" ---if it doesn't work give email and password
using the command it tells
• Run --> git push
• Now running this you will see the files in your repository
• We will have to do authentication of github repo -- as we want to push github
repo content to aws ECR repo using automation
• Go to the repo in github --> Settings --> Secrets and variables --> Actions

• This will give you the option to run the workflow yourself -- if you don't give this in the .yml file it will run the workflow by itself
• Good to put this, so that we have freedom over when we want to run it
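• The trigger being described is presumably workflow_dispatch (GitHub Actions); a sketch of that part of the workflow file:
cat > .github/workflows/push-to-ecr.yml <<'EOF'
on:
  workflow_dispatch:        # lets you start the workflow manually
  push:
    branches: [ main ]
EOF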

• LECTURE 28
• Docker Networking important
• If we know docker networking concept then it will be easier to understand
kubernetes networking, which is complex
• If containers are in the same IP range -- they will be able to communicate
• If not, then the NAT concept comes in ---- network address translation
• Each docker container has a different IP
• Containers will communicate with each other but they won't interfere with each other
• Each container is isolated
• Suppose one application (inside a container) goes down -- it won't interrupt other containers


• Therefore, no library issue will affect other containers if one container goes down
• Where will these libraries of the containers be downloaded? -- in the docker engine
• The docker engine has lots of work
• There is a virtual library which is found in the base OS
• Docker engine does the work/(plays the role) of moving the library from host to
container

• Example:
• There is a virtual library on the host machine
• And we have 2 containers on our host OS --> to run the containers we need the docker engine
• One container won't interfere with the other -- but they use the same library
• How will the docker engine share the virtual library with both the containers?

• Docker network model --> CNM --> Container networking model --> strong and
robust
• Kubernetes --> CNI --> Container networking interface--> very strong and robust
• If you understand CNM --> you will easily understand CNI

• Networking products --> Calico, Weave

• CNM --> 5 main components --> Docker engine will bring these components
• Docker engine communicates with CNM --> but it wont communicate directly
• When we install docker --> libnetwork gets installed
• Docker engine communicates to CNM via Libnetwork
• Therefore, libnetwork communicates with CNM as functions and classes
• Using docker compose -- we can run multiple containers as a service
• We can also put networking in docker compose
• Docker engine has 2 libraries
• Network driver
• IPAM driver
• Using these 2 libraries a network layer is generated
• The network layer is generated through docker engine
• For this network layer a container is made and IP is assigned to it
• Network sandbox component and container -- both together are attached to
network layer
• Only when a container exists will there be a network sandbox -- it is made after the container
• IPAM = DHCP -- both work similarly
• IPAM will always allocate private IP, since it has pool of private IP's
• When container needs IP ---IPAM provides it
• What does IPAM driver do?
• It has 2 works
1) It makes and keeps a pool of IP's
2) It will assign/allocate IP from the pool
• Network driver
• Its work is to make the network
• Add, remove, allocate, etc. containers to/from the network
• Along with it there is the network layer concept
• When we make a VM --> Network options --> NAT, Bridge, Host
• Network driver work --> create/delete or add/remove the network

• Why network sandbox?


• IPAM gives IP and network driver creates network through docker engine
• Containers communicate with other containers through network sandbox
• After getting an IP from IPAM, if we need an IP of a different range we can get it from IPAM as well

• Network sandbox does routing between containers, so that they can
communicate
• Also network sandbox has the work of managing the IP's allocated by IPAM
• Suppose we have two IPs: one in range A, which can communicate with range-A containers, and another in range B, which can communicate with range-B containers
• Example: like we have HR in company -- HR makes a team and the manager is
responsible to manage the team
• So, HR -->IPAM, Network Sandbox --> manager , team--> IP's

• What will be the sequence in CNM?


• IPAM driver will make pool of IP's and allocate the IP's
• Network driver will create the network -- through the docker engine an IP will be assigned to the container endpoint
• Then network sandbox will do routing between containers

• Three main types of network drivers --> bridge, host, none


• None --> no IP will be assigned with this driver
• Host --> whatever series is on your host machine will be assigned to the container --> does not have the NAT concept -- host-to-host communication
• Bridge --> also has the NAT concept ---> NAT helps to do private-to-public communication --> it handles communication between the host machine and the container
>default --> docker0
>user defined --> you can create your own name

• Bridge network
• Bridge driver default here --> docker0 --> we can also use user defined
• All 4 containers are running on the same network and IP series
• Suppose our host machine has a class C IP and the container has a class B IP -- how will they communicate?
• Bridge always uses the NAT concept
• The docker0 bridge will be created on our base machine -- and a virtual IP of class B will be assigned
• Example:
• Host machine IP : 192.168.1.1
• Container IP : 172.17.0.1
• Docker0 will do communication as gateway -- through NAT
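• A small sketch showing the IPAM pool explicitly on a user-defined bridge (the network name and subnet here are just examples):
docker network create --driver bridge --subnet 172.25.0.0/16 --gateway 172.25.0.1 mybridge
docker run -dit --name c1 --network mybridge alpine ash
docker network inspect mybridge     # shows the IPAM config and the IP assigned to c1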

Host Network
• There will be no driver here
• Here 4 containers are made and ports are assigned to them
• The same IP series as the base machine will be assigned to the containers
• All the containers will communicate with the host machine -- there won't be a NAT concept here
• The same container port can be used by many containers -- but the same host machine port can't be mapped to different containers
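• For example (a sketch -- nginx here is only an illustration):
docker run -d --name web-host --network host nginx
# nginx now listens directly on the host's port 80 -- no -p mapping and no NAT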


• MacVlan Network
• assigns a MAC address to containers -- very rarely used

None
• Container is made but no IP is assigned
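• A quick check (sketch):
docker run -dit --name isolated --network none alpine ash
docker exec isolated ip addr    # only the loopback interface is present, no IP assigned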
---------------------------------------------------------------

• Most widely used is bridge network

LECTURE 29
• If we don't use any network type -- by default docker0 will be used
• Left side diagram -- both containers have a different IP range but they will still be able to communicate
• Right side diagram --- two IPs are assigned to the web container
• We have attached db with web
• Whenever docker allocate IP to container --> it will be allocated through
veth(virtual ethernet)
• Why we hide db?
• Db is very critical component, we don’t expose it for security purpose ---> so we
connect it with web

• Docker network ls
None means null network
Alpine1 --> ip ---> 172.17.0.2
Alpine2 --> ip ---> 172.17.0.3
• ping <ip> -- it will work from inside the container
If you try with the name it won't work -- ping alpine1
• docker network create --driver bridge myabd
• docker run -dit --name alpine1 --network myabd alpine ash
• docker run -dit --name alpine2 --network myabd alpine ash
Alpine1 --> ip ---> 172.18.0.2
Alpine2 --> ip ---> 172.18.0.3
• We can now ping with name --- because we have made custom network
• This is called automatic network discovery
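• For example, with the containers created above on myabd:
docker exec -it alpine1 ping -c 2 alpine2    # name resolution works on the user-defined network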
• 172.17.0.2 -->class B 255.255.0.0 N.N.H.H
• 172.21.0.2 --> class B 255.255.0.0 N.N.H.H

• We want to attach alpine4 with myabd after creating alpine4 container
• Yes possible --> docker network connect myabd alpine4
• Host network isn't commonly used in industry -- because it removes the network isolation between container and host
• docker network disconnect myabd alpine4
• docker network rm myabd

LECTURE 30
• Docker linking --connecting two containers -- if we have db container and web
container -- we can link it
• It is not recommended in newer docker versions -- it is a legacy feature
• We use the --link flag for docker linking
docker run -d -P training/webapp python app.py
• Whatever port is exposed in the Dockerfile will be published here to a random host port with the -P flag
• App.py is there in the image -- so it runs it
• Like when we type python, it runs python on our machine
• docker run -d -p 8000-9000:5000 training/webapp python app.py
• This command will pick whichever host port is available in the range 8000-9000 and map it to the container port
• docker run -d -p 127.0.0.1::5000 training/webapp python app.py
• This command binds container port 5000 to a dynamically chosen host port on 127.0.0.1 only
• docker port <container name or ID>
• This command will show the port mapping

• docker run -d --name db training/postgres


• docker run -d -P --name web --link db:db training/webapp python app.py
• --link <name or ID>:alias ---- the alias is not compulsory
• docker exec -it web sh
• Then inside --> cat /etc/hosts

• We can also ping -- but this is not recommended -- docker network is recommended

LECTURE 31 --Docker Swarm


• Docker swarm -> it is an orchestration tool which manages multiple containers with HA
• It does the work of High Availability
• In companies docker swarm mostly isn't used, because AWS and Kubernetes are used
• Docker swarm is also a type of cluster -- it manages multiple nodes, a collection of nodes
• Docker swarm is good for small applications -- Kubernetes is good for big applications
• This port should be open in the AWS EC2 security group --> 2377 (the swarm management port)
• No need to install docker swarm separately
• docker swarm init -- this will initialize the swarm and make this node the manager
• To check the nodes in cluster ---> docker node ls

• Like an image ID, container ID, volume ID --- a node ID will be generated
• To get the join command for a worker, run 'docker swarm join-token worker' on the master machine and follow the instructions (for another manager, use 'docker swarm join-token manager')
• To add a worker to this swarm, run the following command on the other machine:
docker swarm join --token <token got from above command>
• We will have to use overlay network in order to deploy application in master
machine
• Overlay network is not default-- we have to make it
• Command --> docker network create -d overlay mynetwork
• When is the ingress network created? -- automatically, when we run --> docker swarm init
• We don’t use default network because it is vulnerable, so we make custom --
mynetwork
• When we create overlay network -- we will get scope -- swarm
• What we were doing before vs what we will do now:

• docker service create --name webapp1 -d --network mynetwork -p 8001:80 training/webapp:swarm
• When we want to create a cluster -- we have to create a service
• A service is created, then a task is made, then a container is made
• docker service ls --> to see the docker services we made
• For more detail -- docker service --help
• These commands will show output on the master machine
• To see details of a service --> docker service ps <service ID or service name>
• We can go into the service's container with --> docker exec -it <container name> bash
• docker node update --availability drain client1.example.com --- client1 is
hostname of master machine
• After this you run --> docker service ps <service name>
• Now the service has stopped on the master node and is running on the worker machine

• How to make the service active again from drain status on master node?
docker node update --availability active client1.example.com

• To remove service? --> docker service rm <service name>


docker service create --name webapp1 -d --network mynetwork --replicas=2 -p
8001:80 abdealidodia/web:swarm
• This will make the service run on both the machines, unlike before it was only
running on master node
• Docker service ls
• Use service id --> docker service ps <service id>
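• A few related commands (a sketch, using the webapp1 service created above):
docker service scale webapp1=4          # change the number of replicas
docker service ps webapp1               # see which node each task landed on
docker service inspect --pretty webapp1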


• How to check service logs ---> docker service logs <service name>

LECTURE 32
• How to create local repository for docker
• A repository is a kind of godown (warehouse) which has packages, and every image will be stored into one directory
• How many repository we have seen?
1) DockerHUB
2) Github
3) GitLab
4) ECR --> AWS
all these are third party repo -- public repo
• For dockerHUB --> docker login --> docker pull
• For local repository --> docker pull localrepo:8080/imagename
• By default localrepository will work on port ---> 5000
• How to push into the local repository --> docker push localrepository:port/imagename
• localrepository will be made on host machine
• Command to make localrepo:
docker run -d -p 5000:5000 --restart=always --name registry registry:2
• Now we can push images by tagging them as localhost:5000/<imagename>
• We have our private repository -- we made a local repository using a container working at the default port 5000
• Now suppose we normally pulled an alpine image from Docker Hub
• Now our junior or someone deleted it
• But.. we pushed the image into our local/private repository
• From here we can pull it --- but we will have to give the full path
• docker pull localhost:5000/alpine
• If we want to use our local repo when making a Dockerfile, we can reference it in the FROM line (e.g. FROM localhost:5000/alpine)
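• The whole flow could look like this (a sketch; alpine is just an example image):
docker run -d -p 5000:5000 --restart=always --name registry registry:2
docker pull alpine
docker tag alpine localhost:5000/alpine
docker push localhost:5000/alpine
docker rmi alpine localhost:5000/alpine      # remove the local copies (simulate the image being deleted)
docker pull localhost:5000/alpine            # recover it from the local registry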

• We can find the images we have pushed by going into the registry container and using the find command
• We can stop the container, but then when we run docker rmi registry
• It won't delete, because it looks for the latest tag -- so we use this command -- docker rmi registry:2

LECTURE 33 -- Docker monitoring


• We mostly use the ctop command for docker -- exactly like the top command in Linux
• Download it from github.com
• This is third-party software
• Docker also has its own inbuilt monitoring
• Command --> docker stats
• In production we mostly use the in-built tool
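• For example (a quick sketch):
docker stats --no-stream      # one-shot snapshot of CPU / memory / network / block I/O per container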

LECTURE 34 --DOCKER SECRET


• What is secret?
• It is a very important asset; if it gets breached, we will be in big trouble. Examples: passwords, SSL keys, certs, client tokens, payment tokens, PIN codes, CVVs
• It is very important to store secrets securely

• What is docker secret?


A secret is a blob of data that should not be transmitted over a network or stored unencrypted in Dockerfiles or in your application source code.

Example:- username and password should not be in clear text -- they should be encrypted

• We created a .env file -- inside we wrote a password -- it was a hidden file but it was not that secure because it was written in clear text
• Docker secret command will only work with docker swarm
• You need two machines in your lab:
1) Master
2) Node
• docker secret only works with docker swarm
• So we get docker swarm ready first
• Communication between master and node will be secure with tls and CA
• Whatever password you have,you can secure in docker secret
• You have to encrypt your password and stored in your cluster
• Cluster is a collection of nodes
• Password is stored in master and node needs the password
• Password is stored by master/manager
• Password sharing done encrypted and is secure
• The password is stored on the manager's disk, but the node will have it only in its memory
• As soon as the node stops -- the password will be wiped out, since it is in the RAM
• We will have to create docker swarm before docker secret

• Master node
• Command --> echo "This is secret" | docker secret create my_secret_data -
• What you gave with echo will be the password
• Command --> docker secret ls
• In docker swarm both docker run and docker service are same
• Command ---> docker service create --name="redis" --secret="my_secret_data"
redis:alpine
• By default the password will be in ---> /run/secrets/<SecretName>
• /run is in the RAM -- it will be gone when you shutdown the machine
• The unencrypted secret is mounted into the container's in-memory filesystem: /run/secrets
• Command to see the secret without entering the container --> docker exec -it redis.1.vb12rocxeggksht6asmg8umd3 cat /run/secrets/my_secret_data
• docker exec -it redis…. sh --- then we can see the unencrypted password

• Now if we want to remove the secret from the service -- not remove the container itself
• Command --> docker service update --secret-rm="my_secret_data" <redis (service name)>
• Using docker stack to deploy service in docker swarm
• Go to where we have kept the docker-compose.yml file
• Command --> docker stack deploy -c docker-compose.yml <service_name you want> --- this command will deploy the services in the docker compose file into the docker swarm
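• A minimal docker-compose.yml sketch for such a stack (it assumes the secret my_secret_data already exists in the swarm, e.g. created with the echo | docker secret create command above):
version: "3.8"
services:
  redis:
    image: redis:alpine
    secrets:
      - my_secret_data
secrets:
  my_secret_data:
    external: true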

LECTURE 35

• What is alias?
it is an alternate name that you can give to a command
• cd /root/
• ls -a
• vi .bashrc

• Be careful making changes in this file -- because if you make a mistake there will be problems
• So add alias with command --> alias a='ls -ltrh '
• Now run 'a' on command line and it will execute ls -ltrh command

• docker ps --format '{{.ID}} ~ {{.Names}} ~ {{.Status}} ~ {{.Image}}'

• Now we make an alias for this big command
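• For example, append this line to ~/.bashrc (the alias name dps is just an example), then reload the shell config:
alias dps="docker ps --format '{{.ID}} ~ {{.Names}} ~ {{.Status}} ~ {{.Image}}'"
source ~/.bashrc
# now simply running dps prints the formatted container list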

• You can make alias and make things easier

LECTURE 36 --Archive your image and container


• Docker images --> docker container --> run as a process --> we can run our
application
• Container --commit--> docker image --> container
• Another option is there
• Docker image --> docker container --> zip --> another machine --> extract -->
create a container
• Commands --> docker export, docker import , docker save , docker load
• Export cmnd --> to export container
• Import cmnd --> to import container
• docker --help | grep -E "(export|import|load|save)"
• DOCKERFILE -->
vim Dockerfile
FROM busybox
CMD echo $((40+2))
• docker build -t maths .
• # docker save maths > maths.tar
The docker save command is used to export a docker image to a tar file.
• scp -vr maths.tar aadmin@192.168.1.9:/home/aadmin
• This command will send the file to the machine whose IP is written
• Or from another machine we can run --> docker import maths.tar math

• We will get the maths image on that machine
• But when you run that image you will get an error (docker import keeps only the filesystem, not the image metadata such as CMD, so there is nothing to execute)

• Save and load command works with docker images


• i.e. if you use the docker save command, then on the other machine you will have to use the load command
• You can't use the docker import command with a tar created by docker save
• docker save --> creates a tar file
• docker import --> able to extract the tar file and make an image, but unable to create a working container from it
• So we run this command on the receiving machine --> docker load < maths.tar
• Now we can run the image --> docker run maths

• Load and save works on images

• Export command will work on container


• docker export <container ID/name> > hello.tar
• scp hello.tar .. to the receiving machine
• Now we will extract it
• mkdir hello && tar -xvf hello.tar -C hello
• When we extract in the case of importing image --> we see layer
• When we extract in the case of importing container --> we will see filesystem
• Now on the receiving machine --> docker import hello.tar hello
• Then docker run hello
• Getting an error -- try to debug later -- sir has also gotten this error (likely because import does not preserve CMD/ENTRYPOINT, so passing an explicit command such as docker run hello sh may work)

• Another scenario
• We can run docker export <container ID/name> > <filename>.tar
• We will get a tar file
• Now if the image and container gets removed then we can use
• --> docker import <filename>.tar <new_image_name>
• We will get the image and it will have all the contents were there in the
container that was deleted
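• Putting that scenario together (a sketch; mycontainer and hello:recovered are example names):
docker export mycontainer > hello.tar        # container filesystem only
docker import hello.tar hello:recovered      # becomes an image, but CMD/ENTRYPOINT metadata is lost
docker run -it hello:recovered sh            # so supply a command explicitly when running it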

Docker in Docker
• Docker container running docker
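• For example (a sketch; the official docker:dind image needs the --privileged flag):
docker run -d --privileged --name dind docker:dind
docker exec -it dind docker ps               # the docker CLI inside the container talks to its own daemon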

• How to use a JSON file instead of a YAML file?
--> docker compose -f /home/zayd/docker-compose.json up
(this works because valid JSON is also valid YAML, so Compose can read it)

Interview Questions
Monday, May 29, 2023 2:48 PM

Docker daemon's version is X and docker client's version is Y, where Y<X -- will they communicate?
• YES

• Recommended to use latest and same version of docker client and server

• If your docker gets corrupted, then how will you recover your data, containers and images?
• Ans) we will kill and remove the docker.sock file in /var/run/ -- after removing/deleting/(moving it somewhere else), when we start the docker service -- a new docker.sock will automatically be made

• Difference between CMD and RUN in dockerfiles?


• RUN --- it is used at image build time, e.g. for the installation of packages
• CMD -- it is used to run a command when the container starts / keep your container in an active state -- e.g.:- systemctl start …

• Where is data of our bind mount stored?


Data is stored in the host machine's filesystem, at the path you bind

• Where is docker database default path? Also called docker area or docker home directory
/var/lib/docker

• How can we see the volume id of the container we made?


we can see using docker inspect command --- in Mounts -- source

• What is the difference between docker ps and docker compose ps?


>to run the docker compose ps command we have to be in the same directory where the docker compose yml file is kept
>docker compose ps reads the docker configuration data from the yaml file
>whereas we can run the docker ps command from anywhere
>docker ps is entirely command-line based

• How to set the name of a docker compose project? --> using the -p/--project-name flag or the COMPOSE_PROJECT_NAME environment variable

• What is the difference between ENTRYPOINT and CMD in dockerfile?


• The CMD command can be overridden while running docker run (the arguments given after the image name replace it and are executed instead), whereas the ENTRYPOINT is not overridden by those arguments -- it can only be replaced explicitly with the --entrypoint flag.
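• A small Dockerfile sketch that illustrates the difference (the image name demo is made up):
FROM alpine
ENTRYPOINT ["echo"]
CMD ["hello"]
# docker build -t demo .
# docker run demo                     --> prints "hello"
# docker run demo world               --> prints "world"  (only CMD is overridden)
# docker run --entrypoint ls demo /   --> replaces the ENTRYPOINT explicitly and runs ls /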

• Docker network model --> CNM --> Container networking model --> strong and robust
• Kubernetes --> CNI --> Container networking interface--> very strong and robust

• By default localrepository will work on port ---> 5000

• Hot interview question --> what is kubernetes and explain each component
• Explain the components in details
• Orchestration means involving many teams --> managing many containers
