
Docker Commands

Docker runs its containers on the docker host/docker engine!

Docker run command:

docker run – runs a container from an image.


If the image doesn't exist locally, it will be downloaded from Docker Hub.
Note – when a container finishes its task, it exits, so we will no longer see it
running; as long as it still has something to do, we will see it running.
If we give it a command, it will run that command and then exit, for example
docker run ubuntu sleep 10 or docker run ubuntu cat /etc/*release* to print
the version of the ubuntu image.
Note – if we don't specify a version of an image, it will download the latest
image version, with the tag latest.
docker run --name <name> image_name – will run a container and assign it the
given name.
docker run image_name:version – will download and run a specific version of
the image, using the version as its tag.
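For example (the 20.04 tag is just an illustration; any published tag works the same way):
docker run --name my-ubuntu ubuntu – runs an ubuntu container named my-ubuntu
docker run ubuntu:20.04 cat /etc/os-release – pulls the 20.04 tag if it is missing and prints its release info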

Running in interactive mode / Receiving inputs:


docker run -i image_name – runs the container in interactive mode and lets us
send input to the container.
Note – by default, docker doesn't listen to input; it runs in non-interactive
mode, so we need to ask for it explicitly.
Note – the container's terminal prompt/output will not be shown, because we
are not attached to its terminal. To also attach a terminal, run:
docker run -it image_name command – lets the container receive input and show
us output, since we are connected to the terminal of the container.
docker run -it --rm python – this lets us give input to and receive output from
a python container, and removes the container when it exits. This is also good
for testing.

Running a Container with port mapping:

docker run -p 80:5000 app_name – this maps port 80 on the host to port 5000,
the port the app listens on inside the container.
This means that traffic arriving on host port 80 is sent to the application
listening on port 5000.

We can also write this as:

docker run -p host_port:container_port app_name
where host_port is the port on the host that receives the incoming traffic, and
container_port is the port the container is listening on.

Note: we can access the app using the IP of the container, but these are only
internal IPs. (To find the internal IP of a container, use the docker inspect
command. YOU SHOULD NOT ACCESS A CONTAINER WITH ITS INTERNAL IP,
ONLY BY USING PORT MAPPING.)

To access the app from an external network, we need to map the port we listen
on and access the IP of the docker host.
To connect to the container, access localhost:port or go through Docker
Desktop.
We can map as many ports as we'd like, but we can't map two containers to the
same host port.
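As a quick sketch (the nginx image and port 8080 are just examples), mapping host port 8080 to the container's port 80 and testing it from the host would look like:
docker run -d -p 8080:80 nginx
curl http://localhost:8080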

Volume Mapping / Storage mapping:


If we run a container with a DB and then delete it, ALL OF THE DATA WILL
BE DELETED.
To keep the DB data on the host, we need to map a directory/folder on the host
to the data directory in the container:
docker run -v /path/data_dir:/var/lib/mysql mysql
This maps the /path/data_dir directory on the host to the directory
/var/lib/mysql in the mysql container. So, all the data will be saved in the host
directory.
If we delete the container, the data will persist in the host's directory, and if
we delete the container and start a new one, it will continue from where the
previous one left off.
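A rough example of this with the mysql image (the host path and password are placeholders; MYSQL_ROOT_PASSWORD is an environment variable the official mysql image requires):
docker run -d --name mysql-db -e MYSQL_ROOT_PASSWORD=secret -v /opt/mysql-data:/var/lib/mysql mysql
docker rm -f mysql-db
docker run -d --name mysql-db -e MYSQL_ROOT_PASSWORD=secret -v /opt/mysql-data:/var/lib/mysql mysql
The second container picks up the data left behind in /opt/mysql-data by the first one.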

Detaching a container:
docker run -d app_name – runs the container in detached mode, leaving the
console free for us to use.
When a container runs without -d, it runs in attached mode, meaning the
console is attached to the container's output and cannot be used for anything else.
To attach back to the container, run:
docker attach container_id
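For example (web is an arbitrary container name):
docker run -d --name web nginx
docker attach web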
Other commands:
docker ps – lists all the running containers and some basic information about
them.
docker ps -a – lists all containers, both running and stopped.
docker stop container_id or docker stop container_name – stops the container.
We can also stop an attached container using ctrl+c.
docker rm container_name or docker rm container_id – removes the container
completely; to use it again we would need to create a new container from the image.
docker images – shows all the container images.
docker rmi image_name – removes the image from our host.
docker rmi image_name:tag – removes the image with the given tag.
docker pull image_name – only downloads the image.
docker exec container_name command – runs a command inside a running
container.
docker attach container_id – attaches back to the container (we only need to
provide the first few characters of the container id).
docker inspect container_name – provides additional data about a container.
docker logs container_name – shows the logs of the container.
docker search image_name – searches for an image on Docker Hub.
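A typical clean-up sequence using these commands might look like this (web is an assumed container name):
docker ps -a
docker logs web
docker stop web
docker rm web
docker rmi nginx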
Docker Image:
To build a docker image (template), we need to write a Dockerfile.
A Dockerfile is a file with instructions that build the docker image.
When writing a Dockerfile we need to specify:
1. A starting base OS for the image – this is a must – the container will run
on this OS.
2. Dependencies (this can be updating the repository using apt, installing
dependencies using apt, and installing python packages using pip).
3. In case of code – copying the source code into the image, for example into
the /opt folder – this way it lives at a known, permanent location inside the image.
4. An ENTRYPOINT – which command should run when we start the
container.
After that, build the image using the docker build command,
like so:
docker build . -f Dockerfile -t image_name:tag
docker build directory_path -f DockerfileName -t image_name:tag
This builds the image locally only; if we want to push it to the registry
(Docker Hub) we need to push it with:
docker push image_name:tag – this needs to be the same name we gave the
image when we built it.
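For example, using the danielshvartz account mentioned later in these notes (the 1.0 tag is only an illustration, and docker login is needed before pushing):
docker build . -t danielshvartz/app:1.0
docker push danielshvartz/app:1.0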
The Dockerfile is built from instructions and arguments for those instructions.
Each instruction is an operation docker performs to build our image.
The Dockerfile must start with a FROM instruction.
When the Dockerfile is built, each instruction creates a layer.
Example of a Dockerfile:
FROM ubuntu -> this defines the base OS for the container
RUN apt-get update && apt-get install -y python3 python3-pip ->
this line runs commands on the base image.
RUN pip3 install flask flask-mysql -> installs the dependencies for python
COPY app.py /opt/app.py -> copies the app.py source code from the
current folder into /opt/app.py in the image
EXPOSE 5000 -> this line is only documentation, noting that the flask app
uses port 5000
ENTRYPOINT FLASK_APP=/opt/app.py flask run --host=0.0.0.0 --port=5000
-> entrypoint means this line runs when we start the container: it takes
/opt/app.py and runs it with flask run, listening on all interfaces (0.0.0.0)
on port 5000
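Assuming the Dockerfile above is saved as Dockerfile next to app.py, building and running it could look like this (my-flask-app is an arbitrary name):
docker build . -t my-flask-app
docker run -d -p 80:5000 my-flask-app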

Note that we could also run a plain container and set the app up manually, but
then we would have to repeat all the steps every time before the app runs. This
is a headache and not a good solution.

The commands that would need to be run are:


docker run -it -p 5000:5000 ubuntu bash
In the container:
apt-get update
apt-get install -y python3 python3-pip
pip3 install flask
touch /opt/app.py
apt-get install -y vim (this is to write the code)
vim /opt/app.py
Write the code, then press Esc and type :wq to save and quit
Run the app: FLASK_APP=/opt/app.py flask run --host=0.0.0.0
Environment Variables:
We can put in our code the line os.environ.get('APP_COLOR') – this reads the
color of the app from the environment variable. This way, if we want to change
the color of the app, we don't need to change the whole code and update the
containers; we just update the variable and the code reads it from there.

To set the color, run:

ENTRYPOINT export APP_COLOR=blue; python app.py (set this in the
Dockerfile)
We can also use:
ENV APP_COLOR blue
Here, export is a bash command used to set environment variables.

We can set the color in the image, and then if we update the image, we just
pull it again and run it. Or we can change it when we run the container, like so:
docker run -e APP_COLOR=blue image_name
The -e flag sets environment variables.
Then we can run different containers with different environment variables.

To find the environment variables, we run docker inspect and look at
Config -> Env. There we can see all the variables.
I like to find them like so:
docker inspect XXX | grep -A 5 "Env"
This finds the word Env and prints the 5 lines after it, giving you the variables.
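For instance, running the same image twice with different colors and then checking the variable (the names and the my-flask-app image are placeholders):
docker run -d --name blue-app -e APP_COLOR=blue my-flask-app
docker run -d --name green-app -e APP_COLOR=green my-flask-app
docker inspect blue-app | grep -A 5 "Env"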
Command vs Entrypoint:
The CMD instruction in the Dockerfile is short for command; it defines the
default command (and its arguments) that runs when we start the container.
The difference between CMD and ENTRYPOINT is that the command can be
overwritten from docker run, while the entrypoint cannot.

THE BEST PRACTICE IS TO USE THE JSON FORMAT.


We can write commands in the Dockerfile in 2 ways:
1. Shell form: CMD command param1 param2
CMD sleep 10 – runs sleep for 10 seconds when the container starts
2. JSON (exec) form: CMD ["command", "param1", "param2"]
CMD ["sleep", "10"]

When we use an ENTRYPOINT, whatever we write after the image name in the
docker run command will be appended to the entrypoint.
So if we write ENTRYPOINT ["sleep"] and then run
docker run image 10 -> it will take the 10 and append it to sleep,
so we get sleep 10, and the container will sleep for 10 seconds.

Combining both entrypoint and command:


Let's say we want to set a default value for the sleep command; this default
value can be overwritten. We can do it like so:

FROM ubuntu
ENTRYPOINT ["sleep"] -> the sleep won't be changed
CMD ["10"] -> the 10 is a default argument

Here the command will be sleep 10, but if we run


docker run image 15 – the 10 will be replaced by the 15.
ALSO NOTE THAT WE MUST USE THE JSON FORMAT HERE; OTHERWISE IT
WON'T WORK.

We can also override the entrypoint by using --entrypoint when we run the
container, like so:
docker run --entrypoint sleep2.0 image 10
Data Persistence in Docker:
A docker container has an ephemeral file system. This means that when a
container dies/is terminated/is removed, all of its data is deleted. So if, for
example, we have a database in a container and the container is removed, all of
its data will be deleted.

To overcome this, we can use 2 kinds of volumes that can hold the data (that is
why it is called data persistence): either a named volume or a bind mount
volume, which will keep the data if the container dies.

Named volume:
A named volume is the preferred way to save data with containers; we just
give the volume a name and Docker chooses where to store the data. This way,
different containers can access the same named volume.
When using a named volume, Docker creates a new folder and manages the
folder's content for us.
Usage:
docker volume create my_volume
To inspect the volume -> docker volume inspect my_volume
docker run -v my_volume:/opt/app_files ubuntu
This way the my_volume volume is mounted at /opt/app_files.
We can also make another container access the same volume:
docker run -v my_volume:/opt/app_files nginx
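Two related commands that are useful here:
docker volume ls – lists all the volumes docker manages
docker volume rm my_volume – removes the named volume (only when no container is using it)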

Bind mount volume:


Here, we map a directory on the host to a directory in the container. So, when
data is saved in the container, it is actually stored in the directory on the
host.
Usage:
docker run -v c:/users/home/directory:/opt/app_files ubuntu
This way c:/users/home/directory is mapped to /opt/app_files.
When we terminate the container, c:/users/home/directory holds all the data.
If we create another container and map it to c:/users/home/directory, it will
have the same data as the last container.
This relies on the host having those specific folders.
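The same bind mount can also be written with the newer, more explicit --mount syntax (same placeholder paths as above):
docker run --mount type=bind,source=c:/users/home/directory,target=/opt/app_files ubuntu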
Docker Compose:
With docker compose we can define an application and run multiple
containers through a single file.
The difference between docker compose and a Dockerfile is that a Dockerfile
creates an image, and from that image we can create containers, while with
docker compose we compose several images/containers together.
This is easier to maintain, as the infrastructure of the application stays as code,
and we can implement changes in an easier way.

If we were to run all the containers manually, we would need to link them
together, like frontend to backend, and to the DB.
To link two containers, we use --link in the docker run command.
This adds an entry for the linked container to the container's hosts file, giving
it access to the linked container by name.
The --link flag is added like so:
--link name_of_container:name_of_host_in_code
The name of the container is the name we gave it when we ran it, and the host
name is the name the code uses to reach it.

When we create a docker-compose.yml file, we write it as a set of nested
dictionaries (YAML mappings).
If the image of a container doesn't exist yet, we can have compose build it
using the build attribute.
We need to provide a folder which contains the code/files for the image and a
Dockerfile, so docker can build the image.

There are 3 versions of the docker compose file format.

In the first version, we write the docker-compose.yml and link the containers
ourselves, i.e. we must mention the links in the .yml file. Also, containers in
the first version aren't created in any specific order.
In the second version, docker creates a default network and the links for us, so
the containers can communicate with each other; therefore, we do not need to
specify links. It also becomes important to create the containers in the right
order if one container depends on another one, so we can add a depends_on
attribute.
The third version is the one used with docker swarm / stack deployments; we
switch to it later in the example below.
We can also declare which networks we are going to have, like front-end
and back-end networks. We declare them at the end of the file (in
version 2 and above) like this:
networks:
  front-end:
  back-end:
and then, on each container, list the networks it is attached to.
These are genuinely separate networks that the containers run on, so if we
want to isolate containers from each other so they won't share a network, this
is what we use.

Syntax for version 1:


container_name:
  image: image_name
  ports:
    - host_port:container_port
  links:
    - linked_container
(in version 1 there is no version: or services: key; the services sit at the top
level of the file)

Syntax for version 2:


version: "2"
services:
  container_name:
    image: image_name
    ports:
      - host_port:container_port
    depends_on:
      - container_name_5
    networks:
      - back-end
      - front-end
networks:
  front-end:
  back-end:

Syntax for version 3:


version: "3"
services:
  container_name:
    image: image_name
    ports:
      - host_port:container_port

So, if we want to create a service named vote, whose voting-app image has not
been built/pulled yet, that receives traffic on port 5000 on the host and
forwards it to port 80 in the container, and is linked to a redis container, we
would do it like so:

vote:
  build: ./vote_folder
  ports:
    - 5000:80
  links:
    - redis

To compose the containers, we will use the command


docker-compose up (the older standalone v1 CLI)
docker compose up (the newer v2 CLI plugin)
Example with the docker voting app:
docker run -d --name=redis redis
docker build . -t voting-app -> in the voting app dir
docker run -d --name=vote -p 5000:80 --link redis:redis voting-app
docker run -d --name=db -e POSTGRES_PASSWORD=postgres postgres:15-alpine
docker build . -t worker-app -> in the worker app dir
docker run -d --name=worker --link redis:redis --link db:db worker-app
docker build . -t result-app -> in the result app dir
docker run -d -p 5001:80 --link db:db --name=result result-app

When we create the docker compose file, we need to make sure the services
are declared under a services: section at the start, so docker knows which
service is which, and then run it with docker-compose:
services:
  redis:
    image: redis

  db:
    image: postgres:15-alpine
    environment:
      POSTGRES_PASSWORD: postgres

  vote:
    image: voting-app
    ports:
      - 5000:80
    links:
      - redis

  worker:
    image: worker-app
    links:
      - redis
      - db

  result:
    image: result-app
    ports:
      - 5001:80
Then we changed the version of the file to V3 and removed links:

version: "3"
services:
  redis:
    image: redis

  db:
    image: postgres:15-alpine
    environment:
      POSTGRES_USER: postgres
      POSTGRES_PASSWORD: postgres

  vote:
    image: voting-app
    ports:
      - 5000:80

  worker:
    image: worker-app

  result:
    image: result-app
    ports:
      - 5001:80

To shut everything down run: docker compose down (which stops and removes
all the containers).
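A couple of related commands (same compose file assumed):
docker compose up -d – starts all the services in detached mode
docker compose ps – lists the containers created from the compose file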
Docker Registry:
When we run the command docker run nginx, we are pulling the nginx image
from a registry. But which registry?
We actually run the image library/nginx,
where the first part is the user/account name (library is the default account for
official images) and the second part is the image/repository name.

So, if I pushed an image called app to my account, danielshvartz, I would
access it using: danielshvartz/app

The images are pulled using the user account and the image name. But where
is this located? At the docker registry – docker.io.
So, when we pull an image, it is pulled from: docker.io/library/nginx,
where the first part is the registry, the second part is the user/account and the
third is the image name.

Of course, we can also pull images from different registries, like Google's
registry at gcr.io.

We can also create a private registry using one of the cloud providers.
To access a private registry, we need to use the docker login command:
docker login XXX.XXX.XXX.XXX
And then we can pull and run the image using:
docker run private-registry.io/apps/app

To create a private registry ourselves, we can run the docker registry image
locally, which exposes an API on port 5000:
docker run -d -p 5000:5000 --name=registry registry:2

To push an image to the private registry, we tag it with the address of the API
exposed on port 5000:
docker image tag my-image localhost:5000/my-image
Here we don't specify a full URL; we specify the localhost address at port
5000.
Then, push the image using docker push localhost:5000/my-image

To pull the image, we can either do it from localhost, if we are on the same
host,
or using the IP address of the host, like XXX.XXX.XXX.XXX:5000.

To view the images pushed to the registry, use:

curl -X GET localhost:5000/v2/_catalog
which sends a GET request to the registry API and returns the pushed images.
Limitation Of Resources:
Docker can limit the amount of CPU/memory a container and its processes can
use.

We can limit the amount of CPU the container can use with:

docker run --cpus=.5 ubuntu
This limits the container to 50% of a CPU.

We can limit the amount of memory used by a container with:

docker run --memory=100m ubuntu
This limits the container to 100MB of RAM.

To view the processes running inside a container we use:

docker exec container_id ps -eaf
To view the disk consumption by docker use:
docker system df -v
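To watch live CPU and memory usage of all running containers, there is also:
docker stats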
Networking in Docker:
When we install docker, we get 3 types of networks:
1. Bridge – this is the default network. It is a private internal network where
all the containers get an internal IP and can access each other. To access a
container from outside, we map its internal port to a port on the host.
2. None – this option disables the container's network, meaning each container
is isolated and cannot communicate with the others.
3. Host – on this network, there is no isolation between the container's
network and the host's network. This removes the network translation layer,
which can improve performance.
To choose a network when we run a container, we add --network= and the
name of the network.

We can create our own network using:

docker network create --driver=bridge --subnet=182.16.0.0/16 custom-isolated-network
where the first part tells docker which driver to create the network with, the
second part is the subnet range, and the third part is the name of the network.

To list all the networks we use:

docker network ls

To view the network configuration of a container, run:

docker inspect container_id | grep -A 10 "Networks"

Docker has a built-in DNS server that resolves container names into IP
addresses. So, if a container wants to access another container, it can do so by
name instead of by IP.
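A small sketch of this (network and container names are arbitrary): create a network, start a container on it, and reach it by name from a second container:
docker network create --driver=bridge my-net
docker run -d --name web --network=my-net nginx
docker run --rm --network=my-net alpine ping -c 1 web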
Orchestration with docker:
In the current state, if we run one instance of an application, it can handle
some users, but what if the application is overloaded and unable to handle
more users? In that case we need to add another instance of the application,
and so on.
We also need to keep an eye on the load and the performance, and do all of
that manually.
So currently we need to watch the applications, the containers, and even the
host, to see whether anything crashes; and if something crashes, we need to
restart it manually.

That's where container orchestration comes in. It is a set of tools and scripts
that help manage and deploy containers in a large environment.
Such a solution can manage several docker hosts and create 100 containers
with a single command, so it is very easy to manage the environment.
Some solutions can also scale the number of hosts/containers/load balancers
up or down, etc.

Cluster = a group of docker hosts

Docker swarm:
Docker Swarm is a native clustering and orchestration tool for Docker
containers that enables the deployment and management of containerized
applications across a cluster of machines.

In docker swarm, we have the swarm manager, which manages all the other
hosts. The other hosts are called workers or nodes.
We just initialize a manager, and then each worker joins the swarm.

When we deploy an application, we would want to use a service.


A service allows us to run an image on several hosts.
So, when we create a service, we can ask for X replicas of a container and
docker will distribute the containers across different hosts.
The docker service create command is similar to docker run, as we can add
published ports, environment variables and networks.
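In command form, this roughly looks like the following (the replica count and the nginx image are just examples; the join token is printed by swarm init):
docker swarm init (on the manager)
docker swarm join --token <token> <manager-ip>:2377 (on each worker)
docker service create --replicas=3 -p 80:80 nginx (on the manager)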

Kubernetes:
Kubernetes, often abbreviated as K8s, is a powerful container orchestration
platform that provides a highly flexible and scalable solution for managing
containerized applications in cloud, on-premises, or hybrid environments.

It automates the deployment, scaling, and operation of application containers,
allowing developers to focus on writing code without worrying about the
underlying infrastructure complexities. (No need to worry about what to do if a
container fails or if there is load; the orchestration handles this for us.)

With features like automatic load balancing, self-healing, and rolling updates,
Kubernetes ensures that applications are highly available, resilient, and
efficient, making it the de facto standard for container orchestration in modern
cloud-native architectures.

Like docker swarm, we have workers which run the containers, and a master
which controls the workers.
We have a node, which is a host, and a cluster, which is built from a few nodes
to ensure redundancy.

There are also other components which make up kubernetes, but we are not
going to cover them here.
