Note: we can access the app using the IP of the container, but this is only an
internal IP. (To find the internal IP of a container, use the docker inspect
command. YOU SHOULD NOT ACCESS A CONTAINER WITH ITS INTERNAL IP,
ONLY BY USING PORT MAPPING.)
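For example, a quick way to pull just the IP out of docker inspect (a sketch using inspect's Go-template --format flag; container_name is a placeholder):
docker inspect --format '{{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}' container_name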
To access the app from an external network, we need to map the port the app
listens on to a port on the docker host, and then access the IP of the docker host.
To connect to the container, use localhost:port or go through Docker Desktop.
We can map as many ports as we'd like, but we can't map two containers to
the same host port.
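A minimal sketch of port mapping (app_name is a placeholder image; here we assume the app inside listens on port 5000):
docker run -p 8080:5000 app_name
Now the app is reachable at localhost:8080 on the docker host, while inside the container it still listens on 5000.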
Detaching a container:
docker run -d app_name - will run the container in detached mode, freeing
the console so we can keep using it.
When a container runs without -d, it runs in attached mode, meaning its
output takes over the console and we cannot use it.
To attach back to the container, run:
docker attach container_id
Other commands:
docker ps - will list all the running containers and some basic information
about them.
docker ps -a - will list all the containers: those that are running and those
that are not.
docker stop container_id or docker stop container_name - will stop the
container.
We can also stop a container using ctrl+c.
docker rm container_name or docker rm container_id - will remove the
container completely, and we would need to create it again from the image.
docker images - will show us all the container images.
docker rmi image_name - will remove the image from our host.
docker rmi image_name:tag - will remove the image with the related tag.
docker pull image_name - will only download the image.
docker exec container_name command - will allow us to run a command on a
running instance.
docker attach container_id - will allow us to attach back to the container (we
only need to provide the first few chars of the container id).
docker inspect container_name - will provide additional data regarding a
container.
docker logs container_name - will show us the logs of the container.
docker search image_name - searches for an image in the hub.
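A sketch of a typical lifecycle tying these commands together (using the nginx image as an example):
docker pull nginx
docker run -d --name web nginx
docker ps
docker logs web
docker exec web ls /etc/nginx
docker stop web
docker rm web
docker rmi nginx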
Docker Image:
To build a docker image (template), we need to write a dockerfile.
A dockerfile is a file that contains the instructions that build the docker image.
When writing a dockerfile we need to specify:
1. A starting base OS for the image - this is a must - the container will run
on this OS.
2. Dependencies (this can be updating the repository using apt, installing
dependencies using apt, and installing python packages using pip).
3. In case of code - copying the source code to the /opt folder - this way it
is baked into the image at a fixed location instead of living only in the
container's writable layer.
4. Specifying an entrypoint - which command should run when we run the
container. (See the example dockerfile below.)
After these, build the image using the docker build command,
like so:
docker build . -f Dockerfile -t image_name:tag
docker build directory_path -f DockerfileName -t image_name:tag
This will build the image locally only. If we want to push it to the registry
(dockerhub), we need to push it with:
docker push image_name:tag - this needs to be the same name we gave the
image when we created it.
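For example, a sketch of the full build-and-push flow (myuser is a placeholder dockerhub account name):
docker build . -t myuser/my-app:1.0
docker login
docker push myuser/my-app:1.0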
The dockerfile is built from instructions and arguments for those instructions.
Each instruction is an operation docker performs to build our image.
The dockerfile must start with a FROM instruction.
When the dockerfile is built, each instruction creates a layer.
Example for a dockerfile:
FROM ubuntu -> this defines the base OS for the container
RUN apt-get update && apt-get install -y python3 python3-pip ->
this line runs commands on the base image.
RUN pip3 install flask flask-mysql -> installs the python dependencies
COPY app.py /opt/source-code/app.py -> this line copies the
source code from the current folder to the /opt/source-code folder of the image
EXPOSE 5000 -> this line is just documentation, to note that the flask app
uses port 5000
ENTRYPOINT FLASK_APP=/opt/source-code/app.py flask run --host=0.0.0.0 --
port=5000 -> the entrypoint is the command that runs when we start the
container: it runs app.py with flask run, and --host=0.0.0.0 makes the app
listen on all of the container's interfaces on port 5000
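To build and run this example (my-flask-app is a placeholder image name; 8080 on the host is an arbitrary choice):
docker build . -t my-flask-app
docker run -d -p 8080:5000 my-flask-app
The app is then reachable at localhost:8080 on the docker host.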
Note that we could also run a container and start the app manually inside it,
but then we would have to do that every time we start the container, which is
a headache and not a good solution.
We can set a value like the app color in the image itself, and then if we update
it, we rebuild the image, pull it again and run. Or we can change it when we
run the container, like so:
docker run -e APP_COLOR=blue image_name
The -e flag sets environment variables.
Then we can run different containers with different environment variables.
To find the environment variables of a container, we run docker inspect and
look at Config->Env. There we can see all the variables.
I like to find it like so:
docker inspect XXX | grep -A 5 "Env"
This finds the "Env" word and the 5 lines after it, giving you the vars.
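Another option is inspect's --format flag, which prints just the env list (a sketch; XXX is the container name as above):
docker inspect --format '{{json .Config.Env}}' XXX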
Command vs Entrypoint:
The CMD instruction in the dockerfile is short for command: it defines the
default command (and arguments) that run when we start the container, and
whatever we write after the image name in docker run replaces it.
The difference between CMD and ENTRYPOINT is that CMD is simply
overwritten by the docker run arguments, while the entrypoint is not.
When we use an ENTRYPOINT, what we write after the image name in the
docker run command is appended to the end of the entrypoint.
So if we write ENTRYPOINT ["sleep"] and then run
docker run image 10 -> it will take the 10 and append it to sleep,
so we get sleep 10 and the container will sleep for 10 seconds.
FROM ubuntu
ENTRYPOINT ["sleep"] -> the sleep won't be changed
CMD ["10"] -> the 10 is only a default argument, used when no argument is given
We can also override the entrypoint by using --entrypoint when we run the
container, like so:
docker run --entrypoint sleep2.0 image 10
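Putting it together, a sketch of how the overrides behave (ubuntu-sleeper is a placeholder name for an image built from the dockerfile above):
docker run ubuntu-sleeper -> runs sleep 10 (the CMD default)
docker run ubuntu-sleeper 20 -> runs sleep 20 (the argument replaces the CMD)
docker run --entrypoint sleep2.0 ubuntu-sleeper 20 -> runs sleep2.0 20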
Data Persistence in Docker:
A docker container's file system is ephemeral. This means that when a
container is removed, all of the data in its writable layer is deleted (the data
survives a stop, but not a removal). So if, for example, we have a database
running in a container and the container is removed, all of its data is gone.
To overcome this, we can use two kinds of volumes that hold the data outside
the container (that is why it is called data persistence): either a named
volume, or a bind mount, which keep the data even if the container dies.
Named volume:
The named volume is the preferred way to save data with containers: we just
give the volume a name and docker chooses where to store the data. This
way, different containers can access the same named volume.
When using a named volume, docker creates a new folder and manages the
folder's content for us.
Usage:
docker volume create my_volume
To inspect the volume -> docker volume inspect my_volume
docker run -v my_volume:/opt/app_files ubuntu
This way, whatever the container writes to /opt/app_files is stored in my_volume.
We can also make another container access the same volume:
docker run -v my_volume:/opt/app_files nginx
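For the bind mount option mentioned above, we point the container at an existing folder on the host instead of a docker-managed volume (a sketch; /data/app_files is a placeholder host path):
docker run -v /data/app_files:/opt/app_files ubuntu
Here we choose where the data lives on the host, while with a named volume docker chooses.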
If we were to run all the containers manually, we would need to link them
together, like frontend to backend, and to the DB.
To link two containers, we use the --link flag in the docker run command.
This adds an entry to the container's hosts file that maps the name of the
linked container to its IP, giving it access to the linked container.
The --link flag is added like so:
--link name_of_container:name_of_host_in_code
The name of the container is the name we gave it when we ran it, and the
name of the host is the hostname the code expects.
So, if we want to define, in a docker-compose file, a service named vote that
uses the voting-app image - which is not yet pulled, so we build it from a
folder instead - that maps port 5000 on the host to port 80 in the container,
and that is linked to a redis container, we would do it like so:
vote:
  build: ./vote_folder
  ports:
    - 5000:80
  links:
    - redis
The full (version 1) compose file:
redis:
  image: redis
db:
  image: postgres:15-alpine
  environment:
    POSTGRES_PASSWORD: postgres
vote:
  image: voting-app
  ports:
    - 5000:80
  links:
    - redis
worker:
  image: worker-app
  links:
    - redis
    - db
result:
  image: result-app
  ports:
    - 5001:80
  links:
    - db
Then we changed the version of the file to v3 and removed the links:
version: "3"
services:
  redis:
    image: redis
  db:
    image: postgres:15-alpine
    environment:
      POSTGRES_USER: postgres
      POSTGRES_PASSWORD: postgres
  vote:
    image: voting-app
    ports:
      - 5000:80
  worker:
    image: worker-app
  result:
    image: result-app
    ports:
      - 5001:80
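To start everything, from the folder containing the file (a sketch; -d runs it detached):
docker compose up -d
docker compose ps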
To shut everything down run: docker compose down (which will stop and
remove all the containers).
Docker Registry:
When we run the command docker run nginx, we are pulling the nginx image
from a registry. But what registry?
We actually run the image library/nginx.
The first part is the user/account name (library is the default account for
official images), and the second part is the image/repository name.
The images are pulled from the user account and the image name. But where
is this located? At the docker registry - docker.io.
So, when we pull an image, it is pulled from docker.io/library/nginx,
where the first part is the registry, the second part is the user and the third is
the image name.
Of course, we can also pull images from different registries, like google's
registry at gcr.io.
We can also create a private registry using one of the cloud providers.
To access a private registry, we need to use the docker login command:
docker login XXX.XXX.XXX.XXX
And then we can pull and run the image using:
docker run private-registry.io/apps/app
To create a private registry ourselves, we can run docker's registry image
locally, which exposes the registry API on port 5000:
docker run -d -p 5000:5000 --name=registry registry:2
To push an image to this private registry, we need to tag it with the address
of the API exposed on port 5000:
docker image tag my-image localhost:5000/my-image
Here we don't specify a full registry URL, just the localhost address at port
5000.
Then, push the image using: docker push localhost:5000/my-image
To pull the image, we can either do it from localhost, if we are on the docker
host itself, or use the IP address of the host from the network, like
XXX.XXX.XXX.XXX:5000.
Docker has a built-in DNS server that resolves container names to IP
addresses. So, if a container wants to access another container, it can do so
by name instead of by IP.
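A sketch of name resolution in action (web names are placeholders; note the built-in DNS works on user-defined networks, not on the default bridge):
docker network create my_net
docker run -d --name db --network my_net redis
docker run --network my_net alpine ping -c 3 db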
Orchestration with docker:
In the current state, if we run one instance of an application, it can handle
some users, but what if the application is overloaded and unable to handle
them all? In that case we need to add another instance of the application, and
so on.
We also need to keep an eye on the load and the performance, and do all of
that manually.
So currently we need to keep watch on the applications, the containers, even
the host, to see if anything crashes. We do all that manually, and if
something crashes, we need to restart it manually.
That's where container orchestration comes in. It is a set of tools and scripts
that help manage and deploy containers in a large environment.
Such a solution can manage a number of docker hosts and create 100
containers with a single command, so it is very easy to manage the environment.
Some solutions can also scale the number of hosts/containers/load balancers
etc. up and down.
Docker swarm:
Docker Swarm is a native clustering and orchestration tool for docker
containers that enables the deployment and management of containerized
applications across a cluster of machines.
In docker swarm, we have the swarm manager, which manages all the other
hosts, called workers or nodes.
We just initialize a manager, and then each worker joins the swarm.
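A sketch of that flow (the IP is a placeholder for the manager's address; docker swarm init prints the exact join command with a real token in place of <token>):
On the manager: docker swarm init --advertise-addr XXX.XXX.XXX.XXX
On each worker: docker swarm join --token <token> XXX.XXX.XXX.XXX:2377
Then, from the manager, many containers with a single command:
docker service create --replicas=100 my-app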
Kubernetes:
Kubernetes, often abbreviated as K8s, is a powerful container orchestration
platform that provides a highly flexible and scalable solution for managing
containerized applications in cloud, on-premises, or hybrid environments.
With features like automatic load balancing, self-healing, and rolling updates,
Kubernetes ensures that applications are highly available, resilient, and
efficient, making it the de facto standard for container orchestration in modern
cloud-native architectures.
Like docker swarm, we have workers, which run the containers, and a master,
which controls the workers.
We have a node, which is a host, and a cluster, which is built from a few nodes
to ensure redundancy.
There are also further components that make up kubernetes, but we are not
going to explain them here.