
On-Premise vs. Cloud

Cloud computing has grown very popular with enterprises, promising everything from saving time and money to improving agility and scalability. On the other hand, on-premise software – installed on a company's own servers and behind its firewall – was the only option for organizations for a long time and may continue to serve your business needs adequately. On-premise applications are reliable and secure, and they allow enterprises to maintain a level of control that the cloud often cannot.

On-premise software requires that an enterprise purchase a license or a copy of the software in order to use it. Because the entire instance of the software resides within the organization's premises, there is generally greater protection than with a cloud computing infrastructure. So, if a company gets all this extra security, why would it dip its proverbial toes into the cloud? The downside of an on-premise environment is that the costs of managing and maintaining everything the solution entails can run exponentially higher than in a cloud computing environment. An on-premise setup requires in-house server hardware, software licenses, integration capabilities, and IT employees on hand to support and manage any issues that arise.

In a cloud environment, a third-party provider hosts everything for you. This allows companies to pay on an as-needed basis and effectively scale up or down depending on overall usage, user requirements, and the growth of the company.

Key differences also include:

1. Deployment
2. Cost
3. Control
4. Security
5. Compliance
IaaS, PaaS, SaaS: Basics

Virtualization: Virtualization enables you to run multiple operating systems on the hardware of a single physical server. The industry standard today is to use Virtual Machines (VMs) to run software applications. VMs run applications inside a guest operating system, which runs on virtual hardware powered by the server's host OS. VMs are great at providing full process isolation for applications: there are very few ways a problem in the host operating system can affect the software running in the guest operating system, and vice versa. But this isolation comes at a great cost — the computational overhead spent virtualizing hardware for a guest OS to use is substantial.

Containerization: Containerization involves packaging software code together with all of its related dependencies so that it runs uniformly, without issues, on any infrastructure. Containerization is generally regarded as a complement to, or an alternative for, virtualization.

Virtual machines (VMs): VMs are an abstraction of physical hardware, turning one server into many servers. The hypervisor allows multiple VMs to run on a single machine. Each VM includes a full copy of an operating system, the application, and the necessary binaries and libraries – taking up tens of GBs. VMs can also be slow to boot.

Container: A container is a standard unit of software that packages up code and all its dependencies so the application runs quickly and reliably from one computing environment to another. Containers take up less space than VMs (container images are typically tens of MBs in size), can handle more applications, and require fewer VMs and operating systems.

Traditionally, code was developed in a particular computing environment, and transferring it to a new environment often resulted in errors and bugs. Containerization deals with this problem by bundling the application code with the configuration files, dependencies, and libraries needed to run it. Containers virtualize the operating system instead of the hardware, which makes them more portable and efficient.

Compatibility issues and computational overhead gave rise to containers.

Docker (platform-as-a-service): Docker is a container management service and open-source software designed to facilitate and simplify application development. It is a set of platform-as-a-service products that create isolated virtualized environments for building, deploying, and testing applications. There are various container technologies – LXC, LXD, LXCFS, etc. Docker was originally built on LXC containers. We cannot run Windows-based containers in a Linux environment.

Docker is a tool that allows developers, sys-admins, etc. to easily deploy their applications in a sandbox (called a container) that runs on the host operating system. The key benefit of Docker is that it allows users to package an application with all of its dependencies into a standardized unit for software development. Unlike virtual machines, containers do not have high overhead and hence enable more efficient usage of the underlying system and resources.
Docker Images vs. Containers

Docker Image: The relationship between images and containers is like that of a class and an object: the object is an instance of the class, and the class is the blueprint of the object. Images mean different things for virtual machines and for Docker. In virtual machines, images are just snapshots of a running virtual machine at different points in time; Docker images are a little different, and the most important difference is that Docker images are immutable – they cannot be changed.

In the real world, it often happens that a piece of software works on one computer but does not work on others because of differing environments. Docker images solve this issue completely: with an image, the application works the same on everyone's PC. Every developer on a team gets exactly the same development instance, each testing instance is exactly the same as the development instance, and the production instance is exactly the same as the testing instance. Developers around the world can also share their Docker images on a platform called Docker Hub.
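As a rough sketch of sharing on Docker Hub (the user name exampledev and image name myapp below are placeholders, not from this document):

docker login                               log in with your Docker Hub user ID
docker tag myapp exampledev/myapp:v1       tag the local image with your Hub user ID
docker push exampledev/myapp:v1            upload the image to Docker Hub
docker pull exampledev/myapp:v1            any other developer can now pull the same image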

Docker Container: Containers are sometimes loosely described as "Docker virtual machines", but the usual term is Docker containers. If a Docker image is the blueprint of a house, then a Docker container is the actual house built from it – in other words, an instance of the image. As per the official website, containers are runnable instances of an image. You can create, start, stop, move, or delete a container using the Docker API or CLI. You can connect a container to one or more networks, attach storage to it, or even create a new image based on its current state.

Sometimes we have to run multiple versions of the same software, such as Redis, which is not easy on a local machine or a virtual machine. One version might require dependency "x.y" while another requires "x.z", so we end up with conflicting dependencies. Docker helps here: it takes care of all these dependencies and lets us run multiple versions of the same software hassle-free. We can also run multiple instances of the same version of the software using Docker, and they are completely isolated, as shown in the sketch below.
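For example, a minimal sketch (assuming the official redis:6.2 and redis:7.0 tags on Docker Hub) of running two Redis versions side by side:

docker run -d --name redis6 redis:6.2      first Redis version in its own container
docker run -d --name redis7 redis:7.0      second Redis version, isolated from the first
docker ps                                  both versions show up as separate running containers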
S.No. | Docker Image | Docker Container
1 | It is the blueprint of the container. | It is an instance of the image.
2 | An image is created only once. | Containers are created any number of times from an image.
3 | Images are immutable. | Containers change only if the old image is deleted and a new image is used to build the container.
4 | Images do not require computing resources to work. | Containers require computing resources to run, since they run as Docker "virtual machines".
5 | To make a Docker image, you write a script in a Dockerfile and run the "docker build ." command. | To make a container from an image, you run the "docker run <image>" command.
6 | Images can be shared on Docker Hub. | It makes no sense to share a running entity; it is always Docker images that are shared.

Docker Architecture

Docker Daemon (server) - The background service running on the host that manages
building, running and distributing Docker containers. The daemon is the process that
runs in the operating system which clients talk to.

Docker Client - The command line tool that allows the user to interact with the daemon (server). More generally, there can be other forms of clients too - such as Kitematic, which provides a GUI to the users.

Docker Hub - A registry of Docker images. You can think of the registry as a
directory of all available Docker images. If required, one can host their own Docker
registries and can use them for pulling images.
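A quick way to see the client and the daemon as two separate pieces (no assumptions beyond a standard Docker install):

docker version      prints a Client section and a Server (daemon) section separately
docker info         shows daemon details: number of containers, images, storage driver, etc.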
Examples and Basic Commands:

docker run = docker create + docker start

 docker create <image-name>  creates a container from the image and prints the container ID.
 docker start <container-id>  starts the container, but the output will not be displayed. We can use this if we are not interested in seeing the output.
 docker start -a <container-id>  starts the container and attaches to it, so the output is displayed. We can use this if we are interested in seeing the output.

The difference between the above two commands can be seen using the hello-world image, as in the sketch below.
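A minimal sketch of that experiment (the container ID is whatever docker create prints on your machine):

docker create hello-world           prints a new container ID; nothing runs yet
docker start <container-id>         starts the container, but no output is shown
docker start -a <container-id>      starts the container and attaches, so the hello-world message is printed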

 docker pull <image-name>

This command pulls an image: it checks for the availability of the image on the local PC; if it is available, it uses that image from the Docker Engine, else it downloads it from the Docker Hub online repository.

 docker images

Lists the images available locally along with other information.
 docker run <image-name>

Runs that image.


Ex:
docker run busybox  just executes the busybox image.
docker run busybox echo hello  this passes "hello" to the "echo"
command inside the busybox utility image.
Note: BusyBox combines tiny versions of many common UNIX utilities into a
single small executable. It is a mini-Linux utility.

The Docker client dutifully ran the echo command in our busybox container and
then exited it. If you've noticed, all of that happened pretty quickly. Imagine
booting up a virtual machine, running a command and then killing it. Now you
know why they say containers are fast!

 docker start -a <container-id>

If we want to reuse the same container (if it was not deleted after its previous use), we do not need to run docker run with the image name again – that would pull and start a new instance of the image. Instead, we can use the above command to reuse the same container (it just re-runs the container; we cannot alter the inputs).

 docker ps

The docker ps command shows you all containers that are currently running. The best way to see this is to open one terminal and start a long-running container, then open another terminal and type docker ps, where you can see the running container instances. A sketch follows below.
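For example (a sketch using busybox):

Terminal 1: docker run busybox sleep 300      keeps a busybox container running for 5 minutes
Terminal 2: docker ps                         lists that running busybox container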

 docker ps -a

What we see above is a list of all containers that we ran. Do notice that the
STATUS column shows that these containers exited a few minutes ago.

Activity: If you're feeling particularly adventurous you can try rm -rf bin in the
container. Make sure you run this command in the container and not in your
laptop/desktop. Doing this will make any other commands like ls, uptime not
work. Once everything stops working, you can exit the container (type exit and
press Enter) and then start it up again with the docker run -it busybox sh
command. Since Docker creates a new container every time, everything should
start working again. That’s the beauty of images and containers.

Let’s quickly talk about deleting containers. We saw above that we can still see
remnants of the container even after we've exited by running docker ps -a. You'll
run docker run multiple times and leaving stray containers will eat up disk
space. Hence, as a rule of thumb, let us clean up containers once we're done with
them. To do that, we can run the docker rm command. Just copy the container IDs
from above and paste them alongside the command.

 docker rm 305297d7a235 ff0a5c3750b9

305297d7a235
ff0a5c3750b9

On deletion, you should see the IDs echoed back to you. If you have a bunch of
containers to delete in one go, copy-pasting IDs can be tedious. In that case, you
can simply run
 docker rm $(docker ps -a -q -f status=exited)

This command deletes all containers that have a status of exited. The -q flag only
returns the numeric IDs and -f filters the output based on the conditions provided.
There is also a --rm flag that can be passed to docker run, which automatically deletes
the container once it exits. For one-off docker runs, the --rm flag is very useful.
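For instance, a one-off run sketch:

docker run --rm busybox echo hello      runs the command and removes the container automatically on exit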

In later versions of Docker, the docker container prune command can be used to
achieve the same effect.

 docker container prune

Lastly, you can also delete images that you no longer need by running docker rmi.

 docker rmi <image-ID>

Get the image ID from the “docker images” command.

 docker exec -it <container-id> <commands>

-it  runs the command in interactive mode with a terminal attached.

For some applications we have to start a server first and then connect to or interact
with it from a CLI, API, or console. One such example is the Redis DB: here we have
to run redis-server in one terminal and then connect to it from another terminal
using redis-cli.
docker exec -it <container-id> redis-cli

 docker run redis  this will run the redis-server in one terminal.

Now, to connect to that server instance from the Redis CLI, open one more
terminal and type the command below after getting the container ID of the server
instance.

 docker exec -it <redis-server container-id> redis-cli

Now you can save/get/do any DB operations.

Rather than giving commands every time through exec, we can get access to the
container's shell.

 docker exec -it <redis-server container-id> sh

This gives access to a shell, where we can execute any number of commands.

To run more than one command in a container.

 docker run -it busybox sh

Running the run command with the -it flags attaches us to an interactive terminal
in the container. Now we can run as many commands in the container as we want.
(Redis is a fast caching database.)

Observe the difference between above two commands. We can access shell using
“exec” or “run” command. The difference between “docker run” and “docker
exec” is that “docker exec” executes a command on a running container. On the
other hand, “docker run” creates a temporary container, executes the command in
it and stops the container when it is done.

Docker containers are completely isolated: open two terminals and run "docker run
redis" in both terminals. This creates two instances of redis-server. Open two more
terminals, get the container IDs of the first two Redis containers, and execute
"docker exec -it <container-id-1> sh" in the 3rd terminal and
"docker exec -it <container-id-2> sh" in the 4th terminal.

Now type "redis-cli" inside each shell.

Run "set data 100" in the 3rd terminal, then "get data" in both the 3rd and 4th
terminals. You will see that data is printed as 100 in the 3rd terminal but as 'nil'
in the 4th (data is the same key name used in both instances).

This shows that the two instances are isolated. A command sketch follows below.
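The same exercise as a command sketch (container IDs are placeholders):

Terminal 1: docker run redis                         first redis-server instance
Terminal 2: docker run redis                         second redis-server instance
Terminal 3: docker exec -it <container-id-1> sh      then: redis-cli, set data 100, get data  "100"
Terminal 4: docker exec -it <container-id-2> sh      then: redis-cli, get data  (nil)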


Creating our Custom image:

Let us build the demo using Visual Studio Code. To do this, first we have to install
Visual Studio Code on Linux. Run the commands below in a Linux terminal to install it.

(https://code.visualstudio.com/docs/setup/linux)

sudo apt install snapd

sudo snap install --classic code

echo "export PATH=$PATH:/snap/bin" >> ~/.bashrc

source ~/.bashrc

code .  opens the VS Code editor in the current directory.

Procedure to create an image

First we write instructions in a Dockerfile describing how the container should work.
The Dockerfile is given to the Docker client, and the Docker client then sends it to the
Docker server (daemon). The server carries out the instructions and finally creates an
image.
Think of the steps we follow to install a piece of software on a system – for example,
installing Google Chrome on a PC. Building a simple Docker image follows a few
analogous steps.

Docker File Commands and Procedures:

FROM  Used to Load the OS (also called base image) inside Docker File.

RUN  Used to install required software and dependencies.

CMD  Startup command for that docker file.

After creating the Dockerfile using the above three steps, we have to build an image.

docker build <path of the Dockerfile>

If we are in the same directory as the Dockerfile, simply give '.' in place of the
location/path:

docker build .

After that you will get an image ID; using it, run the image.

docker run <image-id from the previous build step>

We already know that we can directly download an image of Redis and run it in Docker.
But here, let us try to create our own custom image of redis-server. For the OS we will
use a simple base image – Alpine Linux – and follow the other steps mentioned above.
Alpine Linux is a simple, minimalistic Linux distribution: a container requires no
more than 8 MB, and a minimal installation to disk requires around 130 MB of storage.

Create a directory (name it as you wish, e.g. "Docker") in the home directory, then
another one named "redis", and then create a Dockerfile.

mkdir Docker  creates a directory

mkdir redis  creates a directory

code Dockerfile  here we are using the Visual Studio Code editor to write the file.

In that Dockerfile, write the commands below. This is a simple example.

Inside the Dockerfile

# Use a base image (comments)

FROM alpine

# Install the required dependencies and software.

RUN apk add --update redis
(The above is the Alpine-specific package command; don't worry, just copy it for now.)

# Specify the startup command

CMD ["redis-server"]

# Final step (this is in the Linux terminal)

docker run <final-image-ID>
From the screenshot below we can infer that:

 Docker pulled the Alpine OS and created an image with the image-ID
*********f2a in the first step.
 With this *********f2a image, Docker started a 1st temporary intermediate
container with container-ID *********1d7 and installed the redis-server inside that
Alpine container. It then stopped the 1st temporary intermediate container and created a
new image from it with image-ID *********ccc.
 It then started a 2nd intermediate container *********e30 running the previous
image *********ccc. It performed the required/given steps, if any, stopped the 2nd
intermediate container, and created a final image with image-ID *********efb.
Diagrammatic Representation of the above said procedure
Caching in Docker

If an image for the same instruction already exists in the cache, Docker reuses the
cached layer and returns the previous result rather than creating it again. This boosts
performance when dealing with large/heavy software applications, which is one reason
Docker is very fast compared to other platforms.
Case 1: Executing the same commands

Compare the image-IDs of the previous screenshot with the screenshot below. Here
we can see that Docker runs the command FROM alpine, but it finds an image for it
already in the cache, so it reuses the same *********f2a image. The same can
be seen with image-IDs *********ccc and *********efb.

Case 2: Adding additional things in the same order

Let us say we need to install a C compiler after the Redis server in the same container.
Let us add the C compiler to the Dockerfile.

Inside the Dockerfile

# Use a base image (comments)

FROM alpine

# Install the required dependencies and software.

RUN apk add --update redis
RUN apk add --update gcc
(The above are Alpine-specific package commands; don't worry, just copy them for now.)

# Specify the startup command

CMD ["redis-server"]
Here we can observe that the first two commands (FROM and RUN) are the same, so
Docker reused the same previous images *********f2a and *********ccc. But after this
we inserted the gcc compiler as the next step, so we can see a new image and
container being generated, and the final image is ********98e, which is different from
*********efb.
Case 3: Inserting the new command out of order (earlier than redis-server).

Here we can observe that the cache is used only for the first command (FROM);
from the next command onwards Docker uses new images and containers, because it
cannot find a cached image of Alpine OS with gcc installed. So it installs gcc, then
redis-server, and then creates the final image. Only *********f2a is reused from the cache.

So, to get better performance and make effective use of caching, add new
commands at the bottom of the Dockerfile, not at the top, as in the sketch below.
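A sketch of that ordering idea, based on the Dockerfile above: the stable instructions stay at the top (and stay cached), and anything new is appended below them.

# changes rarely – these layers stay cached across builds
FROM alpine
RUN apk add --update redis

# new or frequently changing instructions go below the stable ones
RUN apk add --update gcc

# specify startup command
CMD ["redis-server"]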
Naming Convention for Docker Images and Containers

<Docker-Hub User-ID>/<image-name>:<version-tag>

Below is the command to give custom names while building.

docker build -t exampleuserid/mydocker:latest <path>

Docker Hub User ID: your user ID on Docker Hub. If you are running locally it is
not mandatory, but it is good to follow the convention.

Image-Name: the name you want to give to the image. This should always be in
lower case.

Version Tag: use latest for the version tag if a version number is not maintained;
otherwise you can use v1, v2, v3, etc.

So if we want to run the image, we need not search for the image ID every time; we can
run it using the custom name we gave, as shown below.
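For example, after building with the -t flag (names are placeholders):

docker build -t exampleuserid/mydocker:latest .
docker run exampleuserid/mydocker:latest        run by name instead of searching for the image ID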

Names for Containers

docker run -d / -it --name <container-name> <image-name>

-d  detach and run in the background. Meaning: I am not interested in watching the
logs, I just want the container to be started.

docker inspect <container-name/id>

Observe the "names" column in the screenshot below. The name of the container is
what we gave it. So for future references we can use the name of the busybox
container instead of the container ID (which we would otherwise need to get from the
docker ps command every time).

Note: Container names have to be unique. If you want to use the same name then you
have to delete the previous container and create with the same name.
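A short sketch of working with a container name instead of an ID:

docker run -d --name my-busybox busybox sleep 300
docker inspect my-busybox                   refer to the container by name, not ID
docker stop my-busybox
docker rm my-busybox                        the name my-busybox can be reused only after this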
Running a web application in Docker

Basics of Web Development

 The part of a website that the user interacts with directly is termed the frontend. It is
also referred to as the 'client side' of the application.

Front end Languages: HTML, CSS, JavaScript

Front end Frameworks: Angular JS, React.js, jQuery, SASS, Ember.js etc.

 The backend is the server side of the website. It stores and arranges data and also makes
sure everything on the client side of the website works fine. It is the part of the
website that you cannot see or interact with directly.
Backend languages: PHP, C++, Python, Java, JavaScript (both frontend and
backend)
Backend Frameworks: Express, Django, Rails, Laravel, Spring, etc.

To understand the difference between Language and Framework, think of the


popular sport cricket. If cricket is the language, the various formats such as Twenty20,
One Day International, and Test series are the frameworks. A framework is a
collection of useful tools written for a particular programming language – much like a
set of libraries.

Java is a language and spring is one of the popular frameworks used to create a
Java web application. Similarly, we have Python with Django, Ruby on Rails,
etc.

 JavaScript: JavaScript allows developers to use a single language on both the
server side and the client side. Node JS is a JavaScript runtime environment: a
runtime is software made to execute other software – it is not a programming
language or a framework. Angular, on the other hand, is a front-end framework
for JS.
 Databases for Web development: ORACLE, MYSQL, MICROSOFT SQL,
POSTGRESQL, MONGODB.
A simple Hello world Program is being run using the Node JS application.

Suppose we have to run a Node JS application in Docker; there are two ways:

1. You can download the Alpine base image, install Node JS in the
dependencies section of the Dockerfile, and later run it. [First we have to install
npm (node package manager), then we have to start npm to run a Node JS
project. The 'npm install' command installs all dependent packages.]

#base image
FROM alpine

#install dependencies
RUN apk add --update nodejs npm
#[installing Node JS and npm on Alpine in the above step]

#copy the project code (package.json and sources) into the image so npm install can find it
WORKDIR /usr/app
COPY ./ ./
RUN npm install

#startup commands
CMD ["npm","start"]
#[the command is "npm start"; if there is a space between words in a command,
#we have to separate them with a comma]

2. Or you can directly download the image with Node JS already installed on Alpine and
just run the Node JS project. These images are released officially by Node JS.
You can find images at www.hub.docker.com

#base image
FROM node:alpine
# [node is the image name and alpine is the tag/variant]

WORKDIR /usr/app
# [use any name for your app directory, like myapp, first, firstapp, etc.]

COPY ./ ./
# [copies the code from the local machine into the image]

#install dependencies
RUN npm install

#startup commands
CMD ["npm","start"]
#[the command is "npm start"; if there is a space between words in a command,
#we have to separate them with a comma]

WORKDIR: The WORKDIR instruction sets the working directory for all
subsequent Dockerfile instructions. Some frequently used instructions in
a Dockerfile are RUN, ADD, CMD, ENTRYPOINT, and COPY. If the WORKDIR
directory does not exist, it gets created automatically while the instructions are
processed.

In Docker, there are two ways to copy a file, namely, ADD and COPY. Though there
is a slight difference between them in regard to the scope of the functions, they more
or less perform the same task.

COPY: takes in a src and destination. It only lets you copy in a local file or directory
from your host (the machine building the Docker image) into the Docker image itself.

ADD: lets you do that too, but it also supports 2 other sources. First, you can use a
URL instead of a local file / directory. Secondly, you can extract a tar file from the
source directly into the destination. A valid use case for ADD is when you want to
extract a local tar file into a specific directory in your Docker image.
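A hedged Dockerfile fragment contrasting the two (file names and the URL are illustrative only):

# COPY: only local files/directories from the build context
COPY ./src /app/src

# ADD: can also fetch from a URL ...
ADD https://example.com/config.json /app/config.json

# ... or auto-extract a local tar archive into the destination
ADD ./vendor.tar.gz /app/vendor/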

Other commands include ADD, COPY, ENV, EXPOSE, FROM, LABEL,
STOPSIGNAL, USER, VOLUME, WORKDIR, and ONBUILD. Please refer to the
documentation for details.
Networking in Docker

Individual containers communicate with each other through a network to perform the
required actions, and this is nothing but Docker Networking. So, you can define
Docker Networking as a communication passage through which all the isolated
containers communicate with each other in various situations to perform the required
actions.

Goals of Docker Networking

 Flexibility – Docker provides flexibility by enabling any number of applications
on various platforms to communicate with each other.
 Cross-Platform – Docker can easily be used across platforms, working across
various servers with the help of Docker Swarm clusters.
 Scalability – Docker is a fully distributed network, which enables applications to
grow and scale individually while ensuring performance.
 Decentralized – Docker uses a decentralized network, which enables applications
to be spread out and highly available. In the event that a container or a host
suddenly goes missing from your pool of resources, you can either bring up an
additional resource or fail over to services that are still available.
 User-Friendly – Docker makes it easy to automate the deployment of services,
making them easy to use in day-to-day life.
 Support – Docker offers out-of-the-box support. The ability to use Docker
Enterprise Edition and get all of this functionality easily makes the Docker
platform very easy to use.

To enable the above goals, you need something known as the Container Network
Model.

Container Network Model (CNM)

Libnetwork is an open source Docker library which implements all of the key
concepts that make up the CNM.

 IPAM (IP Address Management) is used to create/delete address pools and
allocate/deallocate container IP addresses.
 Network Drivers: Docker's networking subsystem is pluggable using drivers.
Below are the details of Docker's networking drivers:
 Bridge: The default network driver. Bridge networks are usually used when
your applications run in standalone containers that need to communicate.
 Host: Here the container's network stack is not isolated from the Docker host
(the container shares the host's networking namespace), and the container
does not get its own IP address allocated. For instance, if you run a
container which binds to port 80 and you use host networking, the
container's application is available on port 80 on the host's IP address.
 Overlay: Overlay networks connect multiple Docker daemons together and
enable swarm services to communicate with each other.
 MacVLAN: If you want to be directly connected to the physical network,
you can use the macvlan network driver to assign a MAC address to each
container's virtual network interface, making it appear to be a physical
network interface directly connected to the physical network.
 None: Disables all networking for the container. Usually used in
conjunction with a custom network driver. None is not available for swarm
services.
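A sketch of picking a driver for a container (assuming the standard busybox and nginx images; host networking behaves this way on Linux):

docker network ls                                    lists the built-in bridge, host and none networks
docker run -d --network bridge nginx                 default bridge networking
docker run -d --network host nginx                   shares the host's network stack, no port mapping needed
docker run -d --network none busybox sleep 300       no networking at all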

Every container has a virtual NIC. By default, the bridge network is responsible for all
IP communication and addressing. The command to check a container's IP address:

docker inspect <container-name/id> | grep "IPAddress"             (Linux)
docker inspect <container-name/id> | Select-String "IPAddress"    (Windows PowerShell)

(grep / Select-String filters the inspect output for the field that you want to search.)

Usage of the Select-String and grep commands.


Types of Communication:

1. Container-to-container communication within the same network can be done directly.
You can use the ping command to check this: run two containers and ping from one to
the other, and you can check the response, as in the sketch below.

If the network driver is the default (bridge), all containers can communicate with one another.
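A minimal sketch of that check on the default bridge network (the IP address shown is illustrative; use whatever docker inspect reports on your machine):

docker run -dit --name box1 busybox
docker run -dit --name box2 busybox
docker inspect box2 | grep "IPAddress"           note box2's IP, e.g. 172.17.0.3
docker exec -it box1 ping -c 3 172.17.0.3        box1 reaches box2 over the default bridge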

2. Private network of containers


a) Creating custom network

docker network create <network-name>


b) Inspecting the created network

docker network inspect <network-name>


c) List all networks

docker network ls
d) Remove a network

docker network rm <network-name>


e) Attaching containers to that network:
If the container is not yet running, you can assign it to the network when you start it:

docker run --name <container-name> --network <network-name> <image-name>

If the container is already running, you can still attach it to the network:

docker network connect <network-name> <container-name>

To disconnect it later, use the command below:

docker network disconnect <network-name> <container-name>

There are 4 containers: busybox1, busybox2, busybox3 and busybox4. Only the first
three are in busyNetwork; busybox4 is not in the network.

We have three busybox containers running in the same network. The first screenshot
shows that containers in the same network can communicate, but they cannot
communicate with the outside container (busybox4).

Communication within the same network (successful).

Communication with a container outside the network (failure).


Note:

 This way we can create an isolation of containers.
 We can address containers in the same network by container name as well. If you
restart a container or network, its IP address may change, so it is better to address
containers by name rather than by IP (service discovery using names, not IPs).
 If you want to remove a network, first remove the containers inside it and then
delete/remove the network.

Web Application that counts the number of visitors

index.js, package.json, Dockerfile

Download the code from the above files.

(a) On the local machine: Here the application is dynamic and hence requires a DB. For
that purpose we are using a Redis database in the Node JS application. You can go
through the code and understand it.

Start the redis-server locally (by default it starts on port 6379) before building the
application code, and then build and execute the application locally using the commands
below.
Go to the link localhost:9999 (as we are using port 9999 for listening); there you
can access the application web page and see the visitor count.

This is the procedure for running a dynamic app with a DB on the local machine.

Follow the steps below for running the application in Docker.

(b) In Docker: Here we have to start the Redis server container first, as the application
code depends on it; then download the Node JS image and run the same downloaded
code in the Node JS container. Then connect the Node JS and Redis containers using
the networking concepts of Docker. (In the IP address section, don't give localhost for
the Redis server; give the container name or the actual IP of the redis-server container.)
Flowchart to build a dynamic web application.

[docker network create visitor-network]


[docker run -it -p 6379:6379 --name my-redis-server --network visitor-network redis]

Docker File

#base image
FROM node:alpine

#dependencies

WORKDIR /usr/visitorCount

COPY ./package.json ./
#we don't change this file often and it affects build performance (caching), so copy it separately
COPY ./ ./

#startup command
CMD ["npm", "start"]
Building the Dockerfile, where we give the image the name visitorcount.

[docker build -t visitorcount .]

[docker run -it -p 9999:9999 --network visitor-network visitorcount]

Note: The networking part is very important, as it is what connects Redis to Node.
Here Node JS requires Redis to be started before it, so we started Redis first, added it
to a network, then ran the Node JS application and added it to the same network.
Docker Compose

Compose is a tool for defining and running multi-container Docker applications. With
Compose, you use a YAML file to configure your application’s services. Then, with a
single command, you create and start all the services from your configuration.

Compose has commands for managing the whole lifecycle of your application:

 Start, stop, and rebuild services


 View the status of running services
 Stream the log output of running services
 Run a one-off command on a service
 Services are the images that you want to compose.
 We can do port mapping to map to host machine ports, or use the expose option
available in Docker Compose, which makes ports available only inside Docker
[understand the difference between expose and port mapping].

In the previous web application exercise we created two containers and used
networking to establish a connection between them, so it was a somewhat tedious task
to figure out which one had to start first. Here comes Docker Compose, which helps
solve that problem.
Let us do the same exercise using Docker Compose. Download the same code and
create a new docker-compose.yml file.

docker-compose.yml
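The contents of that file are not reproduced here; below is a minimal sketch of what a docker-compose.yml for the visitor-count application could look like (service names, the build context, and the port mapping are assumptions carried over from the earlier exercise):

version: "3"
services:
  my-redis-server:          # redis service; the node app can reach it by this name
    image: "redis"
  visitor-app:              # node js app built from the Dockerfile in this directory
    build: .
    ports:
      - "9999:9999"         # map host port 9999 to the container's port 9999
    depends_on:
      - my-redis-server     # compose starts redis before the node app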

docker-compose up  just starts the containers

docker-compose up --build  builds the code and then starts the containers

docker restart <container-ID>  restarts a container

docker-compose down  shuts down the containers


Volumes

Volumes are the preferred mechanism for persisting data generated by and used by
Docker containers. Volumes are completely managed by Docker engine. In addition,
volumes are often a better choice than persisting data in a container’s writable layer,
because a volume does not increase the size of the containers using it, and the
volume’s contents exist outside the lifecycle of a given container.

If your container generates non-persistent state data, consider using a tmpfs mount to
avoid storing the data anywhere permanently, and to increase the container’s
performance by avoiding writing into the container’s writable layer.

Applications of Volumes:

 For quicker development: In a development environment we make code changes
very often. As a standard measure, for proper testing we have to build and deploy the
code even after making a small change. For quicker development we can instead use
volumes, where we edit the code directly and there is no need to rebuild and redeploy,
because we are making changes to the deployed code itself.
We have two containers in the above web application.

Suppose we have to make changes only in the visitor-app JS code and not in the
Redis server, and we do a "docker-compose down", make the changes, and then
"docker-compose up --build"; both containers in the docker-compose file will
restart, but restarting Redis was not required. In this case we can use volumes,
where we declare the source code as a volume (see the sketch after this list) and
restart only the required portion (the Node JS portion in this case).
This feature works best on Linux systems; if you are using Docker on Windows it
might not work as expected.

 Migration: during version changes

Suppose we have a database of version X.Y and we need to either upgrade or
downgrade it. If we bring the containers down and update the version, the old stored
data will be lost. To prevent this we declare VOLUMES under the database section
of the docker-compose.yml file, which creates a dump (an rdb or backup file); then,
after upgrading or downgrading to the new version, the new database server restores
the backed-up files. If persistence is enabled, the data is stored in the /data volume
folder.
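A hedged compose sketch showing both uses of volumes (paths and service names are assumptions carried over from the earlier example):

version: "3"
services:
  visitor-app:
    build: .
    ports:
      - "9999:9999"
    volumes:
      - ./:/usr/visitorCount              # bind mount: local code edits appear inside the container
      - /usr/visitorCount/node_modules    # keep the container's installed node_modules
  my-redis-server:
    image: "redis"
    volumes:
      - redis-data:/data                  # named volume: the redis dump survives container upgrades

volumes:
  redis-data:                             # declared so Docker manages this named volume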
All Commands

docker create <image-name>
docker start <container-id>
docker start -a <container-id>
docker logs <container-id>
docker run -a <image-name>
docker stop <container-id>  exits after the current execution completes; otherwise it
waits 10 seconds and then kills the container automatically (easily seen using the
busybox kill command).
docker kill <container-id>  stops the container abruptly, without looking at the state
of the application (easily seen using the busybox kill command).
docker pull <image-name>
docker images
docker run <image-name> <commands to image> <inputs to image>
docker ps
docker ps -a
docker search <image-name>
docker exec -it <container-id> <commands>
docker build -t <image-name> <path>

Error Log:

1. /usr/local/bin/docker-entrypoint.sh: 16: exec: 6.0.10: not found – unable to run the
Redis server (Windows machine; see the links below, often a line-ending issue).
https://stackoverflow.com/questions/38905135/why-wont-my-docker-entrypoint-sh-execute
https://stackoverflow.com/questions/10418975/how-to-change-line-ending-settings
Sources:
https://docker-curriculum.com/#webapps-with-docker
