
Virtualization
Virtualization is a technology that enables the creation of multiple
virtual versions of hardware and software resources, such as servers,
operating systems, and storage devices, on a single physical
machine. It allows users to run multiple operating systems and
applications on a single physical machine, without having to
purchase additional hardware or software resources.

How Virtualization Works


Virtualization works by using a software layer, called a hypervisor, to
manage the allocation of physical resources to virtual machines. The
hypervisor acts as a mediator between the virtual machines and the
physical resources, allocating resources such as CPU, memory, and
storage to each virtual machine as needed. Each virtual machine
operates as if it were a separate physical machine, with its own
operating system, applications, and network connectivity.
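
As an illustration only (not part of the original text), the sketch below shows how an administrator might inspect and adjust the resources a hypervisor allocates to a virtual machine, using the libvirt command-line tool virsh on a Linux/KVM host; the VM name "webserver-vm" is a hypothetical example.

# List all VMs known to the hypervisor
$ virsh list --all
# Show the vCPUs, memory, and state currently allocated to one VM
$ virsh dominfo webserver-vm
# Allocate 4 virtual CPUs (applies to the persistent config, takes effect on next boot)
$ virsh setvcpus webserver-vm 4 --config
# Set the current memory allocation to 4 GiB (value in KiB; cannot exceed the VM's maximum)
$ virsh setmem webserver-vm 4194304 --config

Each guest sees only the CPU, memory, and devices the hypervisor grants it, which is exactly the mediation role described above.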

Types of virtualization
Server Virtualization

Server virtualization is the most common type of virtualization and involves partitioning a physical server into multiple virtual servers.
Each virtual server operates as if it were a separate physical server,
with its own operating system, applications, and network
connectivity. Server virtualization can help organizations reduce
hardware costs, improve resource utilization, and increase flexibility.

Desktop Virtualization

Desktop virtualization involves creating virtual desktops on a server, which can be accessed by end-users from their devices. Desktop
virtualization can help organizations simplify desktop management,
increase security, and reduce costs, by allowing users to access
virtual desktops from any device, anywhere.

Storage Virtualization

Storage virtualization involves pooling multiple physical storage devices, such as hard drives and solid-state drives (SSDs), into a
single virtual storage device. This virtual storage device can then be
partitioned and allocated to different virtual machines or
applications as needed. Storage virtualization can help organizations
improve storage utilization, simplify storage management, and
increase flexibility.

Network Virtualization

Network virtualization involves creating multiple virtual networks on a physical network infrastructure. Each virtual network operates as if
it were a separate physical network, with its own network address
space, routing tables, and access control policies. Network
virtualization can help organizations improve network utilization,
simplify network management, and increase flexibility.

Hypervisor types
There are two types of hypervisors:
Type 1 Hypervisor:
 Runs directly on the physical hardware
 Provides direct access to physical hardware resources
 Typically used in server virtualization scenarios
 Offers higher performance and better security than Type 2 hypervisors
 Examples include VMware ESXi, Microsoft Hyper-V, and Citrix XenServer

Type 2 Hypervisor:
 Runs on top of an existing operating system
 Uses the host operating system to access hardware resources
 Typically used in desktop virtualization scenarios
 Offers lower performance and security than Type 1 hypervisors
 Examples include Oracle VirtualBox, VMware Workstation, and Parallels Desktop

Cloud computing and virtualization


Cloud computing relies heavily on virtualization to provide users
with scalable and cost-effective computing resources. By leveraging
virtualization, cloud providers can provide users with flexible and
dynamic computing environments that can be easily scaled up or
down to meet changing business requirements.

Some of the relevant AWS services that are built on virtualization technology include the following (see the CLI sketch after this list):

 Elastic Compute Cloud (EC2): EC2 is a web service that provides resizable compute capacity in the cloud. EC2 uses virtualization
to provide users with a scalable and cost-effective way to run
applications on a virtual server. Users can choose from a
variety of virtual machine instances with different CPU,
memory, storage, and networking capacities, and can launch,
stop, and terminate instances as needed.
 Lambda: AWS Lambda is a serverless computing service that
allows users to run code without provisioning or managing
servers. Lambda uses virtualization to provide users with a
scalable and cost-effective way to run code in the cloud. Users
simply upload their code to Lambda, and the service takes care
of provisioning and scaling the necessary compute resources
to run the code.
 Elastic Block Store (EBS): EBS is a high-performance block
storage service that allows users to store persistent data for
use with Amazon EC2 instances. EBS uses virtualization to
provide users with a scalable and highly available way to store
data in the cloud. Users can choose from a variety of EBS
volume types with different performance characteristics, and
can attach and detach volumes from EC2 instances as needed.
 Virtual Private Cloud (VPC): VPC is a virtual network service
that allows users to provision a logically isolated section of the
AWS Cloud. VPC uses virtualization to provide users with a
secure and flexible way to launch AWS resources into a virtual
network that they define. Users can configure the network
topology, create subnets, and control inbound and outbound
traffic to and from their AWS resources.
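
The command-line sketch below is illustrative only and is not taken from the AWS documentation; it shows, in rough form, how these services are typically driven through the AWS CLI. The AMI ID, key pair name, resource IDs, and CIDR ranges are hypothetical placeholders.

# Launch a small EC2 instance (a virtual machine); the AMI ID and key name are placeholders
$ aws ec2 run-instances --image-id ami-0abcdef1234567890 --instance-type t2.micro --key-name my-key

# Create a 20 GiB EBS volume and attach it to an instance as a block device
$ aws ec2 create-volume --size 20 --availability-zone us-east-1a --volume-type gp3
$ aws ec2 attach-volume --volume-id vol-0123456789abcdef0 --instance-id i-0123456789abcdef0 --device /dev/sdf

# Create an isolated virtual network (VPC) and a subnet inside it
$ aws ec2 create-vpc --cidr-block 10.0.0.0/16
$ aws ec2 create-subnet --vpc-id vpc-0123456789abcdef0 --cidr-block 10.0.1.0/24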

Virtual Private Server (VPS)


A Virtual Private Server (VPS) is a virtual machine that is hosted on a
physical server, but operates as if it were a separate physical
machine. VPS hosting is a popular hosting option for individuals and
businesses that require more control and flexibility than shared
hosting, but don't want the expense and complexity of dedicated
hosting.

A VPS hosting provider typically uses virtualization technology to partition a physical server into multiple virtual machines, each with
its own operating system, applications, and resources. Users can
typically customize the configuration of their VPS, including CPU,
RAM, storage, and bandwidth, and can install any software or
applications they require.

Examples

 Amazon Web Services (AWS) Elastic Compute Cloud (EC2)
 DigitalOcean
 Linode
 Vultr
 Google Cloud Platform (GCP) Compute Engine

AWS Lightsail
AWS Lightsail is a simplified, easy-to-use cloud computing solution
offered by Amazon Web Services (AWS). It provides users with a
pre-configured virtual private server (VPS), storage, and networking
capabilities, as well as a range of other features, all at an affordable
price point.
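
As a rough, non-authoritative sketch (not part of the original text), a Lightsail VPS can be created from the AWS CLI roughly as follows; the instance name is made up, and the blueprint and bundle IDs shown are just examples of the kind of values Lightsail accepts.

# Create a small Lightsail instance running a preconfigured WordPress blueprint
$ aws lightsail create-instances \
    --instance-names my-lightsail-vps \
    --availability-zone us-east-1a \
    --blueprint-id wordpress \
    --bundle-id nano_2_0

# Check the instance state and public IP address
$ aws lightsail get-instance --instance-name my-lightsail-vps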

Features
AWS Lightsail offers a range of features to help users easily deploy
and manage their applications and websites, including:

 Pre-configured virtual private server (VPS) instances with a range of operating systems and applications, such as WordPress, Drupal, and Joomla.
 Integrated storage, including solid-state drives (SSD) and block
storage.
 Networking capabilities, including static IP addresses, DNS
management, and a firewall.
 Monitoring and alerting, including performance metrics and
notifications.
 Automated backups and snapshots for easy recovery in case of
data loss.

Benefits
AWS Lightsail offers several benefits to users, including:

 Ease of use: With pre-configured instances and an easy-to-use management console, AWS Lightsail makes it easy for users to deploy and manage their applications and websites.
 Affordability: AWS Lightsail is an affordable cloud computing
solution, with pricing starting at just a few dollars per month.
 Scalability: AWS Lightsail instances can be easily scaled up or
down to meet changing business requirements.
 Security: AWS Lightsail provides a range of security features,
including a firewall and automated backups, to help users
protect their data and applications.
 Integration with other AWS services: AWS Lightsail can be
easily integrated with other AWS services, such as Amazon S3
and Amazon RDS.

https://www.ibm.com/topics/virtualization
What is virtualization?

Virtualization is a process that allows for more efficient utilization of physical computer
hardware and is the foundation of cloud computing.

Virtualization uses software to create an abstraction layer over computer hardware that allows
the hardware elements of a single computer—processors, memory, storage and more—to be
divided into multiple virtual computers, commonly called virtual machines (VMs). Each VM
runs its own operating system (OS) and behaves like an independent computer, even though it
is running on just a portion of the actual underlying computer hardware.

It follows that virtualization enables more efficient utilization of physical computer hardware
and allows a greater return on an organization’s hardware investment.

Today, virtualization is a standard practice in enterprise IT architecture. It is also the technology that drives cloud computing economics. Virtualization enables cloud providers to
serve users with their existing physical computer hardware; it enables cloud users to purchase
only the computing resources they need when they need it, and to scale those resources cost-
effectively as their workloads grow.

For a further overview of how virtualization works, see the video “Virtualization Explained.”

Benefits of virtualization

Virtualization brings several benefits to data center operators and service providers:

 Resource efficiency: Before virtualization, each application server required its own dedicated
physical CPU—IT staff would purchase and configure a separate server for each application
they wanted to run. (IT preferred one application and one operating system (OS) per computer
for reliability reasons.) Invariably, each physical server would be underused. In contrast,
server virtualization lets you run several applications—each on its own VM with its own OS
—on a single physical computer (typically an x86 server) without sacrificing reliability. This
enables maximum utilization of the physical hardware’s computing capacity.

 Easier management: Replacing physical computers with software-defined VMs makes it easier to use and manage policies written in software. This allows you to create automated IT service management workflows. For example, automated deployment and configuration tools enable administrators to define collections of virtual machines and applications as services, in software templates. This means that they can install those services repeatedly and consistently without cumbersome, time-consuming, and error-prone manual setup. Admins can use
virtualization security policies to mandate certain security configurations based on the role of
the virtual machine. Policies can even increase resource efficiency by retiring unused virtual
machines to save on space and computing power.

 Minimal downtime: OS and application crashes can cause downtime and disrupt user
productivity. Admins can run multiple redundant virtual machines alongside each other and
failover between them when problems arise. Running multiple redundant physical servers is
more expensive.

 Faster provisioning: Buying, installing, and configuring hardware for each application is time-consuming. Provided that the hardware is already in place, provisioning virtual machines
to run all your applications is significantly faster. You can even automate it using management
software and build it into existing workflows.
For a more in-depth look at the potential benefits, see "5 Benefits of Virtualization."
Solutions

Several companies offer virtualization solutions covering specific data center tasks or end
user-focused, desktop virtualization scenarios. Better-known examples include VMware,
which specializes in server, desktop, network, and storage virtualization; Citrix, which has a
niche in application virtualization but also offers server virtualization and virtual desktop
solutions; and Microsoft, whose Hyper-V virtualization solution ships with Windows and
focuses on virtual versions of server and desktop computers.
Virtual machines (VMs)

Virtual machines (VMs) are virtual environments that simulate a physical computer in software form. They normally comprise several files containing the VM’s configuration, the
storage for the virtual hard drive, and some snapshots of the VM that preserve its state at a
particular point in time.
For a complete overview of VMs, see "What is a Virtual Machine?"
Hypervisors

A hypervisor is the software layer that coordinates VMs. It serves as an interface between the
VM and the underlying physical hardware, ensuring that each has access to the physical
resources it needs to execute. It also ensures that the VMs don’t interfere with each other by
impinging on each other’s memory space or compute cycles.

There are two types of hypervisors:

 Type 1 or “bare-metal” hypervisors interact with the underlying physical resources, replacing the traditional operating system altogether. They most commonly appear in virtual server scenarios.
 Type 2 hypervisors run as an application on an existing OS. Most commonly used on
endpoint devices to run alternative operating systems, they carry a performance overhead
because they must use the host OS to access and coordinate the underlying hardware
resources.
“Hypervisors: A Complete Guide” provides a comprehensive overview of everything about
hypervisors.
Types of virtualization

To this point we’ve discussed server virtualization, but many other IT infrastructure elements
can be virtualized to deliver significant advantages to IT managers (in particular) and the
enterprise as a whole. In this section, we'll cover the following types of virtualization:

 Desktop virtualization
 Network virtualization
 Storage virtualization
 Data virtualization
 Application virtualization
 Data center virtualization
 CPU virtualization
 GPU virtualization
 Linux virtualization
 Cloud virtualization
Desktop virtualization

Desktop virtualization lets you run multiple desktop operating systems, each in its own VM
on the same computer.

There are two types of desktop virtualization:

 Virtual desktop infrastructure (VDI) runs multiple desktops in VMs on a central server and streams them to users who log in on thin client devices. In this way, VDI lets an organization provide its users access to a variety of OSs from any device, without installing an OS on any device. See "What is Virtual Desktop Infrastructure (VDI)?" for a more in-depth explanation.
 Local desktop virtualization runs a hypervisor on a local computer, enabling the user to run
one or more additional OSs on that computer and switch from one OS to another as needed
without changing anything about the primary OS.
For more information on virtual desktops, see “Desktop-as-a-Service (DaaS).”
Network virtualization
Network virtualization uses software to create a “view” of the network that an administrator can use to manage the network from a single console. It abstracts hardware elements and functions (e.g., connections, switches, and routers) into software running on a hypervisor. The network administrator can modify and control these elements without touching the underlying physical components, which dramatically simplifies network management.

Types of network virtualization include software-defined networking (SDN), which virtualizes hardware that controls network traffic routing (called the “control plane”),
and network function virtualization (NFV), which virtualizes one or more hardware
appliances that provide a specific network function (e.g., a firewall, load balancer, or traffic
analyzer), making those appliances easier to configure, provision, and manage.
Storage virtualization
Storage virtualization enables all the storage devices on the network—whether they’re
installed on individual servers or standalone storage units—to be accessed and managed as a
single storage device. Specifically, storage virtualization masses all blocks of storage into a
single shared pool from which they can be assigned to any VM on the network as needed.
Storage virtualization makes it easier to provision storage for VMs and makes maximum use
of all available storage on the network.
For a closer look at storage virtualization, check out "What is Cloud Storage?"
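
Although the text above describes storage virtualization at the SAN/array level, the same pooling idea can be illustrated on a single Linux host with LVM. The sketch below is an assumption-laden example, not something from the source: the device names /dev/sdb and /dev/sdc and the pool and volume names are hypothetical.

# Mark two spare physical disks as LVM physical volumes (hypothetical device names)
$ pvcreate /dev/sdb /dev/sdc
# Pool them into one volume group, i.e. one logical "storage device"
$ vgcreate storage_pool /dev/sdb /dev/sdc
# Carve a 20 GiB logical volume out of the pool to hand to a VM or application
$ lvcreate --size 20G --name vm_disk storage_pool
# Inspect the pool and its allocations
$ vgs
$ lvs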
Data virtualization

Modern enterprises store data from multiple applications, using multiple file formats, in
multiple locations, ranging from the cloud to on-premise hardware and software systems.
Data virtualization lets any application access all of that data—irrespective of source, format,
or location.

Data virtualization tools create a software layer between the applications accessing the data
and the systems storing it. The layer translates an application’s data request or query as
needed and returns results that can span multiple systems. Data virtualization can help break
down data silos when other types of integration aren’t feasible, desirable, or affordable.

Application virtualization

Application virtualization runs application software without installing it directly on the user’s
OS. This differs from complete desktop virtualization (mentioned above) because only the
application runs in a virtual environment—the OS on the end user’s device runs as usual.
There are three types of application virtualization: 

 Local application virtualization: The entire application runs on the endpoint device but runs
in a runtime environment instead of on the native hardware.
 Application streaming: The application lives on a server which sends small components of
the software to run on the end user's device when needed.
 Server-based application virtualization: The application runs entirely on a server that sends only its user interface to the client device.
Data center virtualization

Data center virtualization abstracts most of a data center’s hardware into software, effectively
enabling an administrator to divide a single physical data center into multiple virtual data
centers for different clients.
Each client can access its own infrastructure as a service (IaaS), which would run on the same
underlying physical hardware. Virtual data centers offer an easy on-ramp into cloud-based
computing, letting a company quickly set up a complete data center environment without
purchasing infrastructure hardware.

CPU virtualization

CPU (central processing unit) virtualization is the fundamental technology that makes
hypervisors, virtual machines, and operating systems possible. It allows a single CPU to be
divided into multiple virtual CPUs for use by multiple VMs.

At first, CPU virtualization was entirely software-defined, but many of today’s processors
include extended instruction sets that support CPU virtualization, which improves VM
performance.

GPU virtualization

A GPU (graphical processing unit) is a special multi-core processor that improves overall
computing performance by taking over heavy-duty graphic or mathematical processing. GPU
virtualization lets multiple VMs use all or some of a single GPU’s processing power for faster
video, artificial intelligence (AI), and other graphic- or math-intensive applications.

 Pass-through GPUs make the entire GPU available to a single guest OS.
 Shared vGPUs divide physical GPU cores among several virtual GPUs (vGPUs) for use by
server-based VMs.
Linux virtualization

Linux includes its own hypervisor, called the kernel-based virtual machine (KVM), which
supports Intel and AMD’s virtualization processor extensions so you can create x86-based
VMs from within a Linux host OS.

As an open source OS, Linux is highly customizable. You can create VMs running versions of
Linux tailored for specific workloads or security-hardened versions for more sensitive
applications.
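
As a quick, hedged illustration (not from the source text), the following commands are a common way to check whether a Linux host can use KVM and to see what the hypervisor is currently running; they assume the libvirt tools are installed.

# Count CPU flags indicating Intel VT-x (vmx) or AMD-V (svm) hardware virtualization support
$ egrep -c '(vmx|svm)' /proc/cpuinfo
# Confirm that the KVM kernel modules are loaded
$ lsmod | grep kvm
# List the virtual machines managed by the local libvirt/KVM hypervisor
$ virsh list --all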

Cloud virtualization

As noted above, the cloud computing model depends on virtualization. By virtualizing servers, storage, and other physical data center resources, cloud computing providers can offer
a range of services to customers, including the following: 

 Infrastructure as a service (IaaS): Virtualized server, storage, and network resources that you can configure based on your requirements.
 Platform as a service (PaaS): Virtualized development tools, databases, and other cloud-based services you can use to build your own cloud-based applications and solutions.
 Software as a service (SaaS): Software applications you use on the cloud. SaaS is the cloud-
based service most abstracted from the hardware.
If you’d like to learn more about these cloud service models, see our guide: “IaaS vs. PaaS vs.
SaaS.”
"This table provides a concise comparison of SaaS
(Software as a Service), IaaS (Infrastructure as a
Service), and PaaS (Platform as a Service) based on
key attributes. It highlights differences in abstraction,
control, examples, customization, patch management,
scalability, security, IT personnel needs, and
development requirements. Please note that the
specifics can vary depending on cloud providers and
offerings."

VPS (Virtual Private Server) is typically associated with the IaaS (Infrastructure as a Service) model in
cloud computing. In an IaaS model, you gain virtual
access to computing resources such as servers,
storage, and networking. A VPS is essentially a virtual
machine running on cloud infrastructure. You have a
certain degree of control over this virtual machine,
including the ability to install software, configure
operating systems, and perform management tasks.

That said, VPS can also be used in a PaaS (Platform as a Service) or even SaaS (Software as a Service) model
if integrated into a broader cloud solution. For example,
within a PaaS offering, a VPS may be used to host
specific applications within the platform.

However, in most cases, when you refer to VPS hosting in cloud computing, it falls within the context of the IaaS
model.
What Is a VPS?
A virtual private server (VPS) is a machine that hosts all the software and data required to run an application or website. It is called virtual because it only consumes a portion of the server's underlying physical resources, which are managed by a third-party provider. However, you get access to your dedicated resources on that hardware.

What is a VPS used for?


We give some examples of VPS server use cases below.

Launch web applications


Anyone can use VPS servers to launch and run web applications. For example, Gourmeat, a
gourmet meat store in the United States used Amazon VPS services to launch an inventory
management system within weeks. Before this, they managed inventory using spreadsheet
reports from individual suppliers. Their cloud-based inventory application integrates the reports,
reduces the time spent on inventory management, and gives key decision-makers simultaneous
data access.

Build test environments


With a VPS server, you can develop and test new applications cost-effectively. For
example, Bugout.dev, a US start-up, built a search engine for developers. They regularly run
experiments to test new features and enhance search functionality for their users. Given the
rejection rate of these experiments, they run them on a virtual private server environment to save
costs.

Secondary storage
A virtual private server can provide secondary storage for data files. For example, it can act as a
file, image, or email server by creating secure, accessible, and centralized storage for a group of
users.

What is VPS hosting?


When creating a website or web app, customers generally need to set up a database, configure a web server, and add their code. Managing physical server hardware can be complex and expensive. To solve this problem, hosting providers manage the underlying hardware and allow users to consume these resources. In VPS hosting, each user receives a virtual machine with dedicated resources that is ready for them to deploy and configure their application or website. This way, customers who use VPS hosting can focus on their applications or websites without having to waste time and energy dealing with the physical servers hosting their code. VPS hosting providers ensure secure, reliable, and consistent performance for their websites.

How does VPS hosting compare to other types of hosting?
Servers typically have more memory, processing power, and storage than those required by a
single website. Web hosting providers share these resources between different customers using
different hosting arrangements. Other than VPS hosting, two types of web hosting solutions are
available.

Shared hosting
In a shared hosting solution, all websites share the same physical server and compete for critical
resources like internal memory, hard disk space, and processing power. The downside of this
web hosting service is that other websites that share the hardware can affect your website's
performance.

Using a shared hosting service is comparable to watching a movie while sitting on a couch with a group of friends. Sometimes one friend may stretch and take up more space, causing the others to sit uncomfortably until they adjust again.

Dedicated hosting
In dedicated hosting, you can rent the entire physical hardware for yourself. The web hosting
provider gives you exclusive access to the entire physical server. Your website performance is
not impacted by the behaviour of any other websites.

Dedicated hosting is like having the whole couch to yourself. It's comfortable but expensive—and
you don’t really need all that extra space.

VPS hosting vs. shared hosting vs. dedicated hosting


VPS hosting sits somewhere between shared and dedicated hosting. It is done by
compartmentalizing the single physical server so each website owner perceives it as a dedicated
server. You get exclusive access to your share of hardware and resources even though the rest
of the physical server is still being shared.

A VPS hosting service is like hiring a first-class cabin on a luxury flight. While there may be other
passengers on the flight, no one can share your cabin. Even better, your cabin can expand or
shrink with your requirements, so you pay for exactly what you need!

When should you switch to VPS hosting?


If you are currently on a shared hosting service and a dedicated server is not in the budget, you
can consider shifting to a VPS hosting plan if you want to do the following.

Handle more website traffic


Shared hosting may work well when you're just starting out, but your website performance may
start degrading as traffic increases. As your website grows and visitor traffic volume increases,
website visitors could experience longer page load times and more wait times. On the other hand,
with VPS hosting, your site will perform better than with shared hosting because it can handle a
higher volume of requests.

Customize applications 
Compared to shared hosting, VPS hosting gives you more control over your web server
environment. You can install custom software and custom configurations. Integrations with other
software, like bookkeeping or CRM systems, also work better with VPS hosting. You can install
custom security measures and firewalls for your system as well.

Reduce server errors


As your website grows, you may have to add more content or complex functionality to it, which
increases processor or memory requirements. This could lead to server errors on shared hosting,
like internal server errors or unavailable service errors. On the other hand, compute-heavy
websites perform much better on VPS hosting because they no longer compete with other sites
for processing power. With VPS hosting, you can also migrate to a new virtual machine with
greater processing power whenever you are ready to grow.

What are the types of VPS hosting?


There are three main types of VPS hosting.

Unmanaged VPS hosting


In unmanaged hosting or self-managed hosting, the business owner has to take care of all the
server responsibilities and maintenance tasks. The hosting provider manages only the physical
server and its availability. Unmanaged VPS hosting requires technical expertise or dedicated in-
house resources to manage server memory, operating system, and other server resources.
Unmanaged VPS hosting is better suited for established businesses with the necessary IT
capabilities.

Managed VPS hosting


Fully managed VPS hosting reduces the time, effort, and technical expertise you need to take
care of your server. The managed VPS hosting provider takes care of all the server-related
responsibilities, like core updates, maintenance, and software installation, so you can concentrate
fully on growing your business. Managed VPS hosting is a hands-free approach to server
management.

Semi-managed VPS hosting


Semi-managed VPS hosting is the middle ground between managed and unmanaged hosting.
The hosting company provides the same basics as unmanaged hosting but adds core software
installation and support. These are some examples of the additional services they provide:

 Operating system updates and patches
 Security enhancements
 Full web server support
 Server event monitoring
 Proactive response and restoration of server

Core-managed (semi-managed) hosting differs from fully managed hosting in that it doesn’t include virus and spam protection, external migrations, full control panel support, or control panel upgrades and patches.

Is VPS hosting secure?


Yes, VPS hosting is secure. VPS security comes from each instance’s isolation from the other
environments on the server. In shared hosting environments, your website shares the same
resources as other websites and can be affected by their vulnerabilities. VPS hosting protects you
from resource-intensive attacks directed at others. You can also make your virtual server more
secure by adding firewalls, antivirus, and other software measures.

For example, a distributed denial of service (DDoS) attack attempts to bring down a website by overwhelming the server with thousands of requests at the same time. In a shared hosting environment, even if the DDoS attack is directed at another website, it can cause your system to crash too, because both websites share the same underlying resources.

Is VPS hosting fast and reliable?


Yes, VPS hosting is fast and reliable because you are allocated your own bandwidth. You can get
reliable performance similar to a dedicated server. You can also choose different operating
systems and optimize server configuration to better suit your application performance.

Why should you choose VPS?


VPS hosting gives you low-cost access to a trained team of professionals who focus full-time on
server management. It brings the following benefits:

 You receive updated best practices and new technologies in VPS hosting.
 You get round-the-clock support to reduce downtime.
 The VPS hosting provider optimizes your environment for performance and security.
 Your IT team can focus on your web application without having to worry about VPS hosting.
 The VPS hosting provider can troubleshoot and fix common issues very quickly.

What are Amazon VPS hosting services?


Amazon Lightsail offers easy-to-use virtual private server (VPS) instances, containers, storage,
databases, and more at a cost-effective monthly price. With Amazon Lightsail, you gain a number
of features that you can use to quickly bring your project to life. Designed as an easy-to-use VPS,
Lightsail offers you a one-stop-shop for all your cloud needs. Some benefits of Amazon Lightsail
include:

 You can click to launch a simple operating system, a preconfigured application, or a development
stack on your virtual server instance.
 You can store your static content, such as images, videos, or HTML files, in object storage for data
backup.
 You can manage web traffic across your VPS servers so that your websites and applications can
accommodate variations in traffic and be better protected from outages.

Docker
ContainerizationinCloudComputingforOS-LevelVirtualization.pdf (local PDF; important)
https://edisciplinas.usp.br/pluginfile.php/318402/course/section/93668/thesis_3.pdf

https://www.researchgate.net/publication/
343599681_On_Security_Measures_for_Containerized_Applications_Imaged_
with_Docker

https://www.geeksforgeeks.org/containerization-using-docker/
Docker is a containerization platform used to package your application and all its dependencies together in the form of containers, so that your application works seamlessly in any environment, whether development, testing, or production. Docker is a tool designed to make it easier to create, deploy, and run applications by using containers.
 
Docker is the world’s leading software container platform. It was launched in 2013 by a company called dotCloud, Inc., which was later renamed Docker, Inc. It is written in the Go language. It has been just six years since Docker was launched, yet communities have already shifted to it from VMs. Docker is designed to benefit both developers and system administrators, making it a part of many DevOps toolchains. Developers can write code without worrying about the testing and production environment. Sysadmins need not worry about infrastructure, as Docker can easily scale the number of systems up and down. Docker comes into play at the deployment stage of the software development cycle.
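
As a hedged illustration of getting started (not part of the source article), the commands below verify a local Docker installation and run a first container; they assume Docker Engine is already installed.

# Check that the Docker client and daemon are installed and reachable
$ docker version
# Run a tiny test container; Docker pulls the hello-world image if it is not cached locally
$ docker run hello-world
# List running containers, then all containers including stopped ones
$ docker ps
$ docker ps -a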
 

Containerization 
Containerization is OS-based virtualization that creates multiple virtual units
in the userspace, known as Containers. Containers share the same host
kernel but are isolated from each other through private namespaces and
resource control mechanisms at the OS level. Container-based virtualization provides a different level of abstraction in terms of virtualization and isolation when compared with hypervisors. Hypervisors virtualize the hardware itself, which introduces overhead from virtualized hardware and virtual device drivers, and a full operating system (e.g., Linux or Windows) runs on top of this virtualized hardware in each virtual machine instance.
Containers, in contrast, implement isolation of processes at the operating system level, thus avoiding such overhead. Containers run on top of the same shared operating system kernel of the underlying host machine, and one or more processes can run within each container. With containers you don’t have to pre-allocate any RAM; memory is allocated dynamically as containers run, whereas with VMs you must first allocate the memory and then create the virtual machine. Containerization offers better resource utilization than VMs and a shorter boot-up process. It is the next evolution in virtualization.
Containers can run virtually anywhere, greatly easing development and deployment: on Linux, Windows, and Mac operating systems; on virtual machines or bare metal; on a developer’s machine or in data centers on-
premises; and of course, in the public cloud. Containers virtualize CPU,
memory, storage, and network resources at the OS level, providing
developers with a sandboxed view of the OS logically isolated from other
applications. Docker is the most popular open-source container format
available and is supported on Google Cloud Platform and by Google
Kubernetes Engine. 
 

Docker Architecture
Docker architecture consists of the Docker client, the Docker daemon running on the Docker host, and the Docker Hub registry. Docker has a client-server architecture in which the client communicates with the Docker daemon running on the Docker host, typically through a REST API over a UNIX socket or a TCP connection. To build a Docker image, we use the client to send the build command to the Docker daemon; the daemon then builds an image based on the given inputs and saves it in the Docker registry. If we don’t want to create an image, we can instead execute the pull command from the client, and the Docker daemon will pull the image from Docker Hub. Finally, to run an image, we execute the run command from the client, which creates the container.
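
To make the flow above concrete, here is a short, hedged sketch of the three paths just described (build, pull, run); the image name myapp and its tag are hypothetical.

# Client asks the daemon to build an image from the Dockerfile in the current directory
$ docker build -t myapp:1.0 .
# Or: client asks the daemon to pull an existing image from Docker Hub instead of building one
$ docker pull nginx:latest
# Client asks the daemon to run an image, which creates a container (detached, host port 8080 -> container port 80)
$ docker run -d -p 8080:80 nginx:latest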
 

Components of Docker
The main components of Docker include – Docker clients and servers,
Docker images, Dockerfile, Docker Registries, and Docker containers. These
components are explained in detail in the below section : 
 
1. Docker Clients and Servers – Docker has a client-server architecture. The Docker daemon/server manages all containers. It receives requests from the Docker client through the CLI or REST APIs and processes them accordingly. The Docker client and daemon can be present on the same host or on different hosts.
 
2. Docker Images – Docker images are read-only templates used to build Docker containers. The foundation of every image is a base image, e.g., Ubuntu 14.04 LTS or Fedora 20. Base images can also be created from scratch, and the required applications can then be added to the base image by modifying it; this process of creating a new image is called “committing the change”.
3. Dockerfile – A Dockerfile is a text file that contains a series of instructions on how to build your Docker image. The resulting image contains all the project code and its dependencies. The same Docker image can be used to spin up any number of containers, each layering its own modifications on top of the underlying image. The final image can be uploaded to Docker Hub and shared among various collaborators for testing and deployment. Common Dockerfile instructions include FROM, CMD, ENTRYPOINT, VOLUME, ENV, and many more (a minimal sketch follows this list).
4. Docker Registries – A Docker registry is a storage component for Docker images. We can store images in either public or private repositories so that multiple users can collaborate in building an application. Docker Hub is Docker’s cloud-hosted registry; it is a public registry where everyone can pull available images and push their own images without creating an image from scratch.
5. Docker Containers – Docker containers are runtime instances of Docker images. A container contains the whole kit required for an application, so the application can run in an isolated way. For example, given an image of Ubuntu with an NGINX server, running this image with the docker run command creates a container in which the NGINX server runs on Ubuntu.
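
Below is a minimal sketch (my own illustration, not from the source) of writing a Dockerfile and building it into an image with docker build; the file contents, image name, and the assumption of an app.py script are all hypothetical.

# Write a tiny Dockerfile: start from a base image, add code, set the default command
$ cat > Dockerfile <<'EOF'
FROM python:3.11-slim
WORKDIR /app
COPY app.py /app/
ENV APP_ENV=production
CMD ["python", "app.py"]
EOF
# Build the image from the Dockerfile and tag it (image name is hypothetical)
$ docker build -t myapp:1.0 .
# Start a container from the new image
$ docker run --rm myapp:1.0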
 
 

Docker Compose
Docker Compose is a tool with which we can create a multi-container application. It makes it easier to configure and run applications made up of multiple containers. For example, suppose you had an application that required WordPress and MySQL; you could create one file that starts both containers as a service, without the need to start each one separately. We define the multi-container application in a YAML file. With the docker-compose up command, we can start the application in the foreground. Docker Compose will look for the docker-compose.yaml file in the current folder to start the application. By adding the -d option to the docker-compose up command, we can start the application in the background. Creating a docker-compose.yaml file for a WordPress application:
 
#cat docker-compose.yaml
version: '2'
services:
  db:
    image: mysql:5.7
    volumes:
      - db_data:/var/lib/mysql
    restart: always
    environment:
      MYSQL_ROOT_PASSWORD: wordpress
      MYSQL_DATABASE: wordpress
      MYSQL_USER: wordpress
      MYSQL_PASSWORD: wordpress
  wordpress:
    depends_on:
      - db
    image: wordpress:latest
    ports:
      - "8000:80"
    restart: always
    environment:
      WORDPRESS_DB_HOST: db:3306
      WORDPRESS_DB_USER: wordpress       # added so the credentials match the db service
      WORDPRESS_DB_PASSWORD: wordpress
      WORDPRESS_DB_NAME: wordpress       # added so the database name matches MYSQL_DATABASE
volumes:
  db_data:
In this docker-compose.yaml file, the ports section for the WordPress container maps the host’s port 8000 to the container’s port 80, so the host can access the application using its IP address and that port number.
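
As a brief usage sketch (assuming the file above is saved as docker-compose.yaml in the current directory), the application can be started and torn down like this:

# Start both services in the background
$ docker-compose up -d
# Check the state of the services
$ docker-compose ps
# Stop and remove the containers (add -v to also remove the db_data volume)
$ docker-compose down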
 

Docker Networks
When we create and run a container, Docker assigns an IP address to it by default. Most of the time, though, we need to create and deploy Docker networks that match our requirements, and Docker lets us design the network accordingly. There are three types of Docker networks: default networks, user-defined networks, and overlay networks.

To get a list of all the default networks that Docker creates, we run the command shown below:

$docker network ls

There are three default networks in Docker:


 
1. Bridge network: When a new Docker container is created without the --network argument, Docker by default connects the container to the bridge network. In a bridge network, all the containers on a single host can connect to each other through their IP addresses. A bridge network is used when the span of Docker hosts is one, i.e., when all containers run on a single host. An overlay network is needed to create a network that spans more than one Docker host.
2. Host network: When a new Docker container is created with the --network=host argument, the container is pushed into the host's network stack, where the Docker daemon is running. All interfaces of the host are accessible from a container assigned to the host network.
3. None network: When a new Docker container is created with the --network=none argument, the container is placed in its own network stack with no IP address assigned, so it cannot communicate with other containers.
We can assign any one of these networks to a Docker container. The --network option of the docker run command is used to assign a specific network to the container.
 
$docker run --network="network name"
To get detailed information about a particular network, we use the command:

$docker network inspect "network name"
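
For completeness, here is a short, hedged sketch (not from the source) of creating a user-defined bridge network and attaching a container to it; the network and container names are made up.

# Create a user-defined bridge network
$ docker network create --driver bridge my_bridge
# Run a container attached to that network
$ docker run -d --name web --network my_bridge nginx:latest
# Verify which containers are attached to the network
$ docker network inspect my_bridge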
 

Advantages of Docker
Docker has become popular nowadays because of the benefits provided by
Docker containers. The main advantages of Docker are: 
 
1. Speed – Docker containers are very fast compared with virtual machines. The time required to build and start a container is short because containers are tiny and lightweight, so development, testing, and deployment can be done faster. Containers can be pushed for testing once they have been built, and from there on to the production environment.
2. Portability – The applications that are built inside docker containers are
extremely portable. These portable applications can easily be moved
anywhere as a single element and their performance also remains the
same.
3. Scalability – Docker can be deployed on many physical servers, data servers, and cloud platforms, and it can run on every Linux machine. Containers can easily be moved from a cloud environment to a local host, and from there back to the cloud again, at a fast pace.
4. Density – Docker uses the available resources more efficiently because it does not use a hypervisor, so more containers can run on a single host than virtual machines. Docker containers deliver higher performance because of this higher density and the absence of wasted resource overhead.
