CHAPTER 6: VMs
1. What are virtual machines?
2. Types of virtual machines
3. Identifying workloads for Azure Virtual Machines
4. Virtual machine sizing
5. Two tiers
Cloud computing traces back to the early 1960s and J.C.R. Licklider (Joseph Carl Robnett Licklider), an American psychologist and computer scientist. His network research on ARPANET (Advanced Research Projects Agency Network), which aimed to connect people and data all around the world, introduced the ideas behind the cloud computing we know today. Born on March 11, 1915 in St. Louis, Missouri, US, Licklider pursued his initial studies at Washington University, receiving a BA in 1937 with three specializations: physics, mathematics, and psychology. He completed his MA in psychology in 1938 and received his Ph.D. from the University of Rochester in 1942. His interest in information technology, along with his years of service and achievements in different areas, led to his appointment as head of the IPTO at ARPA (the US Department of Defense Advanced Research Projects Agency) in 1962. His vision led to ARPANET, a forerunner of today's internet.
Cloud computing continued to develop throughout the 21st century. In 2006, Amazon created AWS (Amazon Web Services) and launched its Elastic Compute Cloud (EC2). In 2008, Google introduced the beta version of Google App Engine. Microsoft, which had announced its cloud platform in 2008, released Microsoft Azure for testing, deploying, and managing applications and services. Google Compute Engine was released in 2012 and rolled out to the general public by the end of December 2013. Oracle introduced Oracle Cloud with three primary services for business (IaaS, PaaS, and SaaS). Currently, a large share of the workloads running on Microsoft Azure are Linux-based.
Basic Concepts of the Cloud
Cloud computing has now become an ideal way to deliver solutions and enterprise applications for businesses across the globe. The history of cloud computing starts in the early 1960s, when the concept of time-sharing rose to prominence via Remote Job Entry, terminology associated with IBM and DEC (Digital Equipment Corporation). Thanks to this growth, full time-sharing systems were available by the early 1970s. By the 1990s, a few telecommunication giants started offering VPN (virtual private network) services at affordable costs; by switching traffic to balance server use, they could use the overall network more effectively. By 1994, the cloud metaphor had begun to be used for virtualized services.
Major Public Cloud Providers
1. Amazon Web Services (AWS)
Amazon Web Services is the cloud service offered by Amazon to companies and individuals on a pay-as-you-go basis. It offers infrastructure, platform, and software as services to its users. AWS has become almost synonymous with the public cloud, as the platform offers a wide range of services globally. Three of its main products are EC2, S3, and Glacier: EC2 offers virtual machine services, S3 is Amazon's object storage system, and Glacier is its low-cost archival storage service. Database storage and computing power with fast delivery make AWS popular among public cloud users. Almost any business can adopt AWS easily, which, together with its scalability, is a major advantage.
2. Microsoft Azure
Azure is AWS's closest competitor, with a comparable range of services and different ways to achieve the same results. Microsoft manages the data centers and helps users build, deploy, manage, and test their applications. Like AWS, it offers infrastructure, platform, and software as a service, with services spanning analytics, virtual computing, networking, and storage. Along with running applications, it helps upgrade and improve applications to their maximum capabilities. Services can be scaled up or down based on need, and the cost incurred depends on the services you use. Many of its services can also be used without deep knowledge of a programming language.
This competition between tech giants to offer cloud platforms benefits users, who can compare the various platforms and use the one that best suits their needs.
3. Google Cloud Platform (GCP)
GCP runs on the same infrastructure that Google uses for its own products and can reach users globally. Applications can be built and run in GCP with the help of its data centers and the analytics services built into the platform. Cost is managed within the platform, and data storage can be managed efficiently. Along with cloud management, networking, artificial intelligence, and security are other services offered in GCP. Any data set, structured or unstructured, can be managed on this platform. Its infrastructure-as-a-service offering is well integrated with the Linux operating system for running scripts and managing cloud operations. Historically, however, SQL Server and other Microsoft-centric workloads have been harder to bring to GCP, and this remains a disadvantage for the platform in outgrowing other cloud services.
Other public cloud providers are less prominent among users but are still used by a smaller fraction of organizations. They are discussed below.
4. Oracle Cloud
The cloud service offered by Oracle to build, test, and deploy applications, so that Oracle users worldwide can use its servers and applications, is called Oracle Cloud. Though not the most commonly preferred cloud service, it has infrastructure and platform services that please users with their networking and storage. High-capacity applications can be run easily in the service, and it can be customized so that other applications work well alongside Oracle applications. Oracle's database is SQL-based and has a great many users, so transferring data from an on-premises SQL database to Oracle Cloud is not a big challenge. Oracle also lets users transfer data out of its cloud free of charge; it charges customers only for object storage services. Its Cloud ERP distinguishes it from other cloud services, and improving Cloud ERP could bring Oracle Cloud into the front ranks of cloud providers.
5. IBM Cloud
The cloud service offered by IBM is called IBM Cloud; it offers both platform as a service and infrastructure as a service. Users can deploy and use virtual resources, making storage and networking easily accessible. Even for heavy workloads the service is reliable, which makes IBM Cloud a good choice among the others. IBM uses agile methodologies so that data transfer happens faster from one platform to another, and some open-source technologies are used in IBM Cloud to support DevOps operations.
6. Alibaba Cloud
7. Salesforce Cloud
Cloud Computing Architecture
Cloud computing architecture is divided into two parts, the front-end and the back-end, which communicate via a network or the internet.
The front-end provides the applications and the interfaces that are required for the cloud-based service.
The back-end is responsible for monitoring all the programs that run the application on the front-end. It has a large number of data storage systems and servers, and it is an important and huge part of the whole cloud computing architecture.
The main back-end components include:
Application
Service: in a cloud, the services widely used among end-users are storage, application development environments, and web services.
Storage: stores and maintains data such as files, videos, and documents over the internet; examples include Amazon S3 and Oracle Cloud Storage, and capacity varies depending on the service providers available in the market.
Management: allots specific resources to specific tasks and simultaneously performs various functions of the cloud environment.
Security: protects data and digital tools; reliable cloud security reduces IT operating costs, keeps data accessible, and helps businesses easily scale their cloud resources up and down.
Other back-end components include the hypervisor, management software, deployment software, the network, cloud servers, and cloud storage.
Virtualization is the key that unlocks the cloud. What makes virtualization so important for the cloud is that it decouples the software from the hardware. For example, PCs can use virtual memory to borrow extra memory from the hard disk, which usually has far more space than physical memory.
CHAPTER 3: CLOUD SERVICE MODELS
SaaS
SaaS (Software as a Service) refers to the software licensing and delivery model in which third-party cloud providers host applications and services over the internet, and users require a license to use those software services. SaaS is one of the foremost cloud computing services; it gained prominence after the late 1990s, extending the idea of the application service provider (ASP) model. It is typically web-based, and since around 2012 SaaS vendors have continued to develop and manage their own software. It has become widespread because of its flexibility, as it requires only a web browser to run.
The SaaS vendor offers a great advantage over other service providers by allowing buyers to outsource most IT responsibilities without having to invest in hardware platforms to host the software. Users access the apps via a web browser, which has become the delivery model for various business applications, including office software, management software, accounting, CRM tools, and talent acquisition tools across departments. SaaS can be categorized into two types:
1. Vertical SaaS
2. Horizontal SaaS
SaaS has further benefits, discussed below, and implements all the features of cloud computing, making for a robust experience.
Why Do We Use the SaaS Cloud?
SaaS is a technology that rents out software as a cloud platform service, replacing software the organization would otherwise have to install and run itself. For its simplicity and reliability, businesses ranging from small to large use SaaS, which needs only an internet connection and a web browser to run. We prefer the SaaS cloud for various reasons; some of the basic points are as follows:
Enterprises of any scale feel the necessity of using SaaS because of the low cost of services: we can purchase as much software as we want, of our choice, and pay accordingly.
With SaaS, we do not require an in-house IT specialist team. So during a shortage of resources, or when application development and maintenance become a big issue, SaaS comes into the picture.
The SaaS vendor knows its own software, employing a bunch of experienced professionals we can rely on.
Using the SaaS cloud, everything is processed and stored in the cloud itself, which makes it easy to store hundreds or thousands of files and access them from the cloud whenever we want.
PaaS
PaaS (Platform as a Service) is a type of cloud computing product in which a service provider gives customers a platform that enables them to build, operate, and manage business applications without the infrastructure normally needed for such software development. Because developers and other users do not see the underlying infrastructure, PaaS architectures are similar to serverless principles, where the cloud service provider owns and operates the servers and controls resource allocation.
What is PaaS?
Platform as a service is a cloud-based development model that lets users deliver everything from simple day-to-day applications to the centralized applications required by big organizations.
PaaS provides a smooth working environment in the cloud that covers the entire application development lifecycle.
In the PaaS model, the cloud service provider handles scalability at the back end, and the end user does not have to worry about managing the infrastructure.
With PaaS, we also get additional resources, including database management systems, programming languages, libraries, and various software development tools that run in the cloud and ease daily development.
Among its many benefits, it cuts down the price and headache of installing extra software licenses, core applications, and other platform resources.
Because the underlying infrastructure is hidden from developers and other end users, the model resembles serverless computing and function-as-a-service architectures, where the cloud service provider manages and runs the servers and controls the distribution of resources.
It helps us organize and maintain useful applications and services, while third-party providers maintain every other service in the cloud.
Nevertheless, how exactly does it work? Some basic points on the working principles:
PaaS is a layer sandwiched between SaaS and IaaS that contains all the middleware platform tools.
PaaS provides a pay-per-use model. Instead of having to buy, configure, install, develop, and maintain every application (which would also require a strong maintenance team to service them daily), one can simply use the applications and pay only for that usage. This allows for proper and efficient utilization.
Various types of PaaS offerings exist, with very useful features:
1. PaaS provides the user with basic data storage and a server for maintaining all the computing systems required for the service.
2. Support for powerful web engines and platforms, including Google applications.
3. Support for social networking sites such as Facebook.
4. Everyday consumer services that make a real difference to daily life.
There are a few disadvantages of PaaS:
Data privacy: data is stored on the provider's servers most of the time, so privacy is a significant risk and depends on the provider's safeguards.
Integration: because data is stored both in local storage and in the cloud, mismatches can occur during integration, making it difficult for users to reconcile and access their data whenever they want.
IaaS
IaaS, or Infrastructure as a Service, is a cloud computing model that was formerly termed Hardware as a Service (HaaS). Cloud computing has three primary service layers, IaaS, PaaS, and SaaS, of which IaaS is the lowest. Here the customer organization outsources its IT infrastructure, accessing resources such as servers, storage, and virtual machines over the internet. Those resources are offered on a pay-per-use model: clients are billed only for the services used, which reduces the cost and complexity of buying and managing a physical server.
Benefits of IaaS
IaaS offers lots of benefits to its customers ensuring affordability and business
growth. Given below are a few benefits:
IaaS provides a “pay per use” scheme, which says the services provided
can be used as per needed, and users need to pay only for the services
utilized.
The services provided by IaaS are quite scalable: resources are available to users at the desired time and demand, and no capacity is wasted sitting idle.
It saves a lot of time and cost as the cloud service provider is responsible
for setting and maintaining the physical hardware.
The service remains available and consistent even when hardware failures occur.
It provides a great value of flexibility as it scales down the resources
quickly as per the needs of customers.
IaaS lets businesses focus on growth, as hardly any time is spent on decisions about maintaining the infrastructure; IaaS takes care of all of these.
IaaS cloud computing platform provides secure data centers with
24/7/365 availability.
Examples of IaaS
IaaS is highly flexible and is used on both e-commerce and non-e-commerce platforms.
A suitable example is Amazon Web Services (AWS EC2), which provides scalability for hosting cloud-based applications. EC2 users need not own physical servers; AWS provides a virtual environment to work in, minimizing cost, and users pay only for the capacity they book.
In the case of an e-commerce platform, it is up to the user whether to host the applications in the cloud or on-premises. Here too, users pay only for the services actually used (i.e., the hosting plan for the server).
A virtualized Data Center is established to provide cloud hosting options,
integrating the cloud operations. The Data Center contains several Virtual
servers that meet the user demand as per their business requirements.
Another cloud computing service provider, DigitalOcean, founded in 2011, provides IaaS aimed at open-source developers. DigitalOcean's core product is the "droplet," a virtual server that a developer can resize after creating it. Developers can scale and grow their business efficiently through DigitalOcean.
IaaS can manage big data to handle large workloads and integrate with BI
tools.
GCE (Google Compute Engine) is Google's IaaS component; it runs on the same infrastructure that powers Google Search, Gmail, and other Google services.
CHAPTER 4: RESOURCE GROUPS
A resource group in Azure is the next level down the hierarchy. At this level,
administrators can create logical groups of resources—such as VMs, storage
volumes, IP addresses, network interfaces, etc.—by assigning them to an Azure
resource group. Resource groups make it easier to apply access controls,
monitor activity, and track the costs related to specific workloads. You decide
how you want to allocate resources to Azure resource groups based on what
makes the most sense for your organization, but as a best practice, we suggest
assigning resources with a similar lifecycle to the same Azure resource group,
so you can easily deploy, update, and delete them as a group.
The resource group collects metadata from each individual resource to facilitate
more granular management than at the subscription level. This not only has
advantages for administration and cost management, but also for applying role-
based access controls.
The underlying technology that powers resource groups is the Azure Resource
Manager (ARM). ARM was built by Microsoft in response to the shortcomings
of the old Azure Service Manager (ASM) technology that powered the old
Azure management portal. With ASM, users could create resources in an
unstructured manner, leading to many challenges in tracking such resources or
understanding their dependencies. This led to huge difficulties that discouraged
people from using Azure, as they would have a big mess once they had to deal
with multiple applications that spanned more than one region.
Azure Resource Manager, which was announced in 2014 and became generally
available in 2017, addresses this challenge and others by providing a new set of
application programming interfaces (APIs) that are used to provision resources
in Azure. ARM requires that resources be placed in resource groups, which
allows for logical grouping of related resources.
In the figure below, two resource groups are used for grouping: first, resources related to a line-of-business (LOB) application, and second, those related to an infrastructure as a service (IaaS) workload.
Although creating a resource group requires specifying a region for it to be
stored in, the resources in that resource group could span multiple regions. The
requirement to have a region for a resource group comes from the need to store
the deployment metadata (definitions) associated with it in a specific location
and does not dictate that resources belonging to it need to be in the same region.
Before ARM, you had to provision resources independently, understand their dependencies well, and account for them in deployment scripts. For example, to create a virtual machine, you first needed to create a storage account, a virtual network, a subnet, and so on.
On the other hand, ARM can figure out the dependencies of resources that need
to be provisioned before creating the virtual machine and what order they need
to be provisioned in, saving the user from having to repeat their work to fix
unnecessary errors during deployment. In the example above, ARM will
automatically create the virtual network and storage account simultaneously
with the virtual machine. The portal blades walk the user through defining the
related resources as part of provisioning the virtual network. As an example, in
the screenshot below, you can see that creating a virtual machine requires
specifying the other dependencies in the settings blade, including the virtual
network, the subnet, the public IP address, and storage account, among other
things.
ARM provides the ability to provision resources declaratively using JavaScript
Object Notation (JSON) documents. The JSON document may include a
description of multiple resources that need to be provisioned in a resource
group, and ARM knows how to provision them accordingly. This provides an
added flexibility and ease in managing resources belonging to resource groups.
Using JSON in this manner allowed for creating resource templates that would
make it much faster to provision resources belonging to a resource group. This
also allowed for third party providers to make hundreds of templates available
to provision different resources that correspond to many deployment scenarios.
Those templates are available from code repositories such as GitHub or
the Azure Marketplace.
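As a rough illustration of the declarative style described above, a minimal ARM template might look like the following sketch. The storage account name, API version, and SKU here are illustrative assumptions, not values taken from the source:

```json
{
  "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json#",
  "contentVersion": "1.0.0.0",
  "resources": [
    {
      "type": "Microsoft.Storage/storageAccounts",
      "apiVersion": "2022-09-01",
      "name": "examplestorage01",
      "location": "[resourceGroup().location]",
      "sku": { "name": "Standard_LRS" },
      "kind": "StorageV2"
    }
  ]
}
```

ARM reads the resources array, works out any dependencies between the entries, and provisions them in the right order.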
In the ARM architecture, resource groups not only become units of deployment,
but also units of management of related resources. This allows users to
determine the cost associated with the whole resource group, making
accounting and chargebacks more manageable. It also allows role-based access
control (RBAC) at the resource group level, making it much easier to manage
user access to the resources in the group. When users log into the Azure Portal,
they will only see resource groups they have access to and not others within the
subscription. Administrators will still be able to assign access control for users
to individual resources within the resource group based on their roles.
Management of Azure Resource Group
Aside from scripting (e.g., using PowerShell or the Azure CLI 2.0), resource groups can only be managed in the new Azure portal. In the new portal, a resource group item is available in the navigation menu by default and can be used to open the RG management "blade," as you can see in the screenshot below.
The RG management blade provides a straightforward way to create and manage
resource groups. It also provides a flexible, customizable, high-level view of
available resource groups in a specific Azure subscription. The user can select
what columns to see in this view based on their role and interests and may use
filters to zoom in on resource groups specific to a subscription or a location.
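To sketch what the scripting route looks like, the same create/list/delete operations can be performed with the Azure CLI; the group name and region below are illustrative, and running these requires an authenticated Azure subscription:

```shell
# Create a resource group; the chosen region stores the group's deployment metadata
az group create --name demo-rg --location eastus

# List the resource groups visible in the current subscription
az group list --output table

# Delete the group together with every resource it contains
az group delete --name demo-rg --yes
```

Treat this as a sketch of the workflow rather than a ready-made script.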
The new portal was designed to work well with the ARM concepts and
architecture providing great flexibility and user experience in how resources are
displayed and managed. The portal displays blades to the user as additional
resources are created. A blade is a self-contained page (or set of pages) that
allows the user to view and manage all aspects of the resource they have created
using a step-by-step wizard-like approach for building the resource.
Despite the advantages resource groups and ARM bring to Azure users, it is
important to use them with care and good insight. The key to having a
successful design of resource groups is understanding the lifecycle of the
resources that are included in them. For instance, if an application requires
different resources that need to be updated together, such as having a SQL
database, a web app, a mobile app, etc., then it makes sense to group these
resources in the same resource group. It is important, however, to use different
resource groups for dev/test, staging, or production, as the resources in these
groups have different lifecycles.
Conclusion
A virtual network is a network in which devices, servers, virtual machines, and data centers are connected through software and wireless technology. This allows the reach of the network to be expanded as far as it needs to go for peak efficiency, in addition to numerous other benefits.
A local area network, or LAN, is a kind of wired network that usually reaches only within the domain of a single building. A wide area network, or WAN, is another kind of wired network, but the computers and devices connected to it can be spread across much larger distances, spanning cities or even countries.
There are three classes of virtual networks, VPN, VLAN, and VXLAN:
VPN
VPN stands for virtual private network. Essentially, a VPN uses the internet to connect two or more existing networks. This internet-based virtual network allows users to log in from anywhere to access the physical networks that are connected. VPNs are also used for masking internet use on public WiFi and ensuring secure browsing. A VPN works by attaching routing information to data packets so that traffic reaches the applicable address; in doing this, an encrypted tunnel is created, concealing the browsing traffic and making it possible to access information remotely. VPNs provide a small-scope, fully virtual network that uses the internet to let people connect.
VLAN
VXLAN
VXLAN means virtual extensible local area network. In this network, your layer 3 network infrastructure provides a tunnel that carries layer 2 traffic. Virtual switches create endpoints for each tunnel, and tunnel endpoints, which can be physical or virtual, route data between them.
"IP Address" is short for "Internet Protocol Address." IP addresses are a numbering scheme that provides a unique number to every computer or device connected to the internet. Vint Cerf, considered "the father of the internet," played a vital role in creating IP addresses while working for DARPA. The most important features of an IP address are:
• Unique.
• Globalized and Standardized.
• Essential.
In simple terms, an IP address can be explained as the personal address of a device, distinctive and created specially for that device; no two devices on the internet can have the same IP address. For convenience, we use names to find things on the internet: to look up Punjab University Chandigarh, we merely type www.puchd.in, but on the machine end this name is converted into a numerical address so that data can be sent to the right location. IP addresses belong to the network layer of the OSI model, whose primary function is to route data between the source and the destination.
IPv4 Version
The first widely deployed version, IPv4, is the most widely used Internet Protocol. An IPv4 address is written as four decimal numbers, each in the range 0-255, separated by dots; each section can be represented in binary with 8 bits. An IP address can also be written in binary, octal, or hexadecimal if required. IPv4 addresses are 32 bits long, which means there can be at most 2^32 addresses, around 4 billion unique IP addresses. Even out of these, some addresses are kept reserved for exclusive use, such as private networks and multicast addresses. A typical IPv4 address looks as follows:
IP address: 192.168.90.1
Binary notation: 11000000.10101000.01011010.00000001
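The dotted-decimal-to-binary conversion above can be checked with Python's standard ipaddress module; this is a small sketch using the example address from the text:

```python
import ipaddress

# Each dotted group of an IPv4 address is one 8-bit section.
addr = ipaddress.IPv4Address("192.168.90.1")
binary = ".".join(f"{octet:08b}" for octet in addr.packed)
print(binary)  # 11000000.10101000.01011010.00000001
```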
IPv4 classes are a way of dividing addresses in classful IPv4 routing, with separate classes used for different types of networks:
Class A: 0.0.0.0 - 127.255.255.255
Class B: 128.0.0.0 - 191.255.255.255
Class C: 192.0.0.0 - 223.255.255.255
Class D: 224.0.0.0 - 239.255.255.255 (multicast)
Class E: 240.0.0.0 - 255.255.255.255 (reserved)
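Since the class of a classful address is determined entirely by its first octet, the scheme can be sketched as a small lookup function (note that classful addressing is historical and has since been superseded by CIDR):

```python
def ipv4_class(address: str) -> str:
    """Return the classful category (A-E) of an IPv4 address from its first octet."""
    first = int(address.split(".")[0])
    if first < 128:
        return "A"  # 0.0.0.0   - 127.255.255.255
    if first < 192:
        return "B"  # 128.0.0.0 - 191.255.255.255
    if first < 224:
        return "C"  # 192.0.0.0 - 223.255.255.255
    if first < 240:
        return "D"  # 224.0.0.0 - 239.255.255.255 (multicast)
    return "E"      # 240.0.0.0 - 255.255.255.255 (reserved)

print(ipv4_class("10.0.0.1"))      # A
print(ipv4_class("192.168.90.1"))  # C
```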
IPv6 Version
IPv6 addresses are written in hexadecimal so that they can fit more information into fewer digits. A typical IPv6 address is a long string of digits in comparison to IPv4. IPv6 uses 128 binary bits per address, expressed as 8 groups of hexadecimal numbers, with colons instead of dots separating the sections. If two colons appear side by side (::), all the sections between them contain only zeros. Let's see the same address with and without the double colon:
With double colon ->2001:0db7::54
Without double colon -> 2001:0db7:0000:0000:0000:0000:0000:0054
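Python's ipaddress module can expand and compress IPv6 addresses, which makes the double-colon rule easy to verify; a sketch using the address from the text:

```python
import ipaddress

addr = ipaddress.IPv6Address("2001:0db7::54")
print(addr.exploded)    # 2001:0db7:0000:0000:0000:0000:0000:0054
print(addr.compressed)  # 2001:db7::54  (leading zeros in each group also dropped)
```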
Subnetting
Subnetting refers to dividing a single vast network into more than one smaller logical sub-network, called subnets. A subnet is formed by borrowing bits from the host part of the IP address and using them to specify the number of subnets required. Subnetting allows various sub-networks within the big network without obtaining a new network number from the ISP, and it reduces network traffic and complexity. The concept of subnetting was introduced to help cope with the shortage of IP addresses. The subnetting process helps divide class A, class B, and class C network numbers into smaller parts; a subnet can itself be broken down further into smaller networks known as sub-subnets.
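The borrowing of host bits can be demonstrated with the ipaddress module: taking 2 bits from the host part of a /24 network yields four /26 subnets (the network below is an illustrative private range):

```python
import ipaddress

network = ipaddress.ip_network("192.168.1.0/24")
# Borrow 2 host bits -> the prefix grows from /24 to /26, giving 2**2 = 4 subnets.
subnets = list(network.subnets(prefixlen_diff=2))
for subnet in subnets:
    print(subnet)
# 192.168.1.0/26
# 192.168.1.64/26
# 192.168.1.128/26
# 192.168.1.192/26
```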
IP Address Assignment
There are two ways of assigning IP addresses:
1. Static address.
2. Dynamic address.
If we need to set up a web server or an email service, we need a static IP address, whereas for simply surfing the internet a dynamic IP address is sufficient.
Static IP Address
A static address, also known as a fixed address, means the system keeps the same address whenever it connects to the internet. These addresses are excellent for those who perform activities such as web hosting, gaming, and voice over IP. They are generally used by customers on commercial leased lines or by public organizations that need the same IP address every time.
Dynamic IP Address
A dynamic address is assigned temporarily, typically by the ISP each time the device connects, so it can change from one session to the next. This is the default for most home and consumer internet connections.
Public IP Address
The public IP address is the unique address given to a computer attached to the internet. No two machines on the public network can have the same IP address. Using these addresses, machines can exchange information and communicate with one another over the network. The user has no control over the public IP address, as it is assigned by the ISP whenever the machine connects to the internet. A public address can be static or dynamic, depending on the needs and requirements of the user; most users have a dynamic public IP address.
Private IP Address
The organization that distributes IP addresses (IANA) has kept ranges of addresses reserved as private addresses for private networks. Private addresses are used by private networks such as home or office networks; the logic is that these addresses are used within a single administration and not on the global network or the internet. The ranges of addresses set aside for private networks are as follows:
10.0.0.0 - 10.255.255.255
172.16.0.0 - 172.31.255.255
192.168.0.0 - 192.168.255.255
A device with only a private address cannot connect to the internet directly.
If a computer within a private network can reach the internet or another
network, it has both a private IP address (to communicate within the network)
and a public IP address (to communicate over the internet). Communication
with another private network is achieved through a router or a similar
device performing Network Address Translation (NAT). We can see our
computer's private IP address by running ipconfig at the Windows command
prompt and reading the IPv4 Address field. Private IP addresses are mostly
static in nature.
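The public/private distinction can be checked programmatically with the standard ipaddress module; is_private covers the reserved private ranges listed above (among a few others, such as loopback), and the sample addresses are illustrative.

```python
# Minimal sketch: classifying addresses with Python's standard ipaddress
# module. is_private covers the RFC 1918 private ranges (and a few other
# reserved ranges, such as loopback). Sample addresses are illustrative.
import ipaddress

def address_kind(addr: str) -> str:
    ip = ipaddress.ip_address(addr)
    return "private" if ip.is_private else "public"

print(address_kind("192.168.0.10"))  # private: usable only within a LAN
print(address_kind("8.8.8.8"))       # public: routable on the internet
```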
o System virtual machines: These virtual machines are also termed full
virtualization VMs. They provide a substitute for a real machine, offering
the functionality needed to execute an entire operating system (OS). A
hypervisor uses native execution to manage and share the hardware,
permitting multiple environments that are isolated from one another yet
exist on the same physical machine. Modern hypervisors primarily use
hardware-assisted virtualization, with virtualization-specific hardware
support from the host CPUs.
o Process virtual machines: These Virtual Machines are created for
executing several programs of the computer within the platform-
independent environment.
Some VMs, such as QEMU, are designed to emulate different architectures,
permitting the execution of operating systems and software applications
written for another CPU or architecture. Operating-system-level
virtualization allows a computer's resources to be partitioned by the kernel.
The host could emulate various guests, all of which could emulate distinct
hardware platforms and operating systems.
The desire to run more than one operating system was the original motivation
for virtual machines: it allows time-sharing among several single-tasking
operating systems. A system VM can be considered a generalization of the
concept of virtual memory that historically preceded it.
IBM's CP/CMS, the first system to permit full virtualization, implemented
time-sharing by providing each user with a single-user operating system. The
system VM let users write privileged instructions in their code, which has
advantages such as adding input/output devices not permitted by the standard
system.
Sharing memory pages across VMs is especially useful for read-only pages,
such as those containing code segments; this is the case when multiple VMs
run the same software, software libraries, web servers, or middleware
components.
A guest OS does not need to be compliant with the host hardware, which makes
it feasible to run different operating systems on the same computer (for
example, Windows, Linux, or a prior version of an operating system) to
support future software.
Virtual machines can also be used to support isolated guest operating
systems, which is popular in embedded systems. A common use is to run a
real-time operating system simultaneously with a preferred complex operating
system such as Windows or Linux.
Another use is for new and unproven software still in development, which can
be executed inside a sandbox. Virtual machines have other advantages for OS
development, including faster reboots and better debugging access. Multiple
VMs running their own guest operating systems are frequently used for server
consolidation.
The process virtual machine became popular with the Java programming
language, which is implemented using the Java virtual machine. Other
examples include the Parrot virtual machine and the .NET Framework, which
runs on a VM called the Common Language Runtime. Each of them serves as an
abstraction layer for a computer language.
A special case of the process virtual machine is a system that abstracts
over the communication mechanisms of a (potentially heterogeneous) computer
cluster. Such a VM does not consist of a single process, but of one process
per physical machine in the cluster.
These systems do not hide the fact that communication takes place, and so
do not attempt to present the cluster as a single machine.
Unlike other process virtual machines, these systems do not provide a
specific programming language; instead, they are embedded in an existing
language, typically providing bindings for several languages (such as C and
FORTRAN).
Examples are PVM (Parallel Virtual Machine) and MPI (Message Passing
Interface). They are not strictly virtual machines, because applications
running on top still have access to all OS services and are therefore not
confined to the system model.
Full Virtualization
Some examples outside the mainframe field include Egenera vBlade technology,
Win4Lin Pro, Win4BSD, Mac-on-Linux, Adeos, QEMU, VMware ESXi, VMware Server
(formerly GSX Server), VMware Workstation, Hyper-V, Virtual Server, Virtual
PC, Oracle VM, Virtual Iron, VirtualBox, Parallels Desktop for Mac, and
Parallels Workstation.
Hardware-assisted virtualization
This type of virtualization was first introduced on the IBM System/370 in
1972, for use with VM/370, the first virtual machine operating system
offered by IBM as an official product.
Intel and AMD added extra hardware support for virtualization in 2005 and
2006, respectively, and in 2005 Sun Microsystems (later acquired by Oracle)
included similar features in its UltraSPARC T-Series processors. Examples of
virtualization platforms adapted to such hardware include Parallels
Workstation, VirtualBox, Oracle VM Server for SPARC, Parallels Desktop for
Mac, Xen, Windows Virtual PC, Hyper-V, VMware Fusion, VMware Workstation,
and KVM.
In 2006, first-generation 32-bit and 64-bit x86 hardware support was found
to rarely offer performance advantages over software virtualization.
Operating-system-level virtualization
In operating-system-level virtualization, the guest environments share the
same running instance of the operating system as the host system; the same
OS kernel is used to implement the guest environments. Applications running
within a given guest environment see it as a stand-alone system.
The original implementation was FreeBSD jails. Other examples include iCore
Virtual Accounts, Parallels Virtuozzo Containers, AIX Workload Partitions,
LXC, Linux-VServer, OpenVZ, Solaris Containers, and Docker.
Full virtualization is possible only with the right combination of hardware
and software elements. For example, it was not possible with most of IBM's
System/360 series, nor with IBM's early System/370 systems, until IBM added
virtual memory hardware to the System/370 series in 1972.
This is similar to the Intel VT-x rings, which provide a higher level of
privilege for the hypervisor so that it can properly manage virtual machines.
Some machine instructions can be executed directly by the hardware, since
their effects are contained entirely in components managed by the control
program, such as arithmetic registers and memory locations. Other
instructions that would "pierce" the VM cannot be permitted to execute
directly; they must instead be trapped and simulated. Such instructions
either access or affect state data that is external to the VM.
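The trap-and-emulate idea can be shown with a toy dispatcher; the instruction names and the two instruction sets here are invented for illustration and do not correspond to any real ISA.

```python
# Toy model of trap-and-emulate. "Safe" instructions, whose effects stay
# inside components the control program manages, run directly; privileged
# instructions that would pierce the VM are trapped and simulated.
SAFE = {"add", "load", "store"}          # effects confined to the VM
PRIVILEGED = {"io_read", "set_timer"}    # touch state outside the VM

def execute(instruction, trap_log):
    if instruction in SAFE:
        return "direct"                  # executed by hardware directly
    if instruction in PRIVILEGED:
        trap_log.append(instruction)     # hypervisor traps and simulates
        return "trapped"
    raise ValueError(f"unknown instruction: {instruction}")

trap_log = []
results = [execute(i, trap_log) for i in ["add", "io_read", "store"]]
print(results)    # ['direct', 'trapped', 'direct']
print(trap_log)   # ['io_read']
```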
Full virtualization has been highly successful for some of the following
reasons:
o Isolating users from one another (and from the control program)
o Sharing a single computer system among multiple users
o Emulating new hardware to achieve improved productivity, security, and
reliability.
Advantages of VM
o A virtual machine provides compatibility with the software that runs on
it: any software written for the virtualized host will also run on the VM.
o It offers isolation among different processors and operating systems:
software running in one virtual machine cannot affect the host or any
other virtual machine.
o A virtual machine provides encapsulation: the software running on the VM
can be controlled and modified.
o Virtual machines offer features such as easily adding a new operating
system. An error in one operating system will not affect any other
operating system on the host. They also allow transferring files between
VMs and avoid dual booting on a multi-OS host.
o VMs allow better software management, because a VM can run a complete
software stack, such as a legacy operating system, alongside the host
machine's stack.
o Hardware resources can be allocated to software stacks independently, and
a VM can be moved to a different computer for load balancing.
Scale sets are Azure compute resources that can be used to deploy and manage
a set of identical VMs; they are designed to support virtual machine
auto-scaling. VM scale sets can be created using the Azure portal, JSON
templates, and REST APIs. To increase or decrease the number of VMs in a
scale set, we change the capacity property and redeploy the template. A
virtual machine scale set is created inside a VNET, and the individual VMs
in the scale set are not allocated public IP addresses.
Any virtual machine that we deploy as part of a virtual machine scale set
will not be allocated a public IP address. Instead, the scale set typically
has a front-end load balancer that manages the load and has a public IP
address; we can use that public IP address to connect to the underlying
virtual machines in the scale set.
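As a sketch of the capacity property mentioned above, the instance count of a scale set lives in the template's sku.capacity field; the fragment below is a heavily simplified stand-in for a real ARM template, represented as a Python dict.

```python
# Simplified stand-in for a VM scale set template: the sku.capacity
# property controls how many identical VM instances the scale set runs.
# Changing it and redeploying the template resizes the scale set.
scale_set_template = {
    "type": "Microsoft.Compute/virtualMachineScaleSets",
    "sku": {
        "name": "Standard_D2s_v3",   # size of every VM in the set
        "capacity": 3,               # current number of instances
    },
}

def with_capacity(template: dict, count: int) -> dict:
    # Return a copy of the template with an updated instance count.
    return {**template, "sku": {**template["sku"], "capacity": count}}

scaled = with_capacity(scale_set_template, 5)
print(scaled["sku"]["capacity"])  # 5
```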
The first step in auto-scaling is to select a metric or a time: auto-scaling
can be metric-based or schedule-based. The metric can be CPU utilization,
for example, while a schedule can specify, say, reducing the number of
servers from 6 PM until 6 AM. If we want to react to load, we use
metric-based auto-scaling; if the pattern is predictable, we can use
schedule-based auto-scaling.
The next step in auto-scaling is to define a rule with a condition, for
example: if CPU utilization is higher than 80 percent, spin up a new
instance. Once the condition is met, we can carry out actions such as adding
or removing virtual machines, or sending an email to a system administrator.
In summary, we choose time-based or metric-based auto-scaling, select the
metric, and define the rule and the actions to be triggered when the rule's
condition is satisfied.
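The rule-plus-action flow above can be sketched as a small function; the 80 percent threshold and the instance cap are illustrative values, not Azure defaults.

```python
# Sketch of evaluating a metric-based auto-scale rule: when the average
# CPU utilization exceeds the threshold, the action is to add (spin up)
# one more instance; otherwise the instance count is left unchanged.
def evaluate_rule(avg_cpu_percent: float, current_instances: int,
                  threshold: float = 80.0, max_instances: int = 10) -> int:
    if avg_cpu_percent > threshold and current_instances < max_instances:
        return current_instances + 1   # scale out
    return current_instances           # condition not met

print(evaluate_rule(85.0, 3))  # 4  (rule fired, one instance added)
print(evaluate_rule(40.0, 3))  # 3  (below threshold, no action)
```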
o Compute metrics: The available metrics depend on the installed operating
system. For Windows, we have processor, memory, and logical disk metrics;
for Linux, we have processor, memory, physical disk, and network interface
metrics.
o Web Apps metrics: It includes CPU & memory percentage, Disk & HTTP
queue length, and bytes received/sent.
o Storage/Service bus metrics: We can scale by storage queue length, i.e.,
the number of messages in the storage queue. Storage queue length is a
special metric: the threshold applied is the number of messages per
instance.
o We can use the Azure portal to create a scale set and enable auto-scaling
based on a metric.
o We can provision and deploy VM scale sets using Resource Manager
Templates.
o ARM templates can be deployed using Azure CLI, PowerShell, REST,
and also directly from Visual Studio.
Step 1: Go to Azure Marketplace and type in the Virtual Machine scale set. Then
click on Create.
Step 2: We need to give a name to this scale set and fill in all the other
required details, as shown in the figure below. Then click on create.
Step 3: Now, your Virtual Machine scale set is successfully deployed. To view
VMSS, you can go to resources.
Step 4: Now, click on scaling. Provide an auto-scale setting name. And select the
resource group.
Step 5: Scroll down, and you will find two ways to auto-scale. First, click
on "Add a rule" for auto-scaling based on a metric. We are going to scale
our virtual machine if the average CPU utilization is above 70 percent.
Step 6: Now, select time- and date-based scaling, where you can scale when
you need more capacity. The last thing is Notify, where you get notified
whenever auto-scaling is triggered.
CHAPTER 7:CLOUD STORAGES
Azure Storage Account
General-purpose V2 account
  Supported services: Blob, File, Queue, Table, and Disk
  Performance tiers: Standard, Premium
  Access tiers: Hot, Cool, Archive
  Replication options: LRS, ZRS, GRS, RA-GRS
  Deployment model: Resource Manager
  Encryption: Encrypted

Blob storage account
  Supported services: Blob (block blobs and append blobs only)
  Performance tiers: Standard
  Access tiers: Hot, Cool, Archive
  Replication options: LRS, GRS, RA-GRS
  Deployment model: Resource Manager
  Encryption: Encrypted
Standard performance: This tier is backed by magnetic drives and provides a
low cost per GB. It is best for applications that need bulk storage or
access their data infrequently.
(Every virtual machine disk is stored in a storage account. So, if we are
attaching a disk, we go for premium storage; but if we are using the storage
account specifically to store blobs, we go for standard performance.)
Access Tiers
Hot Storage: It is optimized for storing data that is accessed frequently.
Cool Storage: It is optimized for storing data that is infrequently accessed
and stored for at least 30 days.
Archive Storage: It is optimized for storing files that are rarely accessed
and stored for a minimum of 180 days, with flexible latency requirements (on
the order of hours).
When a user uploads a document into storage, the document will initially be
accessed frequently; during that time, we keep it in the hot storage tier.
After some time, once work on the document is completed, it is rarely
accessed and becomes an infrequently accessed document. We can then move it
from hot storage to cool storage to save cost, because cool storage is
billed largely on the number of times the document is accessed. Once the
document has matured, i.e., once we have stopped working on it and only
rarely refer to it, we keep it in cool storage.
If we do not expect the document to be referred to for six months or a year,
we move it to archive storage.
So hot storage is costlier than cool storage in terms of storage, but cool
storage is more expensive in terms of access. Archive storage is used for
archiving documents that are not accessed at all.
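The tiering lifecycle described above can be sketched as a simple rule; the 30- and 180-day cut-offs mirror the minimum storage durations of the cool and archive tiers, and the function itself is just an illustration.

```python
# Illustrative tier-selection rule based on how long ago a document was
# last accessed, mirroring the hot -> cool -> archive lifecycle above.
def choose_tier(days_since_last_access: int) -> str:
    if days_since_last_access >= 180:
        return "Archive"   # rarely accessed; retrieval takes hours
    if days_since_last_access >= 30:
        return "Cool"      # infrequently accessed; cheap to store
    return "Hot"           # actively used; cheap to access

print(choose_tier(5))    # Hot
print(choose_tier(60))   # Cool
print(choose_tier(365))  # Archive
```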
Azure Storage Replication is used for the durability of data. It copies our
data so that it stays protected from planned and unplanned events, ranging
from transient hardware failures, network or power outages, and massive
natural disasters to man-made vulnerabilities.
Azure creates several copies of our data and stores them at different
places, based on the chosen replication strategy.
ZRS (Zone-Redundant Storage): The data is replicated across data centers
within the same region. The data therefore remains available even if a node,
or the entire data center, goes down, because a copy already exists in
another data center within the region. However, if the whole region is lost,
the data will not be accessible.
Whenever we create a storage account, we get an endpoint to access the data
within that account. Each object stored in Azure storage has an address that
includes your unique account name; the combination of the account name and
the service endpoint forms the endpoint for your storage account.
If we want to map our own custom domain onto these storage service
endpoints, we can still do that.
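How the account name combines with a service endpoint can be sketched as plain string construction; the domain suffixes below are Azure's standard public-cloud endpoints, and the account name is hypothetical.

```python
# Sketch: the default endpoint for each storage service is formed from
# the unique account name plus that service's standard domain suffix.
def service_endpoints(account_name: str) -> dict:
    suffixes = {
        "blob": "blob.core.windows.net",
        "file": "file.core.windows.net",
        "queue": "queue.core.windows.net",
        "table": "table.core.windows.net",
    }
    return {svc: f"https://{account_name}.{suffix}"
            for svc, suffix in suffixes.items()}

endpoints = service_endpoints("mystorageacct")  # hypothetical account name
print(endpoints["blob"])  # https://mystorageacct.blob.core.windows.net
```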
Let's see how to create a storage account in Azure portal and discuss some of the
important settings associated with the storage account:
Step 1: Log in to your Azure portal home screen and click on "Create a
resource". Then type in storage account in the search box and click on
"Storage account".
Step 2: Click on create, you will be redirected to Create a storage account
window.
Step 3: First, select the subscription (you must choose one whenever you
create any resource in Azure), and second, choose a Resource Group. In our
case, the subscription is "Free Trial".
Use your existing resource group or create a new one. Here we are going to create
a new resource group.
Step 4: Then, fill in the storage account name; it should be all lowercase
and unique across all of Azure. Then select your location, performance tier,
account kind, replication strategy, and access tier, and click next.
Step 5: You are now on the Networking window. Here, you need to select the
connectivity method, then click next.
Step 6: You are now on the Advanced window, where you need to enable or
disable security, Azure Files, data protection, and Data Lake Storage, and
then click next.
Step 7: Now, you are redirected to the Tags window, where you can provide tags
to classify your resources into specific buckets. Put the name and value of the tag
and click next.
Step 8: This is the final step where the validation has been passed, and you can
review all the elements that you have provided. Click on create finally.
Now our storage account has been successfully created, and a window will appear
with the message "Your deployment is complete".
Click on "Go to resource", and the following window will appear.
You can see all the values that you selected for the different configuration
settings when creating the storage account.
Activity Log: We can view an activity log for every resource in Azure. It
provides the record of activities that have been carried out on that particular
resource. It is common for all the resources in Azure.
Access Control: Here, we can delegate access for the storage account to different
users.
Tags: We can assign new tags or modify the existing tags here. There is also
an option to diagnose and solve problems, if we run into any.
Events: We can subscribe to events that happen within this storage account;
the handler can be either a logic app or a function. For example, when a
blob is created in the storage account, that event can trigger a logic app
with some metadata of that blob.
Storage explorer: This is where you can explore the data residing in your
storage account in terms of blobs, files, queues, and tables. There is also
a desktop version of Storage Explorer that you can download and connect
with; this one is the web version.
Access Keys: We can use these to develop applications that access the data
within the storage account. However, we might not want to hand out the
access keys directly; instead, we may wish to create SAS keys. We can
generate specific SAS keys with limited access for a limited period and
provide that SAS signature to our developers. The access keys, by contrast,
give blanket access, so we recommend not giving them to anyone other than
the owner of the storage account.
(SAS) Shared access signature: Here, we can generate SAS keys with limited
access and for a limited period, and provide that information to developers
who are building applications against the storage account. A SAS is used to
access data stored in the storage account.
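The idea behind a SAS can be sketched with standard-library HMAC signing: the granted permissions and an expiry are bound into a token signed with the account key. This is a simplified illustration, not Azure's actual string-to-sign format; real applications should use the Azure SDK.

```python
# Simplified illustration of a shared access signature: the granted
# permissions and expiry are signed with the account key (HMAC-SHA256),
# so the service can verify the token without storing it. This is NOT
# Azure's exact SAS format; use the Azure SDK in real applications.
import base64
import hashlib
import hmac

def make_sas_token(account_key: str, resource: str,
                   permissions: str, expiry: str) -> str:
    string_to_sign = "\n".join([resource, permissions, expiry])
    digest = hmac.new(account_key.encode(), string_to_sign.encode(),
                      hashlib.sha256).digest()
    signature = base64.b64encode(digest).decode()
    return f"sp={permissions}&se={expiry}&sig={signature}"

# Hypothetical key and blob path, read-only until the given expiry.
token = make_sas_token("hypothetical-key", "/container/report.docx",
                       "r", "2025-01-01T00:00:00Z")
```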
Firewalls and Virtual networks: Here, we can configure networking so that
only connections from certain virtual networks or certain IP address ranges
are allowed to reach this storage account.
We can also configure advanced threat protection and configure the storage
account to host a static website.
Properties: Here we can see the properties related to the storage account.
These are the main settings we can configure; the remaining settings relate
to the individual services within the storage account, for example blob,
file, table, and queue.
CHAPTER 8: CLOUD APP SERVICE
Azure App Services
The most fundamental building block of Azure App Service is the hosting
environment, of which there are two types: the App Service plan and the App
Service environment. The App Service environment is a more sophisticated
version of the App Service plan and comes with many more features. Within
these, we can host several kinds of apps, such as web applications, web
jobs, batches, APIs, and mobile back-end services that can be consumed from
our mobile front end.
Several other services are closely related to the apps within the App
Service plan: a notification hub that we can use to push notifications to
mobile devices, and Mobile Engagement, which we can use to carry out mobile
analytics.
Apart from these, there is one more important service when it comes to APIs:
API management. API management can act as a wrapper around our API apps when
we expose those APIs to the outside world. It comes with features such as
throttling and security, and it is beneficial if we want to commoditize our
APIs and sell them to the outside world.
There are two ways to enable communication between apps in the App Service
plan and apps installed on virtual machines within a virtual network. One
way is to establish a point-to-site VPN between the apps in the App Service
plan and the virtual network, through which the apps can communicate with
each other. The second way applies if we have an App Service environment:
because it is deployed into the virtual network itself, the apps within that
App Service environment can seamlessly communicate with the apps installed
on virtual machines within the virtual network.
And finally, there are two important things. The first one is security, and the
second one is monitoring to secure and control the App services environment.
An App Service plan denotes a set of features and capacity that can be
shared across multiple apps in the same subscription and geographical
region. One or more apps can be configured to run on the same computing
resources.
Environment features
o Development frameworks: App Service supports a variety of development
frameworks, including ASP.NET, classic ASP, Node.js, PHP, and Python, all
of which run as extensions within IIS.
o File access
o Local drives: an operating system drive (the D:\ drive), an
application drive, and a user drive (the C:\ drive).
o Network drives: each customer's subscription has a reserved
directory structure on a specific UNC share within a data center.
o Network access: application code can use TCP/IP and UDP based protocols
to make outbound network connections to internet endpoints that expose
external services.
Azure App Service Web Apps is a service for hosting web applications.
Step 1: Click on create a new resource and search for App Service Plan to
create it.
Step 2: Fill in all the required details and select the SKU size, as shown
in the figure below. Then click on create.
Step 3: Your app service plan will be created. You can now explore and modify
it as per your requirement.
Azure Web App service lets us quickly build, deploy, and scale enterprise-grade
web, mobile, and API apps running on any platform. It helps us to meet rigorous
performance, scalability, security, and compliance requirements while using a
fully managed platform to perform infrastructure maintenance.
Creating a Web App and deploying an application into an Azure Web App from
Visual Studio
Step 1: Click on create a resource and type in the web app. After that, click on
the web app and then click on Create.
Step 2: You are now on the Web App creation page. Fill in all the required
details, then click on review + create.
Step 3: Click on create, then you will be redirected to the following page.