
CHAPTER 1: INTRODUCTION OF CLOUD

1. History of Cloud Computing


2. Basic Concepts
3. Benefits
4. Public cloud providers
5. Characteristics of cloud

CHAPTER 2: CLOUD COMPUTING ARCHITECTURE AND


COMPONENTS
1. Front End
2. Back End
3. Important Components of Cloud Computing Architecture
4. Benefits of Cloud Computing Architecture

CHAPTER 3: CLOUD SERVICE MODELS


1. SaaS
2. PaaS
3. IaaS
4. FaaS
5. NaaS

CHAPTER 4: RESOURCE GROUPS


1. Why we need resource groups
2. How To Use Azure Resource Groups
3. Benefits of Azure Resource Manager
4. Understand Management Scope
5. Azure Resource Manager core concepts

CHAPTER 5: CLOUD NETWORKING


1. VNets
2. IP addresses
3. Private IP addresses
4. Public IP addresses
5. Subnets
6. Network interfaces
7. DNS

CHAPTER 6: VMs
1. What are virtual machines?
2. Types of Virtual Machine
3. Identifying workloads for Azure Virtual Machines
4. Virtual machine sizing
5. Two tiers

CHAPTER 7:CLOUD STORAGES


1. Storage account
2. Storage types
3. Controlling access to storage

CHAPTER 8: CLOUD APP SERVICE


1. Web Apps
2. Mobile Apps
3. API Apps
4. Logic Apps

CHAPTER 9: CLOUD LAB RESOURCE DEPLOYMENTS


1. Creating resource groups
2. Creating VNET
3. Creating storages
4. Creating VM
5. Creating availability zone and load balancing

PREPARED BY: ABDULKADIR GELLE


CHAPTER 1: INTRODUCTION OF CLOUD

History of Cloud Computing

The idea of cloud computing is often traced back to the early 1960s and to J.C.R. Licklider (Joseph Carl Robnett Licklider), an American psychologist and computer scientist. His network research on ARPANET (Advanced Research Projects Agency Network), which aimed to connect people and data all around the world, introduced the ideas behind the cloud computing we know today. Born on March 11, 1915 in St. Louis, Missouri, US, Licklider completed his initial studies at Washington University in 1937, receiving a BA degree with three specializations: physics, mathematics, and psychology. In 1938 he completed his MA in psychology, and he received his Ph.D. from the University of Rochester in 1942. His interest in information technology, together with his years of service and achievements in different areas, led to his appointment as head of the IPTO at ARPA (the US Department of Defense Advanced Research Projects Agency) in 1962. His vision led to ARPANET, a forerunner of today's internet.

The cloud computing story continued through the 21st century. In 2006, Amazon created AWS (Amazon Web Services) and launched its Elastic Compute Cloud (EC2). In 2008, Google introduced the beta version of Google App Engine. Microsoft announced its cloud computing service, Microsoft Azure, in 2008 and later released it for building, testing, deploying, and managing applications and services. Google Compute Engine was released in 2012 and rolled out to the public by the end of December 2013. Oracle introduced Oracle Cloud with three primary services for business (IaaS, PaaS, and SaaS). Today, a large share of the workloads running on Microsoft Azure are Linux-based.
Basic Concepts of the Cloud

Cloud computing has become an ideal way to deliver solutions and enterprise applications for businesses across the globe. Its history starts in the early 1960s, when the concept of time-sharing gained momentum through Remote Job Entry, a terminology associated with IBM and DEC (Digital Equipment Corporation). Building on this growth, full time-sharing systems were available by the early 1970s. By the 1990s, a few telecommunications giants started offering VPN (virtual private network) services at affordable costs; by switching traffic to balance server utilization, they could use their overall network bandwidth more effectively. By 1994, the cloud metaphor had begun to be used for virtualized services.

Why Was Cloud Computing Invented?


In 1963, the Defense Advanced Research Projects Agency funded a $2 million project to develop technology that would allow a computer to be used by two or more people simultaneously. The project used giant computers with reels of magnetic tape for memory, a forerunner of what is now called cloud computing: the machine acted as a primitive cloud, allowing up to three people to connect at once. Expanding on this vision, J.C.R. Licklider's work led in 1969 to ARPANET (a primitive version of the internet). He advanced a vision he called the Intergalactic Computer Network, in which anyone on the globe could be interconnected by means of computers and could access information from anywhere at any time. The term "virtualization," coined in the 1970s, marked a further shift; it now describes the creation of a virtual machine that acts like a fully functional real computer system. The widespread use of virtual computers in the 1990s, and businesses offering virtualized services, led to the development of cloud computing infrastructure.

Benefits of cloud

1. Implementing a cloud-based server, or being on the cloud, is cost-effective: it makes organizational data and information easier to access, which saves time and up-front investment. Moreover, the cloud provides storage facilities where you pay only for the storage you actually use. Investing in cloud applications can also support revenue growth.
2. Security is a big concern for on-site viewing and storage of files and documents, and cybercriminals target remotely accessed data as well. One key to enhancing security is encrypting the data that is transmitted over networks; beyond that, various security features are available to users.
3. Rather than hosting an application on a local server, the cloud gives a lot of flexibility in hosting it on its platform, which increases the overall efficiency of the organization.
4. Cloud computing provides greater mobility and connectivity for connecting with people and information. It allows users to access data from anywhere, at any time.
5. Cloud-based applications update and upgrade their software versions without manual intervention.
6. Cloud storage helps prevent data loss. Valuable data stored only on local hardware can be corrupted by viruses, malware, or hardware malfunctions; keeping a copy in the cloud protects against such loss and gives better flexibility and access at any time.
Public Cloud Providers

A public cloud provider is a service provider that makes cloud services available over the public internet, so that users can consume the provider's storage, applications, and other capabilities, scale resources on demand, and share them among peers in the same organization. This reduces the server capacity organizations must run themselves while keeping their data available in the cloud.

Top 7 Public Cloud Providers

1. Amazon Web Services

Amazon Web Services (AWS) is the set of cloud services offered by Amazon to companies and individuals on a pay-as-you-go basis. It offers infrastructure, platform, and software as services, and AWS has become almost synonymous with public cloud because of the breadth of services it offers globally. Three of its best-known products are EC2, S3, and Glacier: EC2 offers virtual machine services, S3 is Amazon's object storage system, and Glacier is its low-cost archival storage service. Database storage and computing power with fast delivery make AWS popular among public cloud users. Businesses of any size can adopt AWS easily, which, together with its scalability, is a major advantage.
2. Microsoft Azure

Azure is AWS's closest competitor, with a comparably broad set of services and different ways to achieve the same goals. Microsoft manages the data centers and helps users build, deploy, test, and manage applications. Like AWS, it offers infrastructure, platform, and software as a service, with offerings in analytics, virtual computing, networking, and storage. Along with running applications, it helps upgrade and improve applications to their maximum capabilities. Services can be scaled up or down based on need, and the cost incurred depends on the services you use. It is also considered one of the more approachable cloud platforms for users without a programming background.

3. Google Cloud Platform

The competition between tech giants to offer cloud platforms benefits users, who can compare the various platforms and choose the one that best suits their needs. GCP runs on the same infrastructure Google uses for its own products and can reach users globally. Applications can be built and run in GCP with the help of the data centers and analytics built into the service. Costs are managed within the platform, and data storage can be managed efficiently. Along with cloud management, networking, artificial intelligence, and security are other services offered in GCP. Any data sets, structured or unstructured, can be managed on this platform. It provides infrastructure as a service and integrates well with the Linux operating system for running scripts and managing cloud operations. Workloads built around SQL Server or other Microsoft-specific technologies can be harder to bring to GCP, and this remains a disadvantage for the platform in outgrowing other cloud services.
Other public cloud providers are less prominent among users but are still used by a smaller fraction of the market. They are discussed below.

4. Oracle Cloud

Oracle Cloud is the cloud service offered by Oracle for building, testing, and deploying applications, so that Oracle users worldwide can use its servers and applications. Though not the most commonly preferred cloud service, it has infrastructure and platform services whose networking and storage please its users. High-capacity applications can be run in the cloud service easily, and it can be customized so that other applications work well alongside Oracle applications. Oracle's own SQL database has a great many users, so transferring data from SQL databases to Oracle Cloud is not a big challenge. Oracle also lets users transfer data out of its cloud free of charge and charges customers only for object storage services. Its Cloud ERP offering differentiates it from other cloud services, and improving Cloud ERP could bring Oracle Cloud into the front rank of cloud providers.

5. IBM Cloud

IBM Cloud is the cloud service offered by IBM, providing both platform as a service and infrastructure as a service. Users can deploy and use virtual resources, which makes storage and networking easily accessible. The service is reliable even for demanding workloads, which makes IBM Cloud stand out among others. IBM uses agile methodologies so that data transfer happens faster from one platform to another, and some open-source technologies are used in IBM Cloud to support DevOps operations.
6. Alibaba Cloud

Offered by Alibaba, Alibaba Cloud has emerged as a reliable service provider that integrates different services with secure technology. Data storage, big data processing, and relational databases are the major services offered. SQL is the database technology used by Alibaba. Scalability, availability, and efficiency are the major advantages of Alibaba Cloud. Digital transformation can be done easily and at a low price with the help of Alibaba Cloud services, and data management and cloud computing are offered at low cost to attract users.

7. Salesforce Cloud

Salesforce Cloud is a cloud platform that supports customer relationship management, with a strong focus on sales and marketing. It supports both business-to-business and business-to-customer contexts, and all of its applications are sales-oriented. Extra features are offered for additional fees, which helps the cloud service grow and flourish in the sales market.

Characteristics of cloud

1. On-demand self-service. Cloud services are generally provisioned as they are required and need minimal infrastructure configuration by the consumer. As a result, users of cloud services can quickly set up the resources they want, typically without having to involve IT specialists.
2. Broad network access. Consumers generally access cloud services over a network connection, usually either a corporate network or the internet.
3. Resource pooling. Cloud services use a pool of hardware resources that consumers share. A hardware pool consists of hardware from multiple servers that are arranged as a single logical entity.
4. Rapid elasticity. Cloud services scale dynamically, obtaining additional resources from the pool as workloads intensify and releasing resources automatically when they are no longer needed.
5. Measured service. Cloud services generally include a metering capability, making it possible to track relative resource usage by the users, or subscribers, of the services.
CHAPTER 2: CLOUD COMPUTING ARCHITECTURE AND
COMPONENTS

Cloud Computing Architecture

Cloud computing architecture is divided into two parts: the front end and the back end, which communicate via a network, usually the internet. A diagrammatic representation of cloud computing architecture is shown below:

Cloud Computing Architecture


Front-End

 It provides the applications and interfaces that are required for the cloud-based service.

 It consists of client-side applications, such as web browsers like Google Chrome and Internet Explorer.

 Cloud infrastructure is the only component of the front-end. Let's understand it in detail.

Front-end - Cloud Computing Architecture

 Cloud infrastructure consists of hardware and software components such as data storage, servers, virtualization software, etc.

 It also provides a graphical user interface to end-users so they can perform their tasks.
Back-End

It is responsible for monitoring all the programs that run the application on the front-end.

It has a large number of data storage systems and servers. The back-end is an important and large part of the whole cloud computing architecture, as shown below:

Back-end - Cloud Computing Architecture

Components Of The Back-End Cloud Architecture

Application

 It can be either software or a platform.

 Depending on the client's requirement, the application provides the result to the end-user (with resources) in the back end.

Service

 Service is an essential component of cloud architecture.

 Its responsibility is to provide utility in the architecture.

 In a cloud, the services most widely used among end-users are storage, application development environments, and web services.


Storage

 It stores and maintains data such as files, videos, and documents over the internet.

 Some popular examples of storage services are:

 Amazon S3

 Oracle Cloud Storage

 Microsoft Azure Storage

 Its capacity varies depending on the service providers available in the market.

Management

 Its task is to allot specific resources to specific tasks; it simultaneously performs various functions of the cloud environment.

 It helps manage components such as applications, tasks, services, security, data storage, and cloud infrastructure.

 In simple terms, it establishes coordination among cloud resources.

Security

 Security is an integral part of back-end cloud infrastructure.

 It provides secure cloud resources, systems, files, and infrastructure to end-users.

 It also applies security management to the cloud servers with virtual firewalls, which helps prevent data loss.

Benefits of Cloud Computing Architecture

The cloud computing architecture is designed in such a way that:

 It solves latency issues and improves data processing requirements

 It reduces IT operating costs and gives good access to data and digital tools

 It helps businesses to easily scale up and scale down their cloud resources

 It has a flexibility feature which gives businesses a competitive advantage

 It results in better disaster recovery and provides high security


 It automatically updates its services

 It encourages remote working and promotes team collaboration

Going ahead, let's have a look at the components of cloud computing architecture.

Cloud Computing Architecture Components

Some of the important components of Cloud Computing architecture that we will be looking into are as follows:

 Hypervisor

 Management Software

 Deployment Software

 Network

 Cloud Server

 Cloud Storage

Virtualization and Cloud Computing


The main enabling technology for Cloud Computing is Virtualization.
Virtualization is the partitioning of a single physical server into multiple logical
servers. Once the physical server is divided, each logical server behaves like a
physical server and can run an operating system and applications independently.
Many popular companies like VMware and Microsoft provide virtualization
services. Instead of using your PC for storage and computation, you can use
their virtual servers. They are fast, cost-effective, and less time-consuming.

For software developers and testers, virtualization comes in very handy. It allows developers to write code that runs in many different environments for testing.

Virtualization is mainly used for three purposes: 1) network virtualization, 2) server virtualization, and 3) storage virtualization.

Network Virtualization: This is a method of combining the available resources in a network by splitting the available bandwidth into channels. Each channel is independent of the others and can be assigned to a specific server or device in real time.

Storage Virtualization: This is the pooling of physical storage from multiple network storage devices into what appears to be a single storage device managed from a central console. Storage virtualization is commonly used in storage area networks (SANs).

Server Virtualization: This is the masking of server resources, such as processors, RAM, and the operating system, from server users. Server virtualization aims to increase resource sharing and reduce the burden and complexity of computation for users.

Virtualization is the key that unlocks the cloud: what makes it so important for the cloud is that it decouples software from hardware. For example, PCs can use virtual memory to borrow extra memory from the hard disk, which usually has far more space than RAM.
CHAPTER 3: CLOUD SERVICE MODELS

Cloud service models

Beyond the different cloud types (public, private, and hybrid), you'll need to know which cloud service model you would like to deploy within them. There are many different service models available for the cloud, with more being defined all the time. The three most common models are Software as a Service, Platform as a Service, and Infrastructure as a Service. Each provides a different level of manageability and customization for your solution.

SaaS

SaaS (Software as a Service) refers to the software licensing and delivery model in which third-party cloud providers host applications and services over the internet and the user licenses the software services they use. SaaS is one of the foremost cloud computing services; it gained importance after the late 1990s by extending the idea of the ASP (application service provider) model. It is typically web-based, and since around 2012 SaaS vendors have continued to develop and manage their own software. It has become widespread because of its flexibility, as it requires only a web browser to run.

What is SaaS Cloud?

SaaS represents the largest segment of cloud application services and is growing immensely in today's digital market. As the name suggests, it provides the necessary software as a service to the user, managed by a third party. The software applications provided can be run directly in any web browser without installing the software. SaaS is essentially a web-based model that delivers software on demand: cloud vendors host the applications on their servers, maintain the databases, and transmit data over the internet to the end-user.

The SaaS vendor offers a great advantage over other service providers by allowing buyers to outsource most of their IT responsibilities without having to invest in hardware platforms to host the software. Apps accessed via a web browser have become the delivery model for various business applications, including office software, management software, accounting, CRM tools, and talent acquisition tools across departments. SaaS can be categorized into two types:

1. Vertical SaaS
2. Horizontal SaaS

SaaS has further benefits, discussed below; it implements all the core features of cloud computing, making for a robust experience.
Why We Use SaaS Cloud?

SaaS is a model in which organizations rent software delivered as a cloud service instead of buying and hosting it themselves. For its simplicity and reliability, businesses from small to large scale use SaaS, which needs only an internet connection and a web browser to run. We prefer using the SaaS cloud for various reasons; I will discuss some of the basic points, which are as follows:

 Enterprises of any scale appreciate the low cost of SaaS services: we can purchase as much software as we want, of our choice, and pay according to usage.
 With SaaS, we do not require a dedicated IT specialist team. So when resources are short, or when application development and maintenance become a big issue, SaaS comes into the picture.
 The SaaS vendor's experienced professionals build and maintain the software, and we can rely upon them.
 Using the SaaS cloud, everything is processed and stored in the cloud itself, which makes it easy to store hundreds or thousands of files and access them from the cloud whenever we want.

Advantages of SaaS Cloud

There has been a significant paradigm shift in technology with the use of cloud services. The SaaS cloud offers quite a wide variety of advantages that make our lives much easier. We will look at these advantages below:

 Easy to afford: The SaaS model offers subscription-based services that bundle the service cost, maintenance, and upgrade cost, lowering the total cost compared with traditional on-premises systems. Because of this affordability and wide range of features, many organizations have started using SaaS.
 Fast deployment: To use SaaS services, we need only a web browser and an internet connection. SaaS solutions remove the headache of installing software on each system; with a fast, stable internet connection, accessing the software is very quick and components can be deployed almost instantly.
 No infrastructure setup required: Users do not have to worry about cloud infrastructure or hardware costs. The SaaS vendor takes on all the responsibility for maintaining and configuring the infrastructure.
 Fast upgrades: The SaaS cloud supports on-demand updates and hardware updates when needed. Upgrades are very fast because they eliminate downloading software and patches, and systems are upgraded or downgraded at the user's choice.
 Data backup and security: SaaS providers also take responsibility for backing up data on a daily, weekly, or monthly basis, which can prove very fruitful at times. SaaS solutions initiate automatic backups, maintaining data integrity and security and leaving the user tension-free.
 Flexibility: Users can access SaaS services from anywhere around the globe, which makes life easier for people who work from home. It requires only a strong internet connection to stay connected to cloud services.
Disadvantages of SaaS Cloud

Most companies prefer working with the SaaS cloud platform because it provides excellent features and working flexibility. Despite its many advantages, the SaaS cloud also has some disadvantages:

 Data security breaches: Since the SaaS cloud runs over the internet and all data is collected and stored on the provider's servers, there is a risk of data security breaches. Unauthorized access and misuse of data can pose a big threat to the organization.
 Termination of service: Organizations that use SaaS applications can lose their data if the cloud provider terminates its services due to lawsuits or other reasons.
 Performance issues: Organizations running SaaS over slow internet connections can face performance problems, including slow daily backups and slow synchronization of apps and services.

PaaS

PaaS is a type of cloud computing product in which a service provider supplies customers with a platform that enables them to build, operate, and manage business applications without the infrastructure such software development normally requires. Because developers and other users do not see the underlying infrastructure, PaaS architectures are similar to serverless computing, where a cloud service provider owns and operates the servers and controls resource allocation.

What is PaaS?

Platform as a service is a cloud-based development model that enables users to deliver everything from simple day-to-day applications to centralized applications required by large organizations.

 PaaS supports a smooth workflow in the cloud that covers the entire application development phase.
 In the PaaS model, the cloud service provider handles scalability at the back end, and the end-user does not have to worry about managing the infrastructure.
 With PaaS we also get additional resources, including database management systems, programming languages, libraries, and various software development tools, all running in the cloud.
 Among its many uses, it cuts down the price and headache of installing extra software licenses, core applications, and other platform resources.
 PaaS architectures hide the underlying infrastructure from developers and other end users. As a result, the model resembles serverless computing and function-as-a-service architectures, where the cloud service provider manages and runs the servers and controls the distribution of resources.
 It helps us organize and maintain useful applications and services, while third-party providers maintain every other service in the cloud.

How PaaS Works?

Platform as a Service is one of the best cloud computing technologies after IaaS: it is less expensive, has a well-equipped management system, and can outperform older traditional systems.

Nevertheless, how exactly does it work? I will give some basic points covering the working principles:

 PaaS is a layer sandwiched between SaaS and IaaS that contains all the middleware platform tools.
 PaaS provides a pay-per-use feature. Instead of having to buy, configure, develop, maintain, and install every application, and keep a maintenance team to service them daily, one can simply use the platform and pay only for the usage required. This makes for proper and efficient utilization.
 Various types of PaaS service providers exist, with very useful features. They:

1. Provide the user with basic data storage and a server for the computing systems required for the service.
2. Support the use of powerful web engines and platforms, including Google applications.
3. Support social networking sites such as Facebook.
4. Support everyday services that genuinely change daily life.

What’s included in PaaS?


It includes a variety of features and services. The main offerings included by the
vendors are:

 Development Tools: PaaS vendors offer necessary tools for software


development, including a debugger, a compiler, and other essential tools
that work together as a framework. The specific tools offered will depend
on the vendor, but PaaS offerings include everything a developer needs to
build their application.
 Middleware: Platforms offered as a service include middleware so that
developers do not have to build it by themselves. Middleware is software
that is sandwiched between user applications and the machine’s operating
system. Middleware is necessary for running an application.
 Operating Systems: A PaaS vendor maintains the operating system that
developers work on and the application runs on.
 Database Management: PaaS vendors administer and maintain database systems, and they usually provide developers with a database management system as well.
 Infrastructure: PaaS is the next layer up from IaaS and includes everything in IaaS. A PaaS provider either manages the servers and storage itself or purchases them from an IaaS provider.

Pros and Cons of PaaS

There are many advantages; I have listed a few of them below:

 Simplified development: Programmers can focus on development and innovation without worrying about infrastructure; cloud-computing tools shorten development time by letting teams build apps with less effort and as little code as possible.
 Flexibility and portability: Some PaaS providers give users many choices of platforms, such as PCs, laptops, and other devices, for developing apps more quickly and making them portable.
 Affordability: PaaS makes it affordable for individuals and organizations to use cloud software of their choice without having to install extra software or pay the extra cost of installing and maintaining it.
 Collaboration across development teams: Since applications built with PaaS can be accessed over the internet, teams can work together globally, irrespective of location.
 Efficiency: PaaS efficiently manages the application development phases in the cloud, including testing, managing, and updating apps at regular intervals within the same cloud environment, providing a quality infrastructure.

There are a few disadvantages of PaaS as well:

 Data privacy: Because data lives on the provider's servers most of the time, keeping it private is a significant risk that must be managed.
 Integration: Data mismatches can happen during integration because data is stored both in local storage and in the cloud. This makes the two hard to reconcile, and users may find it difficult to access the data they want.

IaaS
IaaS, known as Infrastructure as a Service, is a cloud computing platform model that was formerly termed Hardware as a Service (HaaS). Cloud computing has three distinct layers, SaaS, PaaS, and IaaS, of which IaaS is one. Here the customer organization outsources its IT infrastructure, accessing resources such as servers, storage, and virtual machines over the internet. Those resources are available on the cloud computing platform on a pay-per-use model: clients are billed only for the services used, which helps reduce the cost and complexity of buying and managing a physical server.

What is IaaS?

 IaaS, defined as "Infrastructure as a Service," provides virtualized computing resources to clients over the internet. Also known as hardware as a service, it offers virtualized hardware, termed computing infrastructure, including virtual servers, storage, network connections, and IP addresses.
 IaaS deployments are distinguished into three models: public, private, and hybrid cloud. In a public cloud, the infrastructure lives on the provider's cloud platform, whereas in a private cloud the infrastructure is hosted at the customer's site. The hybrid cloud combines both models, letting the customer choose the best alternative for each workload.
 In detail, public clouds are the most common method of deploying cloud computing. Microsoft Azure is an example of a public cloud, where cloud utilities like storage and servers are operated by third-party providers and delivered over the web.
 The third-party providers are the cloud service providers who own the hardware, software, and other infrastructure. A public cloud costs users less, as no physical hardware or software needs to be purchased; the user pays only for the services used. Moreover, it provides high reliability and scalability with negligible maintenance.
 A private cloud, as its name says, comprises resources used by one organization only. The private cloud can be installed physically on the organization's premises or hosted by a third-party cloud service provider. Private clouds are mostly used by government agencies, institutions, etc. This model provides more flexibility, greater security, and high scalability.
 A hybrid cloud comprises both a public cloud and a private cloud, between which data and information can be transferred for better flexibility. The public cloud is used where less security is needed, such as webmail, whereas the private cloud is used when security demands are at their peak and for critical business operations. A hybrid cloud is chosen for the better control and cost-effectiveness it provides.
How Does IaaS Work?

IaaS refers to infrastructure, either physical or virtual, that is provided by the cloud provider. IaaS covers many resources, such as network, server, storage, and virtualization, so it is up to the customer to choose resources wisely and as needed. Apart from managing the infrastructure, the provider handles billing too, so the user is billed according to the services rendered.

Benefits of IaaS

IaaS offers lots of benefits to its customers, ensuring affordability and business growth. A few benefits are given below:

 IaaS provides a "pay per use" scheme: the services provided can be used as needed, and users pay only for the services utilized.
 The services provided by IaaS are quite scalable. They ensure that resources are available to users at the desired time and demand, and that unused capacity is not wasted.
 It saves a lot of time and cost, as the cloud service provider is responsible for setting up and maintaining the physical hardware.
 Service remains available and consistent even when individual hardware failures occur.
 It provides great flexibility, as it can scale resources up or down quickly according to customers' needs.
 IaaS lets organizations focus on business growth, since hardly any time is spent on maintaining the infrastructure; IaaS takes care of all of this.
 An IaaS cloud computing platform provides secure data centers with 24/7/365 availability.
Examples of IaaS

 IaaS is highly flexible and is used in both e-commerce and non-e-commerce platforms.
 A suitable example is Amazon Web Services (AWS EC2), which provides scalability for hosting cloud-based applications. EC2 users need not own physical servers; AWS provides a virtual environment to work in. Cost is minimized, and users pay only for the services they book.
 On an e-commerce platform, it is up to the user whether to host applications in the cloud or on-premises. Here too, users pay only for the services actually used (i.e., the hosting plan for the server).
 A virtualized data center can be established to provide cloud hosting options, integrating cloud operations. The data center contains several virtual servers that meet user demand according to business requirements.
 DigitalOcean, another cloud provider founded in 2011, offers IaaS aimed at open-source developers. DigitalOcean mainly provides Droplets, virtual servers that a developer can resize after creating them, letting developers scale and grow their businesses efficiently.
 IaaS can handle big data workloads and integrate with BI tools.
 GCE (Google Compute Engine) is Google's IaaS component, running on the same infrastructure that powers Google Search, Gmail, and other services.
CHAPTER 4: RESOURCE GROUPS

A resource group in Azure is the next level down the hierarchy. At this level,
administrators can create logical groups of resources—such as VMs, storage
volumes, IP addresses, network interfaces, etc.—by assigning them to an Azure
resource group. Resource groups make it easier to apply access controls,
monitor activity, and track the costs related to specific workloads. You decide
how you want to allocate resources to Azure resource groups based on what
makes the most sense for your organization, but as a best practice, we suggest
assigning resources with a similar lifecycle to the same Azure resource group,
so you can easily deploy, update, and delete them as a group.

The resource group collects metadata from each individual resource to facilitate
more granular management than at the subscription level. This not only has
advantages for administration and cost management, but also for applying role-
based access controls.

The underlying technology that powers resource groups is the Azure Resource
Manager (ARM). ARM was built by Microsoft in response to the shortcomings
of the old Azure Service Manager (ASM) technology that powered the old
Azure management portal. With ASM, users could create resources in an
unstructured manner, leading to many challenges in tracking such resources or
understanding their dependencies. This led to huge difficulties that discouraged
people from using Azure, as they would have a big mess once they had to deal
with multiple applications that spanned more than one region.

To better understand this challenge, imagine you wanted to create an application in the old Azure management portal. To do so, you would create the virtual networks, the storage account(s), the cloud services, virtual machines, and potentially many other components of that application, without the ability to group them. If you were to build or deploy more than one application, it would become difficult to find which resources depended on which, what order to create them in, or even which application they belonged to.

How To Use Azure Resource Groups?

Azure Resource Manager, which was announced in 2014 and became generally
available in 2017, addresses this challenge and others by providing a new set of
application programming interfaces (APIs) that are used to provision resources
in Azure. ARM requires that resources be placed in resource groups, which
allows for logical grouping of related resources.

In the figure below, two resource groups are used for grouping: first, resources related to a line-of-business (LOB) application, and second, those belonging to an infrastructure-as-a-service (IaaS) workload.
Although creating a resource group requires specifying a region for it to be
stored in, the resources in that resource group could span multiple regions. The
requirement to have a region for a resource group comes from the need to store
the deployment metadata (definitions) associated with it in a specific location
and does not dictate that resources belonging to it need to be in the same region.
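To make this concrete, here is a minimal sketch of creating a resource group with the Azure SDK for Python. It assumes the azure-identity and azure-mgmt-resource packages are installed and that credentials are available (for example, via an Azure CLI login); the subscription ID and resource group name below are placeholders.

# Minimal sketch: create a resource group with the Azure SDK for Python.
from azure.identity import DefaultAzureCredential
from azure.mgmt.resource import ResourceManagementClient

subscription_id = "<your-subscription-id>"  # placeholder
client = ResourceManagementClient(DefaultAzureCredential(), subscription_id)

# Every resource group needs a name and a region in which to store its
# metadata; the resources placed in it may still live in other regions.
rg = client.resource_groups.create_or_update(
    "demo-rg",                 # hypothetical resource group name
    {"location": "eastus"},
)
print(rg.name, rg.location)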

Benefits of Azure Resource Manager (ARM)

Before ARM, you had to provision resources independently and you had to
have a good understanding of their dependencies and accommodate for them in
deployment scripts. As an example, to create a virtual machine, you needed to
create a storage account, a virtual network, a subnet, etc. first.

On the other hand, ARM can figure out the dependencies of resources that need
to be provisioned before creating the virtual machine and what order they need
to be provisioned in, saving the user from having to repeat their work to fix
unnecessary errors during deployment. In the example above, ARM will
automatically create the virtual network and storage account simultaneously
with the virtual machine. The portal blades walk the user through defining the related resources as part of provisioning the virtual machine. As an example, in the screenshot below, you can see that creating a virtual machine requires specifying its other dependencies in the settings blade, including the virtual network, the subnet, the public IP address, and the storage account, among other things.
ARM provides the ability to provision resources declaratively using JavaScript
Object Notation (JSON) documents. The JSON document may include a
description of multiple resources that need to be provisioned in a resource
group, and ARM knows how to provision them accordingly. This provides an
added flexibility and ease in managing resources belonging to resource groups.
Using JSON in this manner allowed for creating resource templates that would
make it much faster to provision resources belonging to a resource group. This
also allowed for third party providers to make hundreds of templates available
to provision different resources that correspond to many deployment scenarios.
Those templates are available from code repositories such as GitHub or
the Azure Marketplace.
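As an illustration of this declarative approach, below is a minimal sketch of what such a template can look like, written here as a Python dictionary and serialized to JSON; the storage account name, API version, and SKU are illustrative values, not taken from any specific published template.

# Minimal sketch of an ARM template that declares one storage account.
import json

template = {
    "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json#",
    "contentVersion": "1.0.0.0",
    "resources": [
        {
            "type": "Microsoft.Storage/storageAccounts",
            "apiVersion": "2022-09-01",     # illustrative API version
            "name": "demostorage123",       # must be globally unique in practice
            "location": "eastus",
            "sku": {"name": "Standard_LRS"},
            "kind": "StorageV2",
        }
    ],
}

# Serialize to the JSON document that ARM actually consumes.
print(json.dumps(template, indent=2))

ARM reads such a document, works out the dependency order, and provisions everything the template declares.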
In the ARM architecture, resource groups not only become units of deployment,
but also units of management of related resources. This allows users to
determine the cost associated with the whole resource group, making
accounting and chargebacks more manageable. It also allows role-based access
control (RBAC) at the resource group level, making it much easier to manage
user access to the resources in the group. When users log into the Azure Portal,
they will only see resource groups they have access to and not others within the
subscription. Administrators will still be able to assign access control for users
to individual resources within the resource group based on their roles.
Management of Azure Resource Group

Aside from scripting (e.g., using PowerShell or the Azure CLI), resource groups can only be managed in the Azure portal. A resource group item is available in the portal's navigation menu by default and can be used to open the RG management "blade," as you can see in the screenshot below.

The RG management blade provides a straightforward way to create and manage resource groups. It also provides a flexible, customizable, high-level view of the available resource groups in a specific Azure subscription. The user can select which columns to see in this view, based on their role and interests, and may use filters to zoom in on resource groups specific to a subscription or a location.
The new portal was designed to work well with the ARM concepts and
architecture providing great flexibility and user experience in how resources are
displayed and managed. The portal displays blades to the user as additional
resources are created. A blade is a self-contained page (or set of pages) that
allows the user to view and manage all aspects of the resource they have created
using a step-by-step wizard-like approach for building the resource.

Keys to Successful Azure Resource Groups

Despite the advantages resource groups and ARM bring to Azure users, it is
important to use them with care and good insight. The key to having a
successful design of resource groups is understanding the lifecycle of the
resources that are included in them. For instance, if an application requires
different resources that need to be updated together, such as having a SQL
database, a web app, a mobile app, etc., then it makes sense to group these
resource in the same resource group. It is important, however, to use different
resource groups for dev/test, staging, or production, as the resources in these
groups have different lifecycles.

Conclusion

In this guide to Azure Resource Manager, we explained why Microsoft built the ARM architecture and introduced the concept of resource groups at a high level. It is this architectural change that allowed for easier and more scalable use of Azure and put it on the growth path it enjoys now.
CHAPTER 5: CLOUD NETWORKING
A virtual network allows companies and individuals to create a network that exists between computers and servers regardless of their physical location. This allows for many benefits, from remote access capabilities to making it easier to troubleshoot and fix issues. In this chapter, we'll discuss virtual networking and the role it plays in business.

What is Virtual Networking?

A virtual network is a network in which devices, servers, virtual machines, and data centers are connected through software rather than physical cabling. This allows the reach of the network to be expanded as far as needed for peak efficiency, in addition to numerous other benefits.

A local area network, or LAN, is a kind of wired network that usually reaches only within the domain of a single building. A wide area network, or WAN, is another kind of wired network, but the computers and devices connected to it can stretch over half a mile in some cases.

Conversely, a virtual network doesn't follow the conventional rules of networking because it isn't wired at all. All devices in the network interact with each other over internet technology, giving them a further reach than they would have if they were wired; the network itself is as limitless as the internet. As with many cloud services, when a service provider offers third-party networking services to companies, this is sometimes referred to as Networking-as-a-Service, or NaaS.
Virtual Network: How it Works

A virtual network uses modern technology to create an extended network that works wirelessly. This includes the following components; a short provisioning sketch follows the list.

 vSwitch software: Virtualization software on host servers that allows you to set up and configure a virtual network.
 Virtual network adapter: Creates a gateway between networks.
 Physical network: Required as a host for the virtual network infrastructure.
 Virtual machines and devices: Instruments that connect to the network and allow various functionality.
 Servers: Part of the network host infrastructure.
 Firewalls and security: Designed for monitoring and stopping security threats.
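As referenced above, here is a minimal, hypothetical sketch of provisioning a cloud virtual network, using the Azure SDK for Python (azure-mgmt-network); the resource group, network name, and address ranges are illustrative assumptions.

# Hypothetical sketch: create a virtual network with one subnet in Azure.
from azure.identity import DefaultAzureCredential
from azure.mgmt.network import NetworkManagementClient

client = NetworkManagementClient(DefaultAzureCredential(), "<your-subscription-id>")

poller = client.virtual_networks.begin_create_or_update(
    "demo-rg",       # assumed to exist already
    "demo-vnet",
    {
        "location": "eastus",
        "address_space": {"address_prefixes": ["10.0.0.0/16"]},
        "subnets": [{"name": "default", "address_prefix": "10.0.0.0/24"}],
    },
)
vnet = poller.result()  # blocks until the network is provisioned
print(vnet.name)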

There are three classes of virtual networks, VPN, VLAN, and VXLAN:

VPN

VPN stands for virtual private network. Essentially, a VPN uses the internet to connect two or more existing networks. This internet-based virtual network allows users to log in from anywhere to access the physical networks that are connected. VPNs are also used for masking internet use on public WiFi and ensuring secure browsing. A VPN works by attaching routing information to packets that directs them to the applicable address; in doing so, a tunnel is created that encrypts the traffic and makes it possible to access information remotely. VPNs provide a small-scope, fully virtual network that uses the internet to allow people to connect.
VLAN

A virtual LAN, or VLAN, uses partitions to group devices on a LAN into domains, with resources and configurations applied to each. Using a VLAN allows for better security, monitoring, and management of the devices and servers within a specific domain. This is especially true for large networks, which may be more vulnerable to attack when domains are not used and monitored individually.

VXLAN

VXLAN stands for virtual extensible local area network. In a VXLAN, the layer 3 network infrastructure provides tunnels for layer 2 traffic. Virtual switches create endpoints for each tunnel, and another piece of technology, a physical or virtual device, routes data between the endpoints.

Benefits of Virtual Networking

There are many benefits to virtual networking. These include:

 Remote work capabilities: Virtual networking allows people to access their networks from anywhere in the world.
 Digital security: By using virtual networking, you can make your networks more secure through features like tunneling encryption and domain segments.
 Streamlined hardware: By using vSwitches to route functions from one place to another, enterprise businesses can reduce the amount of hardware they need to access, maintain, and monitor.
 Flexibility and scalability: Because it's virtual and little hardware is required to create a virtual network, it's easier to scale at a lower cost of ownership. Scaling takes a few tweaks to software and configurations but does not necessarily require a lot of equipment.
 Cost savings: By reducing hardware, businesses save money on hardware costs and maintenance.
 Productivity: Because networks can be configured more quickly, teams spend less time waiting on infrastructure and more time working.
Virtual Networking and Contemporary Business

In a changing world, virtual networking plays an important role in any digital business model. It's an evolution of technology that addresses the need for remote accessibility, security, flexibility, scalability, and cost savings. Like many services that enterprise businesses can outsource, outsourcing networking has benefits in terms of time, money, and valuable resources that can be better spent ensuring all of your technology is meeting your business needs.

As societal conditions require that more people work remotely, virtual networking and NaaS services will continue to become more and more essential to all businesses. Increasing virtual networking capabilities may be the next phase in digital transformation for businesses that have already become digital enterprises. For instance, expanding your company's virtual network beyond a simple VPN for the added boost in productivity is one way businesses can continue to evolve in the digital world.
What is an IP Address?

"IP address" is short for "Internet Protocol address." IP addresses are a numbering scheme for giving a unique number to every computer or device connected to the internet. Vint Cerf, "the father of the internet," is considered to have played a vital role in creating IP addresses while working for DARPA. The most important features of an IP address are:
• Unique.
• Globalized and standardized.
• Essential.
In simple terms, an IP address can be explained as the personal address of a device: distinctive and created specially for that device. No two devices on the internet can have the same IP address. For convenience, we use names to find things on the internet; if we need to look for Punjab University Chandigarh, we merely write www.puchd.in, but on the machine end this address is converted into a numerical address so that data can be sent to the right location. IP addresses are part of the network layer of the OSI model, whose primary function is to route data between the source and the destination.

IPV4 Version

The First Version IPv4 is the most widely used Internet Protocol. IPv4 addresses
are written in the form of a string which consists of 4 numbers with a 3 digit
section which lies between the ranges of 0-255. Each number separated by a dot.
Each section can be represented in binary form with each section having 8 bits.
An IP Address can be written in any form, i.e., binary, octal, and hexadecimal if
required. The IPv4 is of size 32-bit storage of maximum that means we can store
at max (232) addresses. IPv4 has around 4 billion unique IP addresses. Even out
of these addresses some addresses are kept reserved for exclusive use under the
category of Private Networks and Multicasting Addresses. A typical IPv4
address looks like as follows:
IPAddress:192.168.90.1
Binary notation: 11000000 . 10101000 . 01011010 . 00000001
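The same conversion can be reproduced with Python's standard ipaddress module, which treats the dotted address as the 32-bit number it really is:

# Convert a dotted-decimal IPv4 address to its binary and integer forms.
import ipaddress

addr = ipaddress.ip_address("192.168.90.1")
print(".".join(f"{octet:08b}" for octet in addr.packed))
# -> 11000000.10101000.01011010.00000001
print(int(addr))  # the same address as one 32-bit integer: 3232258561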

IPv4 Address Classes

IPv4 classes are a way of dividing addresses in IPv4-based routing. Separate IP classes are used for different types of networks. They can be summarized as follows (a small classifier sketch follows the table):

CLASSES Range

Class A 1.0.0.0 to 127.255.255.255

Class B 128.0.0.0 to 191.255.255.255

Class C 192.0.0.0 to 223.255.255.255

Class D 224.0.0.0 to 239.255.255.255

Class E 240.0.0.0 to 255.255.255.255
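As referenced above, here is a small sketch of a classifier that maps the first octet of an IPv4 address to its class, following the ranges in the table (simplified: it does not treat reserved addresses such as 0.x.x.x specially):

# Map the first octet of an IPv4 address to its class (A-E).
def ipv4_class(address: str) -> str:
    first_octet = int(address.split(".")[0])
    if first_octet <= 127:
        return "A"
    if first_octet <= 191:
        return "B"
    if first_octet <= 223:
        return "C"
    if first_octet <= 239:
        return "D"
    return "E"

print(ipv4_class("10.0.0.1"))      # A
print(ipv4_class("192.168.90.1"))  # C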

IPv6 Version

IPv6 addresses are written in hexadecimal so that they can fit more information into fewer digits. A typical IPv6 address is a long string of numbers compared to IPv4. IPv6 uses 128 binary bits to create a single address, expressed as 8 groups of hexadecimal numbers, with colons instead of dots separating the sections. If two colons appear side by side, all the sections between them contain only 0s. Let's see an example of an address with and without the double colon:

With double colon -> 2001:0db7::54
Without double colon -> 2001:0db7:0000:0000:0000:0000:0000:0054
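Python's ipaddress module can expand and compress the double-colon notation shown above:

# Expand and compress an IPv6 address.
import ipaddress

addr = ipaddress.ip_address("2001:0db7::54")
print(addr.compressed)  # 2001:db7::54
print(addr.exploded)    # 2001:0db7:0000:0000:0000:0000:0000:0054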

Subnetting

Subnetting refers to dividing a single large network into more than one smaller logical sub-network, called subnets. A subnet is related to the IP address in that it borrows bits from the host part of the address: the borrowed bits are used to specify the number of subnets required. Subnetting allows various sub-networks within a big network without obtaining a new network number from the ISP. Subnetting reduces network traffic and complexity. The concept of subnetting was introduced to ease the shortage of IP addresses. The subnetting process helps divide class A, class B, and class C network numbers into smaller parts, and a subnet can be further broken down into smaller networks known as sub-subnets.
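A short sketch of borrowing host bits in practice, again with Python's ipaddress module: borrowing two host bits turns one /24 network into four /26 subnets of 64 addresses each.

# Split one /24 network into four /26 subnets by borrowing two host bits.
import ipaddress

net = ipaddress.ip_network("192.168.0.0/24")
for subnet in net.subnets(prefixlen_diff=2):
    print(subnet, "-", subnet.num_addresses, "addresses")
# 192.168.0.0/26 - 64 addresses
# 192.168.0.64/26 - 64 addresses
# 192.168.0.128/26 - 64 addresses
# 192.168.0.192/26 - 64 addresses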

IP Address Assignment

An IP address is provided to us by our ISP, i.e., internet service provider. This address can be of two types:

1. Static address.
2. Dynamic address.

If we need to set up a web server or an email service, we need to use a static IP address, whereas for simply surfing the internet a dynamic IP address suffices.

Static IP Address

A static address, also known as a fixed address, means the system keeps the same address every time it connects to the internet. These addresses are excellent for activities such as web hosting, gaming, and voice over internet protocol. They are generally used by those on commercial leased lines or by public organizations that need the same IP address every time.

Dynamic IP Address

A dynamic internet protocol address, or dynamic IP address for short, is a temporary address assigned to our computing device when it connects to the network; the dynamic address is assigned automatically by our ISP. Every time our computer or router reboots, the ISP assigns a dynamic IP address to our networking device using the DHCP protocol. We can check whether we are using a dynamic or static IP address by checking the status of DHCP: if DHCP Enabled is set to Yes, we are using a dynamic address, and if it is set to No, we are using a static address. The dynamic address is assigned using the Dynamic Host Configuration Protocol (DHCP), which is part of the TCP/IP suite. An address assigned by DHCP has an expiration period, after which it can be given to some other device if required, helping devices share the limited address space on the network.

Public IP Address

A public IP address is a unique address given to a computer attached to the internet; no two machines on the internet can have the same public IP address. Using these addresses, machines can exchange information and communicate with one another over the network. The user has no control over the public IP address, as it is provided by the ISP whenever the machine connects to the internet. A public address can be of either nature, static or dynamic, depending on the needs and requirements of the user; most users have a dynamic public IP address.
Private IP Address

The organization that distributes IP addresses (IANA) has kept ranges of addresses reserved as private addresses for private networks. Private addresses are the addresses used by private networks, such as home or office networks. The logic is that these addresses are used within a single administration and not on the global network, i.e., the internet. The ranges of addresses set aside for private networks are as follows:

• 192.168.0.0 – 192.168.255.255 (total 65,536 IP addresses)


• 172.16.0.0 – 172.31.255.255 (total 1,048,576 IP addresses)
• 10.0.0.0 – 10.255.255.255 (total 16,777,216 IP addresses)

A device within a private network cannot connect to the internet directly. If a computer within a private network can reach the internet or another network, it means that the computer has both a public IP address and a private IP address: the private IP address to communicate within the network and the public IP address to communicate over the internet. Communication with another private network is achieved through a router performing Network Address Translation (NAT). We can see our computer's private IP address by running the ipconfig command in the Windows command prompt and reading the IPv4 Address field. Private IP addresses are often of a static nature.
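Python's ipaddress module can also tell us whether an address falls inside one of the private ranges listed above; a minimal sketch:

import ipaddress

for ip in ("192.168.1.10", "10.0.0.5", "8.8.8.8"):
    print(ip, ipaddress.ip_address(ip).is_private)
# 192.168.1.10 True
# 10.0.0.5 True
# 8.8.8.8 False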

IP Address Name Resolution: Domain Name vs. IP Address

An IP address is a logical address that is used to find a particular host on the network. This IP address is generally in the form of numbers, and in IPv6 we use complex hexadecimal notation. To connect to a network service, or even to a local network, we need an IP address every time, but remembering long, tedious numbers is not an easy task. It is human nature that we tend to remember names more easily than numbers, which is why we use domain names, which act as an alias. A domain name is a user-friendly textual address that can be converted into its respective IP address by a Domain Name System (DNS) server. The best analogy is a phone book, where the name of the person is the domain name and the phone number is the IP address.
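As a quick illustration, Python's standard socket module asks the system's DNS resolver to translate a domain name into an IP address (the domain here is just an example):

import socket

print(socket.gethostbyname("example.com"))  # e.g. 93.184.216.34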
CHAPTER 6:VM’S

What is a virtual machine?

A Virtual Machine (VM) is a compute resource that uses software instead of a physical computer to run programs and deploy apps. One or more virtual "guest" machines run on a physical "host" machine. Each virtual machine runs its own operating system and functions separately from the other VMs, even when they are all running on the same host. This means that, for example, a macOS virtual machine can run on a physical PC.

A virtual machine can also be defined as an emulation of a computer system. A virtual machine is based on computer architectures and provides the functionality of a physical computer. The implementation of a VM may involve specialized software, hardware, or a combination of both.

History of Virtual Machine


 System virtual machines grew out of time-sharing, notably as implemented in CTSS (the Compatible Time-Sharing System). Time-sharing permitted more than one user to use the computer concurrently: each program appeared to have full access to the machine, but only one program ran at a time. This evolved into virtual machines, notably through IBM's research systems. The M44/44X used partial virtualization, while SIMMON and CP-40 used full virtualization; these were early examples of hypervisors.
 The first widely available virtual machine architecture was CP-67/CMS. An important distinction was between using many virtual machines on a single host for time-sharing, as in the M44/44X and CP-40.
 Emulators date back to the IBM System/360 in 1963, with hardware emulation of previous systems for compatibility.
 Originally, process virtual machines developed as abstract environments for an intermediate language used as a program's intermediate representation by a compiler. An early example, from 1966, is the O-code machine, a VM that runs object code (O-code) emitted by the front end of the BCPL compiler. This abstraction permitted the compiler to be ported to new architectures easily.
 The Euler language applied a similar design, with an intermediate language called P (portable). This approach was popularized around 1970 by Pascal, notably in the Pascal-P system and the Pascal-S compiler, where the intermediate language became known as p-code and the resulting machine as the p-code machine.
 Pascal's p-code has been influential, and VMs in this sense have often been known generally as p-code machines. Pascal p-code was also run directly by an interpreter implementing the VM.
 Another example is SNOBOL (1967), which was specified in SIL (SNOBOL Implementation Language), an assembly language for a VM. It was then targeted to physical machines by transpiling to native assembler via a macro assembler.
 Process VMs were a popular approach to implementing early microcomputer software, including Tiny BASIC and adventure games, from implementations such as Pyramid 2000 to general-purpose engines such as Infocom's z-machine.
 Significant advances were demonstrated in the Smalltalk-80 implementation (specifically the Deutsch/Schiffmann implementation), which pushed forward just-in-time (JIT) compilation as an implementation approach for process VMs. Notable later Smalltalk virtual machines were VisualWorks, the Squeak Virtual Machine, and Strongtalk.
 A related language that generated much VM innovation was Self, which pioneered adaptive optimization and generational garbage collection. Commercially, these techniques proved successful in the HotSpot Java virtual machine in 1999.
 Other innovations include the register-based VM, which better matches the underlying hardware, as opposed to a stack-based VM, which is a closer match for typical programming languages; it was pioneered in 1995 by the Dis VM for the Limbo language. OpenJ9 is an alternative to the HotSpot Java virtual machine inside OpenJDK; it is an open-source project claiming faster startup and lower resource consumption compared to HotSpot.

Types of Virtual Machine

o System virtual machines: These are also termed full virtualization VMs. They provide a substitute for a real machine, offering the functionality required to execute an entire operating system (OS). A hypervisor uses native execution to share and manage the hardware, permitting multiple environments, isolated from one another, to exist on the same physical machine. Modern hypervisors primarily use hardware-assisted virtualization, with virtualization-specific hardware on the host CPUs.
o Process virtual machines: These virtual machines are designed to execute computer programs within a platform-independent environment.

Some VMs, such as QEMU, are designed to emulate different architectures and permit the execution of software applications and operating systems written for another CPU or architecture. Operating-system-level virtualization permits a computer's resources to be partitioned by the kernel.

What are System Virtual Machines?

Originally, a virtual machine was described by Popek and Goldberg as "an efficient, isolated duplicate of a real computer machine." Current use also includes virtual machines that have no direct correspondence to any real hardware. Generally, the physical, real-world hardware executing the virtual machine is termed the "host," and the virtual machine emulated on that machine is termed the "guest."

Working of System Virtual Machines

The host can emulate several guests, each of which can emulate a different operating system and hardware platform.

The desire to run multiple operating systems was the initial motivation for virtual machines: it allows time-sharing among several single-tasking operating systems. A system VM can be considered a generalization of the concept of virtual memory that historically preceded it.

IBM's CP/CMS, the first system to permit full virtualization, implemented time-sharing by providing each user with a single-user operating system. The system VM let users write privileged instructions in their code, which has advantages, such as allowing input/output devices not permitted by the standard system.

As technology extends virtual machines to further virtualization purposes, new systems of memory overcommitment may be used to manage memory sharing among several VMs on a single computer. It is possible to share memory pages that have identical contents among multiple VMs running on the same physical machine, mapping them to the same physical page, by a technique called kernel same-page merging (KSM).

This is especially useful for read-only pages, such as those holding code segments, which is the case for multiple VMs running the same or similar software, software libraries, web servers, middleware components, etc. A guest OS does not need to be compliant with the host hardware, which makes it feasible to run different operating systems on the same computer (such as Windows, Linux, or a prior version of an operating system) to support future software.

Uses of System Virtual Machines

Virtual machines can be used to support isolated guest operating systems, which is popular in embedded systems. A common use is to run a real-time operating system simultaneously with a preferred, more complex operating system such as Windows or Linux.

Another use is for unproven, novel software that is still under development, so that it runs inside a sandbox. Virtual machines also have other advantages for operating system development, including faster reboots and improved debugging access. Multiple virtual machines, each running its own guest OS, are frequently used for server consolidation.

What is Process Virtual Machines?

A process virtual machine, sometimes known as an application virtual machine or MRE (Managed Runtime Environment), runs as a normal application inside a host operating system and supports a single process. It is created when that process starts and destroyed when it exits.

The purpose of a process VM is to provide a platform-independent programming environment: it abstracts away the details of the underlying hardware or operating system and allows a program to execute in the same way on any platform.

A process virtual machine provides a high-level abstraction, that of a high-level programming language. Process virtual machines can be implemented using an interpreter; performance comparable to a compiled programming language can be attained by using just-in-time (JIT) compilation, as sketched below.
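To illustrate the idea, here is a minimal sketch of a stack-based process VM written in Python; the instruction set is entirely hypothetical and exists only to show how an interpreter dispatches bytecode:

# Hypothetical bytecode: opcodes for a tiny stack machine
PUSH, ADD, MUL, PRINT = range(4)

def run(program):
    stack, pc = [], 0
    while pc < len(program):
        op = program[pc]
        if op == PUSH:           # push the literal that follows the opcode
            pc += 1
            stack.append(program[pc])
        elif op == ADD:          # pop two values, push their sum
            b, a = stack.pop(), stack.pop()
            stack.append(a + b)
        elif op == MUL:          # pop two values, push their product
            b, a = stack.pop(), stack.pop()
            stack.append(a * b)
        elif op == PRINT:        # pop and display the top of the stack
            print(stack.pop())
        pc += 1

run([PUSH, 2, PUSH, 3, ADD, PUSH, 4, MUL, PRINT])  # (2 + 3) * 4 -> 20

The same bytecode runs unchanged wherever the interpreter runs, which is exactly the platform independence described above.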

Process virtual machines became popular with the Java programming language, which is implemented using the Java virtual machine. Other examples include the .NET Framework, whose applications run on a virtual machine called the Common Language Runtime (CLR), and the Parrot virtual machine. All of them serve as an abstraction layer for a computer language.

A special case of process virtual machines are systems that abstract over the communication mechanisms of a (potentially heterogeneous) computer cluster. Such a VM does not consist of a single process, but of one process per physical machine in the cluster.

These systems are designed to ease the task of programming concurrent applications by letting the programmer concentrate on algorithms rather than on the communication mechanisms provided by the interconnect and the OS.

They do not hide the fact that communication takes place, and as such do not attempt to present the cluster as a single machine.

Unlike other process virtual machines, these systems do not provide a specific programming language; instead, they are embedded in an existing language. Such a system typically provides bindings for several languages (like C and FORTRAN).

Examples are PVM (Parallel Virtual Machine) and MPI (Message Passing Interface). They are not strictly virtual machines, because applications running on top still have access to all OS services and are therefore not confined to the system model.

Full Virtualization

In full virtualization, the virtual machine simulates enough hardware to allow a guest operating system to run in isolation. The approach was pioneered in 1966 with the IBM CP-40 and CP-67, predecessors of the VM family.

Examples outside the mainframe field include Parallels Workstation, Parallels Desktop for Mac, VirtualBox, Virtual Iron, Oracle VM, Virtual PC, Virtual Server, Hyper-V, VMware Workstation, VMware Server (formerly GSX Server), VMware ESXi, QEMU, Adeos, Mac-on-Linux, Win4BSD, Win4Lin Pro, and Egenera vBlade technology.

Hardware-assisted virtualization

In hardware-assisted virtualization, the hardware provides architectural support that facilitates building a virtual machine monitor and allows guest operating systems to run in isolation.

This type of virtualization was first introduced on the IBM System/370 in 1972, for use with VM/370, the first virtual machine operating system offered by IBM as an official product.

Intel and AMD provided additional hardware to support virtualization in 2005 and 2006, respectively. In 2005, Sun Microsystems (later acquired by Oracle Corporation) included similar features in its UltraSPARC T-Series processors. Examples of virtualization platforms adapted to such hardware include KVM, VMware Workstation, VMware Fusion, Hyper-V, Windows Virtual PC, Xen, Parallels Desktop for Mac, Oracle VM Server for SPARC, VirtualBox, and Parallels Workstation.

In 2006, first-generation 32-bit and 64-bit x86 hardware virtualization support was found to rarely offer performance advantages over software virtualization.

Operating-system-level virtualization

In operating-system-level virtualization, a physical server is virtualized at the operating system level, allowing multiple isolated and secure virtualized servers to run on a single physical server.

The guest operating system environments share the same running instance of the operating system as the host system; that is, the same OS kernel is used to implement the guest environments. Applications running in a given guest environment view it as a stand-alone system.

The pioneering implementation was FreeBSD jails. Other examples include Docker, Solaris Containers, OpenVZ, Linux-VServer, LXC, AIX Workload Partitions, Parallels Virtuozzo Containers, and iCore Virtual Accounts.

Full virtualization is possible only with the right combination of hardware and software elements. For example, it was not possible with most of IBM's System/360 series, nor with the early IBM System/370 system. In 1972, IBM added virtual memory hardware to the System/370 series; this plays a role comparable to the extra privilege level that Intel VT-x provides, giving the hypervisor a higher privilege level so that it can properly handle virtual machines.

Challenges for full virtualization

Full virtualization's primary challenge is the interception and simulation of privileged operations, such as I/O instructions. The effects of every operation performed within a given VM must be kept within that VM: virtual operations cannot be allowed to alter the state of any other VM, of the control program, or of the hardware.

Some machine instructions can be executed directly by the hardware, since their effects are entirely contained in the elements managed by the control program, such as memory locations and arithmetic registers.

Other instructions, however, would "pierce" the VM and cannot be allowed to execute directly; they must instead be trapped and simulated. Such instructions either access or affect state information that is outside the VM.
Full virtualization has proven highly successful for:

o separating users from one another (and from the control program);
o sharing a single computer system among multiple users;
o emulating new hardware to achieve improved reliability, security, and productivity.

Advantages of VM
o A virtual machine provides software compatibility: all software written for the virtualized host will also run on the VM.
o It offers isolation between different processors and operating systems; the OS running in one virtual machine cannot alter the host or any other virtual machine.
o A virtual machine provides encapsulation: the software present on the VM can be controlled and modified.
o Virtual machines offer several conveniences, such as easily adding a new operating system. An error in one operating system will not affect any other operating system on the host. VMs also allow transferring files between VMs and avoid dual booting on a multi-OS host.
o VMs provide better software management, because a VM can run a complete software stack, such as a legacy operating system, alongside the host machine's own stack.
o Hardware resources can be allocated to software stacks independently, and a VM can be migrated to a different computer to balance the load.

Azure Virtual Machine Scale Set & Auto Scaling

Virtual Machine scale sets

Scale sets are Azure compute resources that can be used to deploy and manage a set of identical VMs. They are designed to support virtual machine auto-scaling. VM scale sets can be created using the Azure portal, JSON templates, and REST APIs. To increase or decrease the number of VMs in the scale set, we can change the capacity property and redeploy the template. A virtual machine scale set is created inside a VNET, and the individual VMs in the scale set are not allocated public IP addresses.
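As a sketch of the REST approach, the snippet below patches a scale set's capacity property through the Azure Resource Manager endpoint in Python. The subscription, resource group, scale-set name, api-version, and bearer token are all placeholders you would substitute, so treat this as illustrative rather than exact:

import requests

SUB, RG, VMSS = "<subscription-id>", "<resource-group>", "<scale-set-name>"
url = (f"https://management.azure.com/subscriptions/{SUB}"
       f"/resourceGroups/{RG}/providers/Microsoft.Compute"
       f"/virtualMachineScaleSets/{VMSS}")
resp = requests.patch(
    url,
    params={"api-version": "2023-03-01"},         # use a version your subscription supports
    headers={"Authorization": "Bearer <token>"},  # an Azure AD access token
    json={"sku": {"capacity": 5}},                # desired number of VM instances
)
resp.raise_for_status()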

Any virtual machine that we deploy as part of a virtual machine scale set is not allocated a public IP address. Instead, the scale set typically has a front-end load balancer, which manages the load and has a public IP address; we can use that public IP address to connect to the underlying virtual machines in the scale set.

Virtual Machine Auto Scaling

Autoscale enables us to dynamically add or remove resources based on the load on our services. We specify the maximum and minimum number of instances to run, and VMs are added or removed within that range based on a set of rules.

The first step in auto-scaling is to select a metric or a time, so it can be metric-based auto-scaling or schedule-based auto-scaling. The metric can be CPU utilization, etc.; the schedule can be, for example, from 6 o'clock in the evening until 6 o'clock in the morning, when we want to reduce the number of servers. If we want to react to load, we use metric-based auto-scaling; if we know the pattern in advance, we use schedule-based auto-scaling.

The next step in auto-scaling is to define a rule with a condition. For example: if CPU utilization is higher than 80 percent, then spin up a new instance. Once the condition is met, we can carry out some actions; the actions can be adding or removing virtual machines, sending an email to a system administrator, etc. We need to select whether it is time-based or metric-based auto-scaling, choose the metric, and define the rule and the actions to be triggered when the condition in that rule is satisfied.
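The decision logic of such a rule can be sketched in a few lines of Python; the thresholds and instance limits here are illustrative assumptions, not Azure's actual implementation (which also applies cooldown periods between actions):

def evaluate_rule(cpu_samples, current, minimum=1, maximum=10,
                  scale_out_above=80.0, scale_in_below=40.0):
    """Return the new instance count for one evaluation window."""
    avg = sum(cpu_samples) / len(cpu_samples)
    if avg > scale_out_above and current < maximum:
        return current + 1   # condition met: spin up a new instance
    if avg < scale_in_below and current > minimum:
        return current - 1   # load is low: remove an instance
    return current           # otherwise leave the scale set unchanged

print(evaluate_rule([85.0, 90.0, 88.0], current=2))  # -> 3 (scale out)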

Horizontal and Vertical scaling

o Horizontal scaling: increasing or decreasing the number of VM instances. This is sometimes called scaling out or scaling in.
o Vertical scaling: keeping the same number of VMs but making each VM more or less powerful, where power is measured in memory, CPU speed, disk space, etc. It is limited by the availability of larger hardware within the same region and usually requires the VM to stop and restart. This is sometimes called scaling up or scaling down. Below are the steps to achieve vertical scaling:
i. Set up an Azure Automation account.
ii. Import the Azure Automation vertical scale runbooks into our subscription.
iii. Add a webhook to the runbook.
iv. Add an alert to our virtual machine.
o We can also scale web apps and cloud services.

Metrics for Autoscaling

o Compute metrics: the available metrics depend on the installed operating system. For Windows, we have processor, memory, and logical disk metrics; for Linux, we have processor, memory, physical disk, and network interface metrics.
o Web Apps metrics: CPU and memory percentage, disk and HTTP queue length, and bytes received/sent.
o Storage/Service Bus metrics: we can scale by storage queue length, which is the number of messages in the storage queue. Storage queue length is a special metric: the threshold applied is the number of messages per instance.

Tools to implement Auto Scale

o We can use the Azure portal to create a scale set and enable auto-scaling
based on a metric.
o We can provision and deploy VM scale sets using Resource Manager
Templates.
o ARM templates can be deployed using Azure CLI, PowerShell, REST,
and also directly from Visual Studio.

Scaling Azure Virtual Machine

Step 1: Go to Azure Marketplace and type in the Virtual Machine scale set. Then
click on Create.
Step 2: We need to give a name to this scale set and fill in all the other required details, as shown in the figure below. Then click on create.
Step 3: Now, your Virtual Machine scale set is successfully deployed. To view
VMSS, you can go to resources.
Step 4: Now, click on scaling. Provide an auto-scale setting name. And select the
resource group.

Step 5: Scroll down, and you will find two ways to auto-scale. First, click on "Add a rule" for auto-scaling based on a metric. We are going to scale our virtual machine if the average CPU utilization is above 70 percent.

Step 6: Now, select the time- and date-based scaling, where you can scale when you know you will need more capacity. The last option is Notify, where you get notified whenever auto-scaling is triggered.
CHAPTER 7:CLOUD STORAGES
Azure Storage Account

An Azure storage account is a secure account which provides you access to the services in Azure Storage. The storage account is like an administrative container, and within it we can have several services, such as blobs, files, queues, tables, disks, etc. When we create a storage account in Azure, we get a unique namespace for our storage resources; that unique namespace forms part of the URL. The storage account name must be unique across all existing storage account names in Azure.

Types of Storage Accounts

General-purpose V2
  Supported services: Blob, File, Queue, Table, and Disk
  Supported performance tiers: Standard, Premium
  Supported access tiers: Hot, Cool, Archive
  Replication options: LRS, ZRS, GRS, RA-GRS
  Deployment model: Resource Manager
  Encryption: Encrypted

General-purpose V1
  Supported services: Blob, File, Queue, Table, and Disk
  Supported performance tiers: Standard, Premium
  Supported access tiers: N/A
  Replication options: LRS, GRS, RA-GRS
  Deployment model: Resource Manager, Classic
  Encryption: Encrypted

Blob storage
  Supported services: Blob (block blobs and append blobs only)
  Supported performance tiers: Standard
  Supported access tiers: Hot, Cool, Archive
  Replication options: LRS, GRS, RA-GRS
  Deployment model: Resource Manager
  Encryption: Encrypted

Types of performance tiers

Standard performance: this tier is backed by magnetic drives and provides a low cost per GB. It is best for bulk storage and for data that is accessed infrequently.

Premium performance: this tier is backed by solid-state drives and offers consistent, low-latency performance. It can only be used with Azure virtual machine disks and is best for I/O-intensive workloads, such as databases.

(Every virtual machine disk is stored in a storage account. So if we are attaching a disk, we would go for premium performance; but if we are using the storage account specifically to store blobs, we would go for standard performance.)

Access Tiers

There are four types of access tiers available:

Premium Storage (preview): It provides high-performance hardware for data that


is accessed frequently.

Hot storage: It is optimized for storing data that is accessed frequently.

Cool Storage: It is optimized for storing data that is infrequently accessed and
stored for at least 30 days.
Archive Storage: It is optimized for storing files that are rarely accessed and
stored for a minimum of 180 days with flexible latency needs (on the order of
hours).

Advantage of Access Tiers:

When a user uploads a document into storage, the document will initially be accessed frequently; during that time, we keep the document in the hot storage tier.

After some time, once the work on the document is completed, hardly anybody accesses it, and it becomes an infrequently accessed document. Then we can move the document from hot storage to cool storage to save cost, because cool storage is billed more by the number of times the document is accessed. Once the document is mature, i.e., once we have stopped working on it, we rarely refer to it; in that case, we keep it in cool storage.

If we don't expect the document to be referred to for six months or a year, we move it to archive storage.

So hot storage is costlier than cool storage in terms of storage, but cool storage is more expensive in terms of access. Archive storage is used for archiving documents that are hardly ever accessed.
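The tiering policy described above can be summarized in a small Python sketch; the 30- and 180-day cut-offs mirror the minimum storage durations mentioned earlier, but the rule itself is an illustrative assumption:

def choose_access_tier(days_since_last_access):
    """Pick a blob access tier from how recently the document was used."""
    if days_since_last_access < 30:
        return "Hot"       # still being worked on: frequent access
    if days_since_last_access < 180:
        return "Cool"      # finished work: infrequent access
    return "Archive"       # dormant: rare access, flexible latency

for days in (3, 90, 400):
    print(days, "->", choose_access_tier(days))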

Azure Storage Replication

Azure Storage replication is used for the durability of data. It copies our data so that it stays protected from planned and unplanned events, ranging from transient hardware failures, network or power outages, and massive natural disasters to man-made incidents. Azure creates copies of our data and stores them in different places, based on the replication strategy:

LRS (Locally Redundant Storage): with locally redundant storage, the data is stored within a single data center. If the data center or the region goes down, the data will be lost.

ZRS (Zone-Redundant Storage): the data is replicated across data centers within the region. In that case, the data remains available even if one node, or even an entire data center, goes down, because the data has already been copied to another data center within the region. However, if the region itself is gone, the data will not be accessible.

GRS (Geo-Redundant Storage): to protect our data against region-wide failures, we can go for geo-redundant storage. In this case, the data is replicated to the paired region within the geography. If we also want read-only access to the data copied to the other region, we can go for RA-GRS (Read-Access Geo-Redundant Storage). Each option provides a different level of durability.

Storage account endpoints

Whenever we create a storage account, we get an endpoint to access the data within it. Each object that we store in Azure Storage has an address that includes our unique account name; the combination of the account name and the service domain forms the endpoint for the storage account.

For example, if your general-purpose account name is mystorageaccount, then the default endpoints for the different services look like:

Azure Blob storage: https://mystorageaccount.blob.core.windows.net
Azure Table storage: https://mystorageaccount.table.core.windows.net
Azure Queue storage: https://mystorageaccount.queue.core.windows.net
Azure Files: https://mystorageaccount.file.core.windows.net

If we want to map our own custom domain to these storage service endpoints, we can still do that.
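Since the endpoints follow a fixed pattern, they can be derived from the account name alone; a tiny sketch (account name taken from the example above):

account = "mystorageaccount"
for service in ("blob", "table", "queue", "file"):
    print(f"https://{account}.{service}.core.windows.net")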

Creating and configuring Azure Storage Account

Let's see how to create a storage account in Azure portal and discuss some of the
important settings associated with the storage account:

Step 1: Login to your Azure portal home screen and click on "Create a resource".
Then type-in storage account in the search box and then click on "Storage
account".
Step 2: Click on create, you will be redirected to Create a storage account
window.
Step 3: First, you need to select the subscription (you must do this whenever you create any resource in Azure), and secondly, you need to choose a resource group. In our case, the subscription is "Free Trial".

Use your existing resource group or create a new one. Here we are going to create
a new resource group.
Step 4: Then, fill the storage account name, and it should be all lowercase and
should be unique across all over Azure. Then select your location, performance
tier, Account kind, Replication strategy, Access Tier, and then click on next.
Step 5: You are now on the Networking window. Here, you need to select the
connectivity method, then click next.

Step 6: You are now on the Advanced window, where you need to enable or disable security, Azure Files, data protection, and Data Lake Storage settings, and then click next.
Step 7: Now, you are redirected to the Tags window, where you can provide tags
to classify your resources into specific buckets. Put the name and value of the tag
and click next.

Step 8: This is the final step, where validation is passed and you can review all the values that you have provided. Finally, click on create.

Now our storage account has been successfully created, and a window will appear with the message "Your deployment is complete".

Click on "Go to resource", and the following window will appear. You can see all the values that you selected for the different configuration settings when creating the storage account.

Configuration settings and key functionality of the storage account

Activity Log: We can view an activity log for every resource in Azure. It
provides the record of activities that have been carried out on that particular
resource. It is common for all the resources in Azure.
Access Control: Here, we can delegate access for the storage account to different
users.

Tags: We can assign new tags or modify the existing tags here.

Diagnose and solve problems: We can diagnose and solve problems here in case we have any issues with the storage account.

Events: We can subscribe to some of the events that happen within this storage account; the subscriber can be, for example, a Logic App or a Function. For instance, when a blob is created in the storage account, that event can trigger a Logic App with some metadata of that blob.

Storage Explorer: This is where you can explore the data residing in your storage account in terms of blobs, files, queues, and tables. There is also a desktop version of Storage Explorer that you can download and connect with; this one is the web version of it.

Access Keys: We can use these to develop applications that will access the data within the storage account. However, we might not want to hand out the access keys directly, because an access key gives blanket access to the whole account. Instead, we may wish to create SAS keys: we can generate specific SAS tokens for a limited period and with limited access, and provide that SAS signature to our developers. It is recommended not to give the access keys to anyone other than the owner of the storage account.

CORS (Cross-Origin Resource Sharing): Here, we can mention the domain


name and what operations are allowed.

Configuration: If we want to change any configuration values, there are certain things that we can't change once the storage account is created, for example, the performance type. But we can change the access tier, whether secure transfer is required or not, the replication strategy, etc.
Encryption: Here, we can specify our own key if we want to encrypt the data
within the storage account. We need to click on the check box, and we can select
a key vault URI where the key is located.

(SAS) Shared Access Signature: Here, we can generate SAS tokens with limited access and for a limited period, and provide them to developers who are building applications on top of the storage account. A SAS is used to access data stored in the storage account.
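For instance, with the azure-storage-blob Python package an account-level SAS can be generated along these lines; the account name and key are placeholders, and the permissions you grant should match your own policy, so this is a hedged sketch rather than a prescription:

from datetime import datetime, timedelta
from azure.storage.blob import (
    AccountSasPermissions, ResourceTypes, generate_account_sas,
)

sas_token = generate_account_sas(
    account_name="mystorageaccount",                # placeholder account
    account_key="<account-key>",                    # keep the key itself secret
    resource_types=ResourceTypes(object=True),      # objects (blobs) only
    permission=AccountSasPermissions(read=True),    # read-only access
    expiry=datetime.utcnow() + timedelta(hours=1),  # limited period
)
print(sas_token)  # hand this token, not the account key, to developers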

Firewalls and virtual networks: Here, we can configure networking so that only connections from certain virtual networks or certain IP address ranges are allowed to reach this storage account.

We can also configure advanced threat protection, and we can configure the storage account to host a static website.

Properties: Here, we can see the properties related to the storage account.

Locks: Here, we can apply locks on the storage account.

So these are the different settings we can configure, and the rest of the settings
are related to different services within the storage account - for example, blob,
file, table, and queue.
CHAPTER 8: CLOUD APP SERVICE
Azure App Services

The most fundamental building block of Azure App Service is the App Service
plan or App Service environment.

There are two types of hosting environments within App Service: the App Service plan and the App Service Environment. The App Service Environment is a more sophisticated version of the App Service plan and comes with many more features. Within these, we can host several kinds of apps: web applications, web jobs, batches, APIs, and mobile back-end services that can be consumed from our mobile front end.

Other services are closely related to the apps within the App Service plan. These related services include the Notification Hub, which we can use to push notifications to mobile devices, and Mobile Engagement, which we can use to carry out mobile analytics.

Apart from these related services, there is one more service that is very important when it comes to APIs: API Management. API Management can act as a wrapper around our API apps when we expose those APIs to the outside world. It comes with features such as throttling and security, and it is very useful if we want to commoditize our APIs and sell them to the outside world.

There are two ways to enable communication between apps in the App Service plan and apps installed on virtual machines within a virtual network. One way is to establish a point-to-site VPN between the apps in the App Service plan and the virtual network, through which the apps can communicate with each other. The second way is to use an App Service Environment: because it is deployed into a virtual network by itself, the apps within that App Service Environment can seamlessly communicate with the apps installed on virtual machines within the virtual network.

Finally, there are two more important concerns: security and monitoring, which are used to secure and control the App Service environment.

App Service plan

An App Service plan denotes a set of features and capacity that can be shared across multiple apps in the same subscription and geographical region. One or more apps can be configured to run on the same computing resources.

Each App Service plan defines:

o Region (West US, East US, etc.)


o Number of VM instances
o Size of VM instances (Small, Medium, Large)
o Pricing tier
o Shared compute: Free and Shared, the two base tiers, run an app on the same Azure VM as other App Service apps, including apps of other customers.
o Dedicated compute: the Basic, Standard, Premium, and PremiumV2 tiers run apps on dedicated Azure VMs.
o Isolated: this tier runs dedicated Azure VMs on dedicated Azure virtual networks, which provides network isolation, on top of compute isolation, to your apps.
o Consumption: this tier is only available to function apps. It scales the functions dynamically, depending on the workload.

Environment features
o Development frameworks: App Service supports a variety of development frameworks, including ASP.NET, classic ASP, Node.js, PHP, and Python, all of which run as extensions within IIS.
o File access
o Local drives: an operating system drive (the D:\ drive), an application drive, and a user drive (the C:\ drive).
o Network drives: each customer's subscription has a reserved directory structure on a specific UNC share within a data center.
o Network access: application code can use TCP/IP- and UDP-based protocols to make outbound network connections to Internet endpoints that expose external services.

Web apps Overview

Azure App Service Web Apps is a service for hosting web applications. The key features of App Service Web Apps are:

o Multiple languages and frameworks
o DevOps optimization
o Security and compliance
o Application templates
o Visual Studio integration
Creating App Service Plan in Azure Portal

Step 1: Click on create a new resource and search for App Service Plan to
create it.

Step 2: Fill in all the required details and select the SKU size, as shown in the figure below. Then click on create.
Step 3: Your app service plan will be created. You can now explore and modify
it as per your requirement.

Azure Web App

Azure Web App service lets us quickly build, deploy, and scale enterprise-grade
web, mobile, and API apps running on any platform. It helps us to meet rigorous
performance, scalability, security, and compliance requirements while using a
fully managed platform to perform infrastructure maintenance.

Creating a Web App and deploying an application into Azure web App from
visual studio

Step 1: Click on create a resource and type in the web app. After that, click on
the web app and then click on Create.
Step 2: You are now on the web app creation page. Fill in all the required details, then click on review + create.

Step 3: Click on create, then you will be redirected to the following page.
