
Cloud Computing

CS-1
Faculty Name: Prof. Pradnya Kashikar
BITS Pilani pradnyak@wilp.bits-Pilani.ac.in
IMP Note to Self

IMP Note to Students
➢ It is important to know that merely logging in to the session does not
guarantee attendance.
➢ Once you join the session, stay until the end to be considered
present in the class.
➢ IMPORTANTLY, you need to make the class more interactive by
responding to the Professor's queries during the session.
➢ Whenever the Professor calls your number / name, you need to
respond; otherwise you will be marked ABSENT

Introduction to Cloud Computing, services
and deployment models

• Agenda
1. Introduction to Cloud Computing – Origins and
Motivation
2. 3-4-5 rule of Cloud Computing
3. Types of Clouds and Services
4. Cloud Infrastructure and Deployment

BITS Pilani, Deemed to be University under Section 3 of UGC Act, 1956


Motivation

Enabling technologies:
• Powerful multi-core processors
• General purpose graphic processors
• Superior software methodologies
• Virtualization leveraging the powerful hardware
• Wider bandwidth for communication
• Proliferation of devices
• Explosion of domain applications

Drivers:
1. Web Scale Problems
2. Web 2.0 and Social Networking
3. Information Explosion
4. Mobile Web
Evolution of Web
Explosive growth in applications:
• biomedical informatics, space exploration, business analytics,
• web 2.0 social networking: YouTube, Facebook
Extreme scale content generation: e-science and e-business data deluge
Extraordinary rate of digital content consumption: digital gluttony:
• Apple iPhone, iPad, Amazon Kindle, Android, Windows Phone
Exponential growth in compute capabilities:
• multi-core, storage, bandwidth, virtual machines (virtualization)
Very short cycle of obsolescence in technologies:
• Windows 8, Ubuntu, Mac; Java versions; C → C#; Python
Newer architectures: web services, persistence models, distributed file
systems/repositories (Google, Hadoop), multi-core, wireless and mobile
• Diverse knowledge and skill levels of the workforce
Drivers for the new Platform

http://blogs.technet.com/b/yungchou/archive/2011/03/03/chou-s-theories-of-cloud-computing-the-5-3-2-principle.aspx



Cloud Summary

• Shared pool of configurable computing resources
• On-demand network access
• Provisioned by the Service Provider



Cloud Summary…

Cloud computing is an umbrella term used to refer to Internet-based
development and services.

A number of characteristics define cloud data, applications, services
and infrastructure:
Remotely hosted: Services or data are hosted on remote infrastructure.
Ubiquitous: Services or data are available from anywhere.
Commodity model: The result is a utility computing model similar to
that of traditional utilities, like gas and electricity - you pay
for what you use!
Cloud Definitions
• Definition from NIST (National Institute of Standards and Technology)
▪ Cloud computing is a model for enabling convenient, on-demand
network access to a shared pool of configurable computing
resources (e.g., networks, servers, storage, applications, and
services) that can be rapidly provisioned and released with
minimal management effort or service provider interaction.
▪ This cloud model promotes availability and is composed of five
essential characteristics, three service models, and four
deployment models.
3-4-5 rule of Cloud Computing

NIST specifies the 3-4-5 rule of Cloud Computing:

3 cloud service models (service types) for any cloud platform
4 deployment models
5 essential characteristics of cloud computing infrastructure:
▪ On-demand self-service
▪ Broad network access
▪ Resource pooling
▪ Rapid elasticity
▪ Measured service

Characteristics of Cloud Computing
Cloud Definitions
• Definition from Wikipedia
▪ Cloud computing is Internet-based computing, whereby shared
resources, software, and information are provided to computers
and other devices on demand, like the electricity grid.
▪ Cloud computing is a style of computing in which dynamically
scalable and often virtualized resources are provided as a
service over the Internet.
Cloud Definitions
• Definition from Whatis.com
▪ The name cloud computing was inspired by the cloud symbol that's
often used to represent the Internet in flowcharts and diagrams.
Cloud computing is a general term for anything that involves
delivering hosted services over the Internet.
Cloud Definitions
• Definition from Berkeley
▪ Cloud Computing refers to both the applications delivered as
services over the Internet and the hardware and systems software
in the datacenters that provide those services.
▪ The services themselves have long been referred to as Software as a
Service (SaaS), so we use that term. The datacenter hardware and
software is what we will call a Cloud.
▪ When a Cloud is made available in a pay-as-you-go manner to the
public, the service being sold is Utility Computing.
Cloud Definitions
• Definition from Buyya
▪ A Cloud is a type of parallel and distributed system consisting of a
collection of interconnected and virtualized computers that are
dynamically provisioned and presented as one or more unified
computing resources based on service-level agreements
established through negotiation between the service provider and
consumers.
Properties and characteristics

WHAT IS CLOUD COMPUTING ?


In Our Humble Opinion
• Cloud computing is a paradigm of computing, a new way of
thinking about the IT industry, not any specific technology.
▪ Central ideas
• Utility Computing
• SOA - Service Oriented Architecture
• SLA - Service Level Agreement
▪ Properties and characteristics
• High scalability and elasticity
• High availability and reliability
• High manageability and interoperability
• High accessibility and portability
• High performance and optimization
▪ Enabling techniques
• Hardware virtualization
• Parallelized and distributed computing
• Web service
Properties and Characteristics
(Diagram: central ideas - Utility Computing, SOA + SLA)
Central Ideas
• Perspective from the user:
▪ Users do not care about how the work is done
• Instead, they only care about what they can get
▪ Users do not care about what the provider actually did
• Instead, they only care about their quality of service
▪ Users do not want to own the physical infrastructure
• Instead, they only want to pay for as much as they use

• What does the user really care about?

▪ They only care about their “Service”
Scalability & Elasticity
• What is scalability ?
▪ A desirable property of a system, a network, or a process, which
indicates its ability to either handle growing amounts of work in a
graceful manner or to be readily enlarged.

• What is elasticity ?
▪ The ability of a system to dynamically grow or shrink the resources it
uses, in near real time, in response to changes in workload.

• But how to achieve these properties ?


▪ Dynamic provisioning
▪ Multi-tenant design
Dynamic Provisioning
• What is dynamic provisioning ?
▪ Dynamic Provisioning is a simplified way to describe a complex
networked server computing environment where server
computing instances are provisioned or deployed from an
administrative console or client application by the server
administrator, network administrator, or any other enabled
user.
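The provisioning decision behind elasticity can be sketched as a simple threshold rule. This is a minimal illustration, not any provider's actual autoscaling API; all names and thresholds are invented:

```python
# Minimal sketch of threshold-based dynamic provisioning: instances are
# added or released so that average utilization stays inside a target band.
def scale_decision(current_instances, avg_utilization,
                   high=0.80, low=0.30, min_instances=1):
    """Return the instance count for the next control interval."""
    if avg_utilization > high:                  # overloaded: provision one more
        return current_instances + 1
    if avg_utilization < low and current_instances > min_instances:
        return current_instances - 1            # underloaded: release one
    return current_instances                    # inside the band: no change

print(scale_decision(2, 0.95))  # -> 3
print(scale_decision(3, 0.10))  # -> 2
```

Real autoscalers add damping (cooldown periods, step sizes) so that short load spikes do not cause constant scaling churn.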
Multi-tenant Design
• What is multi-tenant design ?
▪ Multi-tenant refers to a principle in software architecture where a
single instance of the software runs on a server, serving multiple
client organizations.
▪ With a multi-tenant architecture, a software application is designed
to virtually partition its data and configuration so that each client
organization works with a customized virtual application instance.

• Client oriented requirements :


▪ Customization
• Multi-tenant applications are typically required to provide a high degree
of customization to support each target organization's needs.
▪ Quality of service
• Multi-tenant applications are expected to provide adequate levels of
security and robustness.
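The virtual-partitioning idea can be sketched in a few lines. This is a toy in-memory store with illustrative names, not a real multi-tenant framework: one shared application instance, with every record and query scoped by a tenant identifier so that client organizations never see each other's data:

```python
# Sketch of multi-tenant data partitioning in a single application instance.
class MultiTenantStore:
    def __init__(self):
        self._rows = []                      # one shared table for all tenants

    def insert(self, tenant_id, record):
        # Every stored row is tagged with its owning tenant
        self._rows.append({"tenant_id": tenant_id, **record})

    def query(self, tenant_id):
        # Virtual partition: filter by tenant on every access
        return [r for r in self._rows if r["tenant_id"] == tenant_id]

store = MultiTenantStore()
store.insert("acme", {"user": "alice"})
store.insert("globex", {"user": "bob"})
print(store.query("acme"))  # only acme's rows are visible to acme
```

Production systems enforce the same scoping at the database layer (per-tenant schemas or mandatory tenant-id predicates) rather than trusting application code alone.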
Availability & Reliability
• What is availability ?
▪ The degree to which a system, subsystem, or equipment is in a
specified operable and committable state at the start of a mission,
when the mission is called for at an unknown time.
▪ Cloud systems usually require high availability
• e.g., a “Five Nines” system statistically provides 99.999% availability
• What is reliability ?
▪ The ability of a system or component to perform its required
functions under stated conditions for a specified period of time.
• But how to achieve these properties ?
▪ Fault-tolerant system design
▪ System resilience
▪ Reliable system security
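The "Five Nines" figure translates directly into the downtime a system is permitted per year; a quick calculation shows why each extra nine is so demanding:

```python
# Convert an availability fraction into allowed downtime per year.
def downtime_minutes_per_year(availability):
    minutes_per_year = 365 * 24 * 60          # 525,600 minutes
    return (1 - availability) * minutes_per_year

print(round(downtime_minutes_per_year(0.99999), 2))  # five nines -> 5.26
print(round(downtime_minutes_per_year(0.999), 1))    # three nines -> 525.6
```

So a five-nines system may be down only about five minutes in an entire year, while three nines already allows almost nine hours.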
Fault Tolerance
• What is a fault-tolerant system ?
▪ Fault-tolerance is the property that enables a system to continue
operating properly in the event of the failure of some of its
components.
▪ If its operating quality decreases at all, the decrease is proportional
to the severity of the failure, as compared to a naively-designed
system in which even a small failure can cause total breakdown.

• Four basic characteristics :


▪ No single point of failure
▪ Fault detection and isolation to the failing component
▪ Fault containment to prevent propagation of the failure
▪ Availability of reversion modes
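Fault detection, isolation, and "no single point of failure" can be sketched together as a failover loop over redundant replicas. The replica names and handler functions below are hypothetical, purely for illustration:

```python
# Sketch of fault detection, isolation, and failover: a request is tried
# against redundant replicas; a failing replica is detected and isolated,
# so no single failure stops the whole service.
def call_with_failover(replicas):
    isolated = []
    for name, handler in replicas:
        try:
            return handler(), isolated        # first healthy replica answers
        except Exception:
            isolated.append(name)             # fault detected: isolate replica
    raise RuntimeError("all replicas failed: " + ", ".join(isolated))

def broken():
    raise IOError("disk failure")             # simulated component fault

def healthy():
    return "ok"

result, isolated = call_with_failover([("r1", broken), ("r2", healthy)])
print(result, isolated)  # -> ok ['r1']
```

Note the fault-containment aspect: the exception is caught at the replica boundary, so the failure cannot propagate into the caller.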
Fault Tolerance
• Single Point Of Failure (SPOF)
▪ A part of a system which, if it fails, will stop the
entire system from working.
▪ Assessing potential single points of failure identifies the
critical components of a complex system whose malfunction
would provoke a total system failure.

• Preventing single point of failure


▪ If a system experiences a failure, it must continue
to operate without interruption during the repair
process.
Fault Tolerance
• Fault Detection and Isolation (FDI)
▪ A subfield of control engineering concerned with monitoring a
system, identifying when a fault has occurred, and pinpointing
the type of fault and its location.

• Isolate failing component


▪ When a failure occurs, the system
must be able to isolate the failure
to the offending component.
Fault Tolerance
• Fault Containment
▪ Some failure mechanisms can cause a system to fail by propagating
the failure to the rest of the system.
▪ Mechanisms that isolate a rogue transmitter or failing component
to protect the system are required.

• Availability of reversion modes

▪ The system should be able to maintain checkpoints which can be
used in managing state changes.
System Resilience
• What is resilience ?
▪ Resilience is the ability to provide and maintain an acceptable level
of service in the face of faults and challenges to normal operation.
▪ Resiliency pertains to the system's ability to return to its original
state after encountering trouble. In other words, if a risk event
knocks a system offline, a highly resilient system returns to
work and functions as planned as soon as possible.

• Some risk events


▪ If power is lost at a plant for two days, can our system recover ?
▪ If a key service is lost because of database corruption, can the
business recover ?
System Resilience
• Disaster Recovery
▪ Disaster recovery is the process, policies and procedures related to
preparing for recovery or continuation of technology infrastructure
critical to an organization after a natural or human-induced disaster.

• Some common strategies :


▪ Backup
• Copy data off-site at regular intervals
• Replicate data to an off-site location
• Replicate the whole system
▪ Preparing
• Local mirror systems
• Surge protector
• Uninterruptible Power Supply (UPS)
System Security
• Security issue in Cloud Computing :
▪ Cloud security is an evolving sub-domain of computer security,
network security, and, more broadly, information security.
▪ It refers to a broad set of policies, technologies, and controls
deployed to protect data, applications, and the associated
infrastructure of cloud computing.
System Security
• Important security and privacy issues :
▪ Data Protection
• To be considered protected, data from one customer must be
properly segregated from that of another.
▪ Identity Management
• Every enterprise will have its own identity management system
to control access to information and computing resources.
▪ Application Security
• Cloud providers should ensure that applications available as a
service via the cloud are secure.
▪ Privacy
• Providers ensure that all critical data are masked and that only
authorized users have access to data in its entirety.
Manageability & Interoperability

• What is manageability ?
▪ Enterprise-wide administration of cloud computing systems.
Systems manageability is strongly influenced by network
management initiatives in telecommunications.
• What is interoperability ?
▪ Interoperability is a property of a product or system, whose
interfaces are completely understood, to work with other products
or systems, present or future, without any restricted access or
implementation.
• But how to achieve these properties ?
▪ System control automation
▪ System state monitoring
Control Automation
• What is Autonomic Computing ?
▪ Its ultimate aim is to develop computer systems capable of self-
management, to overcome the rapidly growing complexity of
computing systems management, and to reduce the barrier that
complexity poses to further growth.

• Architectural framework :
▪ Composed of Autonomic Components (AC) which interact
with each other.
▪ An AC can be modeled in terms of two main control loops (local
and global) with sensors (for self-monitoring), effectors (for self-
adjustment), and knowledge and planner/adapter for exploiting
policies based on self- and environment awareness.
Control Automation

• Four functional areas :


▪ Self-Configuration
• Automatic configuration of components.
▪ Self-Healing
• Automatic discovery and correction of faults.
▪ Self-Optimization
• Automatic monitoring and control of resources to ensure the optimal
functioning with respect to the defined requirements.
▪ Self-Protection
• Proactive identification and protection from arbitrary attacks.
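The local control loop behind all four self-* functions follows the same monitor-analyze-plan-execute pattern; one step of it can be sketched as below. The target value, tolerance, and action names are illustrative, not from any real autonomic framework:

```python
# Sketch of one iteration of an autonomic component's control loop:
# monitor (sensor reading) -> analyze (compare to policy) ->
# plan/execute (choose a corrective effector action).
def autonomic_step(sensor_reading, target=0.5, tolerance=0.1):
    deviation = sensor_reading - target       # analyze against the policy
    if abs(deviation) <= tolerance:
        return "no-op"                        # within policy: nothing to do
    # plan + execute: pick the effector action that corrects the deviation
    return "scale-up" if deviation > 0 else "scale-down"

print(autonomic_step(0.9))   # -> scale-up
print(autonomic_step(0.55))  # -> no-op
print(autonomic_step(0.1))   # -> scale-down
```

Self-optimization, self-healing, etc. differ mainly in which sensors are monitored and which effector actions the plan step may choose.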
System Monitoring

• What is system monitor ?


▪ A System Monitor in systems engineering is a process within a
distributed system for collecting and storing state data.

• What should be monitored in the Cloud ?


▪ Physical and virtual hardware state
▪ Resource performance metrics
▪ Network access patterns
▪ System logs
▪ … etc

• Anything more ?
▪ Billing system
Billing System

• Billing System in Cloud
▪ Users pay for as much as they use.
▪ The cloud provider must first determine the price list for service usage.
▪ The cloud provider has to record the resource or service usage of each
user, and then charge users according to these records.
• How can the cloud provider know users’ usage ?
▪ Obtain this information by means of the monitoring system.
▪ Automatically calculate the total amount of money the user
should pay, and automatically request the money from the
user’s banking account.
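The metering-and-charging flow can be sketched as follows. The price list, resource names, and record format are invented for illustration; real providers meter far more dimensions:

```python
# Sketch of pay-per-use billing driven by monitoring records: the provider
# meters each user's usage and multiplies by a published price list.
PRICE_LIST = {"cpu_hours": 0.05, "gb_storage": 0.02, "gb_transfer": 0.09}

def bill(usage_records, user):
    total = 0.0
    for rec in usage_records:                 # records come from monitoring
        if rec["user"] == user:
            total += rec["amount"] * PRICE_LIST[rec["resource"]]
    return round(total, 2)

records = [
    {"user": "alice", "resource": "cpu_hours",  "amount": 100},
    {"user": "alice", "resource": "gb_storage", "amount": 50},
    {"user": "bob",   "resource": "cpu_hours",  "amount": 10},
]
print(bill(records, "alice"))  # -> 6.0   (100*0.05 + 50*0.02)
```

This is exactly the "measured service" characteristic from the NIST definition: billing is a direct function of the monitoring data.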
Performance & Optimization
• Performance guarantees ??
▪ Given the great computing power of the cloud, application performance
should be guaranteed.
▪ Cloud providers make use of powerful infrastructure and other
underlying resources to build up a high-performance, highly
optimized environment, and then deliver the complete services to
cloud users.

• But how to achieve this property ?


▪ Parallel computing
▪ Load balancing
▪ Job scheduling
Parallel Processing
• Parallel Processing
▪ Parallel processing is a form of computation in which many
calculations are carried out simultaneously, operating on the
principle that large problems can often be divided into smaller
ones, which are then solved concurrently.

• Parallelism in different levels :


▪ Bit level parallelism
▪ Instruction level parallelism
▪ Data level parallelism
▪ Task level parallelism
Parallel Processing
• Hardware approaches
▪ Multi-core computer
▪ Symmetric multi-processor
▪ General purpose graphic processing unit
▪ Vector processor
▪ Distributed computing
• Cluster computing
• Grid computing
• Software approaches
▪ Parallel programming language
▪ Automatic parallelization
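The principle that "large problems can often be divided into smaller ones, which are then solved concurrently" can be sketched with Python's standard executor pool. A thread pool is used here for portability of the example; for CPU-bound work a `ProcessPoolExecutor` would normally be used to sidestep the GIL:

```python
# Sketch of task-level parallelism: divide a large problem (summing
# squares) into chunks, solve the chunks concurrently, combine results.
from concurrent.futures import ThreadPoolExecutor

def sum_of_squares(chunk):
    return sum(x * x for x in chunk)

def parallel_sum_of_squares(n, workers=4):
    # Divide: partition 0..n-1 into `workers` interleaved chunks
    chunks = [range(i, n, workers) for i in range(workers)]
    # Conquer concurrently, then combine the partial results
    with ThreadPoolExecutor(max_workers=workers) as pool:
        return sum(pool.map(sum_of_squares, chunks))

print(parallel_sum_of_squares(1000) == sum(x * x for x in range(1000)))  # -> True
```

The same divide/solve/combine shape underlies cluster and grid computing as well; only the unit of execution (thread, process, node) changes.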
Load Balancing
• What is load balancing ?
▪ Load balancing is a technique to distribute workload evenly across
two or more computers, network links, CPUs, hard drives, or other
resources, in order to get optimal resource utilization, maximize
throughput, minimize response time, and avoid overload.

• Why load balance ?


▪ Improve resource utilization
▪ Improve system performance
▪ Improve energy efficiency
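One common load-balancing policy, least-connections dispatch, can be sketched in a few lines (server names and loads are illustrative):

```python
# Sketch of least-connections load balancing: each incoming request goes
# to the server currently handling the fewest active connections.
def pick_server(active_connections):
    """active_connections maps server name -> current connection count."""
    return min(active_connections, key=active_connections.get)

load = {"web1": 12, "web2": 4, "web3": 9}
server = pick_server(load)
print(server)          # -> web2
load[server] += 1      # the dispatcher records the new connection
```

Simpler policies (round-robin) ignore current load; weighted variants bias the choice toward more powerful machines.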
Job Scheduling
• What is job scheduler ?
▪ A job scheduler is a software application that is in charge of
unattended background executions, commonly known for historical
reasons as batch processing.

• What should be scheduled in Cloud ?


▪ Computation intensive tasks
▪ Dynamic growing and shrinking tasks
▪ Tasks with complex processing dependency

• How to approach ?
▪ Use pre-defined workflow
▪ System automatic configuration
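Scheduling "tasks with complex processing dependency" via a pre-defined workflow amounts to running a dependency graph (DAG) in topological order. A minimal sketch, with an invented three-task workflow and no cycle detection:

```python
# Sketch of a workflow scheduler: tasks declare their dependencies, and
# the scheduler emits an execution order that runs dependencies first.
def schedule(workflow):
    """workflow maps task -> list of tasks it depends on."""
    order, done = [], set()

    def visit(task):
        if task in done:
            return
        for dep in workflow.get(task, []):    # recurse into dependencies
            visit(dep)
        done.add(task)
        order.append(task)                    # task runs after its deps

    for task in workflow:
        visit(task)
    return order

workflow = {"report": ["transform"], "transform": ["extract"], "extract": []}
print(schedule(workflow))  # -> ['extract', 'transform', 'report']
```

A production scheduler adds cycle detection, retries on failure, and runs independent tasks concurrently instead of strictly in sequence.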
Accessibility & Portability
• What is accessibility ?
▪ Accessibility is a general term used to describe the degree to which
a product, device, service, or environment is accessible by as many
people as possible.

• What is service portability ?


▪ Service portability is the ability to access services using any
devices, anywhere, continuously with mobility support and
dynamic adaptation to resource variations.

• But how to achieve these properties ?


▪ Uniform access
▪ Thin client
Uniform Access
• How do users access cloud services ?
▪ Cloud providers should deliver their cloud services through
widespread access media. In other words, users on different
operating systems or other access platforms should be able to
be served directly.
▪ Nowadays, the web browser is one of the most widespread
platforms on almost any intelligent electronic device. Cloud services
take this into account, and deliver their services with a web-based
interface through the Internet.
Thin Client
• What is thin client ?
▪ Thin client is a computer or a computer program which depends
heavily on some other computer to fulfill its traditional computational
roles. This stands in contrast to the traditional fat client, a computer
designed to take on these roles by itself.
• Characteristics :
▪ Cheap client hardware
• While the cloud providers handle several client sessions at once, the clients
can be made out of much cheaper hardware.
▪ Diversity of end devices
• End user can access cloud service via plenty of various electronic devices,
which include mobile phones and smart TV.
▪ Client simplicity
• The client's local system does not need complete operational functionality.
References
• NIST (National Institute of Standards and Technology).
http://csrc.nist.gov/groups/SNS/cloud-computing/
• M. Armbrust et al., “Above the Clouds: A Berkeley View of Cloud
Computing,” Technical Report No. UCB/EECS-2009-28,
University of California at Berkeley, 2009.
• R. Buyya et al., “Cloud computing and emerging IT platforms:
Vision, hype, and reality for delivering computing as the 5th
utility,” Future Generation Computer Systems, 2009.
• Cloud Computing Use Cases.
http://groups.google.com/group/cloud-computing-use-cases
• Cloud Computing Explained.
http://www.andyharjanto.com/2009/11/wanted-cloud-
computing-explained-in.html
• From Wikipedia, the free encyclopedia
• All resources of the materials and pictures were partially
retrieved from the Internet.
Cloud Computing
CS-2
Virtualization Techniques and Types

BITS Pilani Faculty Name: Prof. Pradnya Kashikar


pradnyak@wilp.bits-Pilani.ac.in
Introduction to Virtualisation

• AGENDA
Virtualisation
Introduction to Virtualization
Use & demerits of Virtualization



Virtualization in the Cloud - Transforming a Classic Data Center (CDC)
into a Virtualized Data Center (VDC)

Transforming a Classic Data Center (CDC) into a Virtualized Data
Center (VDC) requires virtualizing the core elements of the data center:
• Virtualize Compute
• Virtualize Storage
• Virtualize Network

Using a phased approach to a virtualized infrastructure enables a
smoother transition to virtualize the core elements.
Compute Virtualization

It is a technique of masking or abstracting the physical compute hardware and
enabling multiple operating systems (OSs) to run concurrently on a single or
clustered physical machine(s).

• Enables creation of multiple virtual machines (VMs), each running an OS and application
• A VM is a logical entity that looks and behaves like a physical machine
• The virtualization layer resides between the hardware (x86 architecture: CPU, NIC card, memory, hard disk) and the VMs
• Also known as the hypervisor
• VMs are provided with standardized hardware resources
Need for Compute Virtualization

Before Virtualization:
• Runs a single operating system (OS) per machine at a time
• Couples software and hardware tightly
• May create conflicts when multiple applications run on the same machine
• Underutilizes resources
• Is inflexible and expensive

After Virtualization:
• Runs multiple operating systems (OSs) per machine concurrently
• Makes OS and applications hardware independent
• Isolates VMs from each other, hence no conflicts
• Improves resource utilization
• Offers flexible infrastructure at low cost
Hypervisor

It is software that allows multiple operating systems (OSs) to run
concurrently on a physical machine and to interact directly with the
physical hardware.

• Has two components:
• Kernel
• Virtual Machine Monitor (VMM)

(Figure: several VMMs run on the hypervisor (kernel and VMM) above the
x86 hardware - CPU, NIC card, memory, hard disk.)
Types of Hypervisor

Type 1: Bare-Metal Hypervisor
• It is an operating system (OS)
• It installs and runs on x86 bare-metal hardware
• It requires certified hardware

Type 2: Hosted Hypervisor
• It installs and runs as an application
• It relies on the operating system (OS) running on the physical machine
for device support and physical resource management
Benefits of Compute Virtualization

• Server consolidation
• Isolation
• Encapsulation
• Hardware independence
• Reduced cost

Requirements: x86 Hardware Virtualization

• An operating system (OS) is designed to run on bare-metal hardware
and to fully own the hardware
• The x86 architecture offers four levels of privilege: Ring 0, 1, 2, and 3
• User applications run in Ring 3
• The OS runs in Ring 0 (most privileged)
• Challenges of virtualizing x86 hardware
• It requires placing the virtualization layer below the OS layer
• It is difficult to capture and translate privileged OS instructions at runtime
• Techniques to virtualize compute
• Full, para, and hardware-assisted virtualization
Full Virtualization

• The Virtual Machine Monitor (VMM) runs in the privileged Ring 0;
the guest OS runs in Ring 1 and user apps in Ring 3
• The VMM decouples the guest operating system (OS) from the
underlying physical hardware
• Each VM is assigned a VMM
• Provides virtual components to each VM
• Performs Binary Translation (BT) of non-virtualizable OS instructions
• The guest OS is not aware of being virtualized
Paravirtualization

• The guest operating system (OS) knows that it is virtualized
• The paravirtualized guest OS runs in Ring 0, directly above the hypervisor
• A modified guest OS kernel is used, such as Linux and OpenBSD
• An unmodified guest OS is not supported, such as Microsoft Windows
Hardware Assisted Virtualization

• Achieved by using a hypervisor-aware CPU to handle privileged instructions
• Reduces the virtualization overhead caused by full and paravirtualization
• CPU and memory virtualization support is provided in hardware; the
guest OS runs in Ring 0 while the VMM runs below it
• Enabled by AMD-V and Intel VT technologies in the x86 processor architecture
Virtual Machine

• From a user’s perspective, a logical compute system
• Runs an operating system (OS) and applications like a physical machine
• Contains virtual components such as CPU, RAM, disk, and NIC
• From a hypervisor’s perspective
• A virtual machine (VM) is a discrete set of files such as configuration file,
virtual disk files, virtual BIOS file, VM swap file, and log file
Virtual Machine Files

Virtual BIOS File
• Stores the state of the virtual machine’s (VM’s) BIOS

Virtual Swap File
• Is a VM’s paging file, which backs up the VM RAM contents
• The file exists only when the VM is running

Virtual Disk File
• Stores the contents of the VM’s disk drive
• Appears like a physical disk drive to the VM
• A VM can have multiple disk drives

Log File
• Keeps a log of VM activity
• Is useful for troubleshooting

Virtual Configuration File
• Stores the configuration information chosen during VM creation
• Includes information such as number of CPUs, memory, number and
type of network adaptors, and disk types
File System to Manage VM Files

• The file systems supported by hypervisor are Virtual Machine File System (VMFS)
and Network File System (NFS)
• VMFS
• Is a cluster file system that allows multiple physical machines to perform
read/write on the same storage device concurrently
• Is deployed on FC and iSCSI storage apart from local storage
• NFS
• Enables storing VM files on a remote file server (NAS device)
• NFS client is built into hypervisor

Virtual Machine Hardware

A virtual machine is presented with standard virtual hardware: a VM
chipset with one or more CPUs, RAM, graphics card, IDE controllers,
floppy controller and floppy drives, parallel and serial/COM ports, USB
controller and USB devices, keyboard, mouse, SCSI controllers, and
network adapters (NIC and HBA).
VM Hardware Components

vCPU
• A virtual machine (VM) can be configured with one or more virtual CPUs
• The number of CPUs allocated to a VM can be changed

vRAM
• The amount of memory presented to the guest operating system (OS)
• Memory size can be changed based on requirement

Virtual Disk
• Stores the VM's OS and application data
• A VM should have at least one virtual disk

vNIC
• Enables a VM to connect to other physical and virtual machines

Virtual DVD/CD-ROM Drive
• Maps a VM’s DVD/CD-ROM drive to either a physical drive or an .iso file

Virtual Floppy Drive
• Maps a VM’s floppy drive to either a physical drive or an .flp file

Virtual SCSI Controller
• The VM uses a virtual SCSI controller to access its virtual disks

Virtual USB Controller
• Maps the VM’s USB controller to the physical USB controller
Virtual Machine Console

• Provides mouse, keyboard, and screen functionality


• Sends power changes (on/off) to the virtual machine (VM)
• Allows access to BIOS of the VM
• Typically used for virtual hardware configuration and troubleshooting issues

Resource Management

A process of allocating resources from a physical machine or clustered
physical machines to virtual machines (VMs) to optimize the utilization
of resources.

• Goals of resource management
• Controls utilization of resources
• Prevents VMs from monopolizing resources
• Allocates resources based on relative priority of VMs
• Resources must be pooled to manage them centrally
Resource Pool

A resource pool is a logical abstraction of aggregated physical resources
that are managed centrally.

• Created from a physical machine or cluster
• Administrators may create a child resource pool or virtual machine
(VM) from the parent resource pool
• Reservation, limit, and share are used to control the resources
consumed by resource pools or VMs
Resource Pool Example

Standalone Physical Machine – Machine 1
• Parent Pool: CPU = 3000 MHz, Memory = 6 GB
  • Engineering Pool (Child Pool): CPU = 1000 MHz, Memory = 2 GB
    • Engineering-Test VM: CPU = 500 MHz, Memory = 1 GB
    • Engineering-Production VM: CPU = 500 MHz, Memory = 1 GB
  • Finance Pool (Child Pool): CPU = 1000 MHz, Memory = 2 GB
    • Finance-Test VM: CPU = 500 MHz, Memory = 1 GB
    • Finance-Production VM: CPU = 500 MHz, Memory = 1 GB
  • Marketing-Production VM: CPU = 500 MHz, Memory = 1 GB
Share, Limit, and Reservation

• Parameters that control the resources consumed by a child resource pool or a virtual
machine (VM) are as follows:
• Share
• Amount of CPU or memory resources a VM or a child resource pool can have
with respect to its parent’s total resources
• Limit
• Maximum amount of CPU and memory a VM or a child resource pool can
consume
• Reservation
• Amount of CPU and memory reserved for a VM or a child resource pool
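How the three parameters interact can be sketched with a toy allocator. The proportional-share arithmetic below is a simplification of what real hypervisors do (it does not redistribute capacity freed by a capped VM), and the VM values are illustrative:

```python
# Sketch of share/limit/reservation: each VM first gets its reservation,
# spare capacity is split in proportion to shares, and the result is
# capped by the VM's limit. Units are MHz of CPU.
def allocate(total_mhz, vms):
    spare = total_mhz - sum(vm["reservation"] for vm in vms)
    total_shares = sum(vm["share"] for vm in vms)
    alloc = {}
    for vm in vms:
        granted = vm["reservation"] + spare * vm["share"] / total_shares
        alloc[vm["name"]] = min(granted, vm["limit"])  # never exceed limit
    return alloc

vms = [{"name": "a", "reservation": 500, "limit": 1500, "share": 2},
       {"name": "b", "reservation": 500, "limit": 800,  "share": 1}]
print(allocate(3000, vms))  # a capped at its 1500 limit, b at its 800 limit
```

Note how the limit dominates: VM "a" earns 500 + 2000·2/3 ≈ 1833 MHz by shares but is capped at 1500.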

Optimizing CPU Resources

• Modern CPUs are equipped with multiple cores and hyper-threading


• Multi-core processors have multiple processing units (cores) in a single CPU
• Hyper-threading makes a physical CPU appear as two or more logical CPUs
• Allocating a CPU resource efficiently and fairly is critical
• Hypervisor schedules virtual CPUs on the physical CPUs
• Hypervisors support multi-core, hyper-threading, and CPU load-balancing features to
optimize CPU resources

Multi-core Processors

Multi-core processors package multiple processing units (cores) in a
single socket; the virtual CPUs of VMs are scheduled onto the physical
cores and their hardware threads.

(Figure: VMs with one, two, and four virtual CPUs mapped onto a
dual-socket single-core system, a single-socket dual-core system, and a
single-socket quad-core system.)
Hyper-threading

• Makes a physical CPU appear as two Logical CPUs (LCPUs)
• Enables the operating system (OS) to schedule two or more threads simultaneously
• Two LCPUs share the same physical resources
• While the current thread is stalled, the CPU can execute another thread
• A hypervisor running on a hyper-threading-enabled CPU provides
improved performance and utilization

(Figure: VMs with one and two CPUs scheduled onto the LCPUs of a
dual-core, single-socket system with hyper-threading.)
Optimizing Memory Resource

• Hypervisor manages a machine’s physical memory


• Part of this memory is used by the hypervisor
• Rest is available for virtual machines (VMs)
• VMs can be configured with more memory than physically available, called ‘memory
overcommitment’
• Memory optimization is done to allow overcommitment
• Memory management techniques are Transparent page sharing, memory ballooning, and
memory swapping

Memory Ballooning
• No memory shortage: the balloon remains uninflated.
• Memory shortage (balloon inflates):
1. Balloon inflates
2. Driver demands memory from the guest operating system (OS)
3. Guest OS forces pages out
4. Hypervisor reclaims the memory
• Memory shortage resolved (balloon deflates):
1. Balloon deflates
2. Driver relinquishes the memory
3. Guest OS can use the pages again
4. Hypervisor grants the memory back
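The inflate/deflate cycle can be modeled in a few lines. This is a toy simulation under stated assumptions (a single guest with a known amount of free memory), not a real balloon driver:

```python
# Toy simulation of memory ballooning: on host memory shortage, the
# balloon driver in a guest "inflates", forcing the guest OS to release
# pages that the hypervisor then reclaims; deflating returns them.
class Guest:
    def __init__(self, name, free_mb):
        self.name, self.free_mb, self.balloon_mb = name, free_mb, 0

def inflate_balloon(guest, needed_mb):
    grabbed = min(guest.free_mb, needed_mb)   # driver demands memory
    guest.free_mb -= grabbed                  # guest OS gives up pages
    guest.balloon_mb += grabbed               # hypervisor reclaims them
    return grabbed

def deflate_balloon(guest):
    guest.free_mb += guest.balloon_mb         # guest can use pages again
    guest.balloon_mb = 0

g = Guest("vm1", free_mb=512)
reclaimed = inflate_balloon(g, 256)   # host shortage: reclaim 256 MB
print(reclaimed, g.free_mb)           # 256 256
deflate_balloon(g)                    # shortage resolved
print(g.free_mb)                      # 512
```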

30
Memory Swapping

• Each powered-on virtual machine (VM) needs its own swap file
• Created when the VM is powered-on
• Deleted when the VM is powered-off
• Swap file size is equal to the difference between the memory limit and the VM memory
reservation
• Hypervisor swaps out the VM’s memory content if memory is scarce
• Swapping is the last option because it causes notable performance impact
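The sizing rule above (swap file size equals memory limit minus memory reservation) is trivial to express in code:

```python
# Swap file size = VM memory limit - VM memory reservation (as stated above).
def swap_file_size_mb(memory_limit_mb, memory_reservation_mb):
    return memory_limit_mb - memory_reservation_mb

# A VM limited to 4096 MB with a 1024 MB reservation needs a 3072 MB swap file.
print(swap_file_size_mb(4096, 1024))   # 3072
```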

31
Physical to Virtual Machine (P2V) Conversion
P2V Conversion

It is a process through which physical machines are converted into virtual machines (VMs).

• Clones data from the physical machine's disk to the VM disk
• Performs system reconfiguration of the destination VM, such as:
• Change IP address and computer name
• Install required device drivers to enable the VM to boot

Figure: a physical machine converted into a virtual machine (VM).

32
Benefits of P2V Converter

• Reduces time needed to setup new virtual machine (VM)

• Enables migration of legacy machine to a new hardware without reinstalling


operating system (OS) or application

• Performs migration across heterogeneous hardware

33
Components of P2V Converter
• There are three key components:
• Converter server
• Is responsible for controlling conversion process
• Is used for hot conversion only (when source is running its OS)
• Pushes and installs agent on the source machine
• Converter agent
• Is responsible for performing the conversion
• Is used in hot mode only
• Is installed on physical machine to convert it to virtual machine (VM)
• Converter Boot CD
• Bootable CD contains its operating system (OS) and converter application
• Converter application is used to perform cold conversion

34
Conversion Options

• Hot conversion
• Occurs while physical machine is running
• Performs synchronization
• Copies blocks that were changed during the initial cloning period
• Performs power off at source and power on at target virtual machine (VM)
• Changes IP address and machine name of the selected machine, if both
machines must co-exist on the same network
• Cold conversion
• Occurs while physical machine is not running OS and application
• Boots the physical machine using converter boot CD
• Creates consistent copy of the physical machine
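The synchronization step of hot conversion (re-copying blocks changed during the initial cloning period) can be sketched as follows. This is an illustrative model using a dict of disk blocks, not a real converter:

```python
# Illustrative sketch of hot-conversion synchronization: after the
# initial clone, only the blocks that changed while cloning ran are
# re-copied to the target disk.
def clone_and_sync(source, changed_during_clone):
    target = dict(source)                      # initial (snapshot) clone
    # ... the source keeps running; some blocks change meanwhile ...
    for block, data in changed_during_clone.items():
        source[block] = data
    for block in changed_during_clone:         # synchronization pass
        target[block] = source[block]          # copy only changed blocks
    return target

src = {0: b"boot", 1: b"data", 2: b"logs"}
tgt = clone_and_sync(src, {2: b"new-logs"})
print(tgt[2])   # b'new-logs'
```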

35
Hot Conversion Process
Step 1: Converter server (running the converter software) installs the agent on the powered-on source physical machine
Step 2: Agent takes a snapshot of the source volume
Step 3: Creates a VM on the destination machine (running the hypervisor)
Step 4: Clones the source disk to the VM disk
36
Hot Conversion Process (contd.)
Step 5: Synchronizes and reconfigures the VM
Step 6: VM is ready to run

37
Cold Conversion Process

Step 1: Boot the physical machine with the converter boot CD
Step 2: Creates a VM on the destination machine (running the hypervisor)

38
Cold Conversion Process (contd.)

Step 3: Clones the source disk to the VM disk
Step 4: Installs the required drivers to allow the OS to boot on the VM
Step 5: VM is ready to run

39
Storage Virtualization
Storage virtualization

It is the process of masking the underlying complexity of physical


storage resources and presenting the logical view of these
resources to compute systems.

• Logical to physical storage mapping is performed by virtualization layer


• Virtualization layer abstracts the identity of physical storage devices
• Creates a storage pool from multiple, heterogeneous storage arrays
• Virtual volumes are created from the storage pools and are assigned to the
compute system
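The pooling idea above can be sketched as a small class. This is a simplified illustration (real storage virtualization works at the block level and tracks extents per array); the array and volume names are hypothetical:

```python
# Simplified sketch of storage virtualization: capacity from multiple,
# heterogeneous storage arrays is aggregated into one pool, and virtual
# volumes are carved out of the pool and assigned to compute systems.
class StoragePool:
    def __init__(self, arrays):
        # arrays: dict of array name -> capacity in GB
        self.capacity_gb = sum(arrays.values())
        self.volumes = {}

    def free_gb(self):
        return self.capacity_gb - sum(self.volumes.values())

    def create_volume(self, name, size_gb):
        if size_gb > self.free_gb():
            raise ValueError("not enough free capacity in pool")
        self.volumes[name] = size_gb

pool = StoragePool({"vendorA-array": 500, "vendorB-array": 300})
pool.create_volume("vm-datastore-1", 600)   # can span both arrays
print(pool.free_gb())   # 200
```

Note that the 600 GB volume exceeds either array alone; abstracting the physical identities is what makes this possible.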

40
Benefits of Storage Virtualization

• Adds or removes storage without any downtime


• Increases storage utilization thereby reducing TCO
• Provides non-disruptive data migration between storage devices
• Supports heterogeneous, multi-vendor storage platforms
• Simplifies storage management

41
Storage Virtualization at Different Layers

Layer: Examples
• Compute: Storage provisioning for VMs
• Network: Block-level virtualization; File-level virtualization
• Storage: Virtual provisioning; Automated storage tiering

42
Storage for Virtual Machines
• VMs are stored as a set of files on storage space available to the hypervisor
• A 'virtual disk file' represents a virtual disk used by a VM to store its data
• The size of the virtual disk file represents the storage space allocated to the virtual disk
• VMs remain unaware of:
• The total space available to the hypervisor
• The underlying storage technologies

Figure: two compute systems hosting VMs whose virtual disk files reside on VMFS (reached over FC SAN to FC and iSCSI storage) and on NFS (reached over an IP network to NAS).

43
File System for Managing VM Files

• Hypervisor uses two file systems to manage the VM files


• Hypervisor’s native file system called Virtual Machine File System (VMFS)
• Network File System (NFS) such as NAS file system

44
Network Virtualization
Network Virtualization
It is a process of logically segmenting or grouping physical network(s) and
making them operate as single or multiple independent network(s) called
“Virtual Network(s)”.

• Enables virtual networks to share network resources


• Allows communication between nodes in a virtual network without routing of frames
• Enforces routing for communication between virtual networks
• Restricts management traffic, including ‘Network Broadcast’, from propagating to other
virtual network
• Enables functional grouping of nodes in a virtual network
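The isolation rules above (direct communication within a virtual network, routing between virtual networks, broadcasts confined to one virtual network) can be modeled in a few lines. A toy model with hypothetical host and network names:

```python
# Toy model of network virtualization: nodes in the same virtual network
# communicate directly; traffic between virtual networks must be routed;
# broadcasts stay within a single virtual network.
vnets = {"vnet-hr": {"hostA", "hostB"}, "vnet-dev": {"hostC"}}

def same_vnet(a, b):
    # True if some virtual network contains both nodes (direct switching)
    return any(a in nodes and b in nodes for nodes in vnets.values())

def broadcast_domain(host):
    # A broadcast from `host` reaches only its own virtual network
    for nodes in vnets.values():
        if host in nodes:
            return nodes
    return set()

print(same_vnet("hostA", "hostB"))   # True: switched directly
print(same_vnet("hostA", "hostC"))   # False: requires routing
print(broadcast_domain("hostA"))     # confined to vnet-hr
```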

45
Network Virtualization in VDC
• Involves virtualizing physical and VM networks
• Consists of the following physical components:
 Network adapters (PNICs), switches, routers, bridges, repeaters, and hubs
• Provides connectivity:
 Among physical servers running hypervisors
 Between physical servers and clients
 Between physical servers and storage systems

Figure: two physical servers (each running a hypervisor with physical NICs) connected through the physical network to a client and a storage array. (PNIC: physical NIC)

46
Benefits of Network Virtualization
Benefit: Description
• Enhances security: Restricts access to nodes in a virtual network from another virtual network; isolates the sensitive data of one virtual network from another
• Enhances performance: Restricts network broadcast and improves virtual network performance
• Improves manageability: Allows configuring virtual networks from a centralized management workstation using management software; eases grouping and regrouping of nodes
• Improves utilization and reduces CAPEX: Enables multiple virtual networks to share the same physical network, which improves utilization of network resources; reduces the requirement to set up separate physical networks for different node groups

47
Components of VDC Network Infrastructure
• VDC network infrastructure includes both virtual and physical network
components
 Components are connected to each other to enable network traffic flow

Component Description
• Connects VMs to the VM network
Virtual NIC
• Sends/receives VM traffic to/from VM network
Virtual HBA • Enables a VM to access FC RDM disk/LUN assigned to the VM
• Is an Ethernet switch that forms VM network
• Provides connection to virtual NICs and forwards VM traffic
Virtual switch
• Provides connection to hypervisor kernel and directs hypervisor traffic:
management, storage, VM migration
Physical adapter: NIC, • Connects physical servers to physical network
HBA, CNA • Forwards VM and hypervisor traffic to/from physical network
Physical switch, router • Forms the physical network that supports Ethernet/FC/iSCSI/FCoE
• Provides connections among physical servers, between physical servers and storage systems, and between physical servers and clients

48
Virtual Network Component: Virtual NIC

• Connects VMs to virtual switch


• Forwards Ethernet frames to virtual switch
• Has unique MAC and IP addresses
• Supports Ethernet standards similar to physical NIC
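Since each virtual NIC needs its own unique MAC address, hypervisors generate one per vNIC. A hedged sketch of the idea: the generated address sets the "locally administered" bit so it cannot collide with vendor-assigned hardware MACs (real hypervisors additionally use a reserved OUI prefix, which is not shown here):

```python
# Illustrative generation of a MAC address for a virtual NIC. Setting
# the locally-administered bit (0x02) and clearing the multicast bit
# (0x01) in the first octet yields a valid unicast, non-vendor MAC.
import random

def random_vnic_mac():
    octets = [random.randint(0, 255) for _ in range(6)]
    octets[0] = (octets[0] | 0x02) & 0xFE   # locally administered, unicast
    return ":".join(f"{o:02x}" for o in octets)

mac = random_vnic_mac()
print(mac)   # e.g. "06:1a:2b:3c:4d:5e"
```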

49
Overview of Desktop and Application Virtualization

Figure: a traditional desktop stack (hardware, operating system, application, and user state, i.e., data and settings) has tight dependencies between the layers; virtualization breaks these dependencies:

• Application virtualization isolates the application from the OS and hardware
• Desktop virtualization isolates the hardware from the OS, applications, and user state
50
Desktop Virtualization
Desktop Virtualization

Technology which enables detachment of the user state, the


Operating System (OS), and the applications from endpoint devices.

• Enables organizations to host and centrally manage desktops


• Desktops run as virtual machines within the VDC
• They may be accessed over LAN/WAN
• Endpoint devices may be thin clients or PCs

51
Benefits of Desktop Virtualization

• Enablement of thin clients


• Improved data security
• Simplified data backup
• Simplified PC maintenance
• Flexibility of access

52
Desktop Virtualization Techniques

• Technique 1: Remote Desktop Services (RDS)


• Technique 2: Virtual Desktop Infrastructure (VDI)
• Desktop virtualization techniques provide ability to centrally host and manage
desktop environments
• Deliver them remotely to the user’s endpoint devices

53
Remote Desktop Services
• RDS is traditionally known as terminal services
• A terminal service runs on top of a Windows
installation
 Provides individual sessions to client systems
 Clients receive visuals of the desktop
 Resource consumption takes place on the server

54
Benefits of Remote Desktop Services

• Rapid application delivery


• Applications are installed on the server and accessed from there
• Improved security
• Applications and data are stored in the server
• Centralized management
• Low-cost technology when compared to VDI

55
Virtual Desktop Infrastructure (VDI)

• VDI involves hosting desktop which runs as VM on the server in the VDC
 Each desktop has its own OS and applications installed
• User has full access to resources of virtualized desktop

56
VDI: Components
• Endpoint devices (PCs, notebooks, thin clients)
• VM hosting/execution servers
• Connection broker
• Shared storage

Figure: endpoint devices connect through the connection broker to the VM execution servers, which are backed by shared storage.

57
How does this work?

Figure: an example hosted application. A Node.js server script (HW1.js) running in your VM (require('http'); http.createServer(...)) talks to Amazon SimpleDB; across the Internet, a browser loads the web page (home.ejs) and a script (app.js) whose function foo() makes DOM accesses such as $("#id").html("x").
58
Use case Scenario for virtualization

Figure: Admin's physical machine serving customers Cust 1, Cust 2, and Cust 3.

• Suppose Admin has a machine with 4 CPUs and 8 GB of memory, and three customers:
• Cust 1 wants a machine with 1 CPU and 3 GB of memory
• Cust 2 wants 2 CPUs and 1 GB of memory
• Cust 3 wants 1 CPU and 4 GB of memory
• What should Admin do?
59
Resource allocation in virtualization

Figure: a virtual machine monitor on Admin's physical machine presents separate virtual machines to Cust 1, Cust 2, and Cust 3.

• Admin can sell each customer a virtual machine (VM) with the requested resources
• From each customer's perspective, it appears as if they had a physical machine all to themselves (isolation)
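The numbers in this scenario work out exactly, which a quick feasibility check confirms:

```python
# The scenario above: Admin's host has 4 CPUs and 8 GB of memory.
# Check that the three customers' requests fit within the host.
host = {"cpus": 4, "mem_gb": 8}
requests = {"cust1": (1, 3), "cust2": (2, 1), "cust3": (1, 4)}  # (CPUs, GB)

total_cpus = sum(c for c, _ in requests.values())
total_mem = sum(m for _, m in requests.values())
fits = total_cpus <= host["cpus"] and total_mem <= host["mem_gb"]
print(fits)   # True: 4 CPUs and 8 GB exactly cover all three VMs
```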

60
How does it work?
Translation table kept by the VMM (virtual address range → physical address range):
• VM 1: 0-99 → 0-99; 300-399 → 100-199
• VM 2: 0-99 → 300-399; 200-299 → 500-599; 600-699 → 400-499

Figure: two VMs (each with apps on its own OS) running on the VMM, which runs on the physical machine.

• Resources (CPU, memory, ...) are virtualized
• The VMM ("hypervisor") has translation tables that map requests for virtual resources to physical resources
• Example: VM 1 accesses memory cell #323; the VMM maps this to physical memory cell 123.
• For which resources does this (not) work?
• How do VMMs differ from OS kernels?
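The memory example can be made concrete by coding the translation table directly (a sketch of address translation; real VMMs use hardware page tables rather than linear range lookups):

```python
# The VMM's translation table from the example above, as data: each
# entry maps a virtual address range of a VM to a physical range.
TABLE = {
    1: [(0, 99, 0), (300, 399, 100)],     # (virt_start, virt_end, phys_start)
    2: [(0, 99, 300), (200, 299, 500), (600, 699, 400)],
}

def translate(vm, virt_addr):
    for virt_start, virt_end, phys_start in TABLE[vm]:
        if virt_start <= virt_addr <= virt_end:
            return phys_start + (virt_addr - virt_start)
    raise MemoryError(f"VM {vm}: no mapping for address {virt_addr}")

print(translate(1, 323))   # 123, as in the example above
```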

61
Benefit: Migration in case of disaster

Figure: the VMs are migrated from one physical machine to another, transparently to Cust 1, Cust 2, and Cust 3.

• What if the machine needs to be shut down?
• e.g., for maintenance, consolidation, ...
• Admin can migrate the VMs to different physical machines without any customers noticing

62
Benefit: Time sharing

Figure: a fourth customer's VM is added to the same physical machine.

• What if Admin gets another customer?
• Multiple VMs can time-share the existing resources
• Result: Admin has more virtual CPUs and virtual memory than physical resources (but not all can be active at the same time)

63
Benefit and challenge: Isolation
Figure: four customers' VMs sharing one physical machine through the VMM.

• Good: Cust 4 can't access Cust 3's data
• Bad: What if the load suddenly increases?
• Example: Cust 4's VM shares CPUs with Cust 3's VM, and Cust 3 suddenly starts a large compute job
• Cust 4's performance may decrease as a result
• The VMM can move Cust 4's software to a different CPU, or migrate it to a different machine

64
Recap: Virtualization in the cloud
• Gives cloud provider a lot of flexibility
• Can produce VMs with different capabilities
• Can migrate VMs if necessary (e.g., for maintenance)
• Can increase load by overcommitting resources
• Provides security and isolation
• Programs in one VM cannot influence programs in another
• Convenient for users
• Complete control over the virtual 'hardware' (can install own operating system, own applications, ...)
• But: Performance may be hard to predict
• Load changes in other VMs on the same physical machine may affect the performance seen
by the customer

65
Introduction to Virtualisation

• AGENDA
Types of Virtualization
x86 Hardware Virtualization
Manage the resources for the SaaS, PaaS and IaaS models
Introduction to NFV – VNF

66 BITS Pilani, Deemed to be University under Section 3 of UGC Act, 1956


Introduction
• What is Virtualization?
• Virtualization is the creation of a virtual resource or device where the framework
divides the resource into one or more execution environments

• Examples of Virtualization
• Virtual drives
• Virtual memory
• Virtual machines
• Virtual servers

• Why is it popular?

• Types of Virtualization

67
Hardware Based Virtualization

68
Hardware Based Virtualization
• In operating system-based virtualization, the virtualization software is first installed on a full host operating system and is subsequently used to generate virtual machines; hardware-level virtualization removes this dependency on a host OS.
• An abstract execution environment, in terms of computer hardware, in which a guest OS can be run is referred to as hardware-level virtualization.
• In this model, an operating system represents the guest, the physical computer hardware represents the host, its emulation represents the virtual machine, and the hypervisor represents the virtual machine manager.
• Hardware-based virtualization is generally more efficient because the virtual machines are allowed to interact with the hardware without any intermediary action from the host operating system.
• A fundamental component of hardware virtualization is the hypervisor, or virtual machine manager (VMM).

69
Hardware Based Virtualization

70
Hardware Based Virtualization

• Type-I hypervisors:

• Hypervisors of type I run directly on top of the hardware.

• As a result, they take the place of an operating system and communicate directly with the ISA interface offered by the underlying hardware, which they replicate to allow guest operating systems to be managed.

• Because it runs natively on hardware, this sort of hypervisor is also known as a native virtual
machine.

71
Hardware Based Virtualization

• Type-II hypervisors:

• To deliver virtualization services, Type II hypervisors require the assistance of an operating


system.

• This means they’re operating system-managed applications that communicate with it via
the ABI and simulate the ISA of virtual hardware for guest operating systems.

• Because it is housed within an operating system, this form of hypervisor is also known as a
hosted virtual machine.

72
Hardware Based Virtualization

• A hypervisor has a simple user interface and needs only a small amount of storage space.

• It exists as a thin layer of software and performs hardware management functions to establish a virtualization management layer.

• Device drivers and support software are optimized for the provisioning of virtual machines, while many standard operating system functions are not implemented.

• Essentially, this type of virtualization system is used to reduce the coordination overhead inherent in allowing multiple VMs to interact with the same hardware platform.

73
Hardware Based Virtualization
• Hardware compatibility is another challenge for hardware-based virtualization.

• The virtualization layer interacts directly with the host hardware, so all of the associated drivers and support software must be compatible with the hypervisor.

• Device drivers available for other operating systems may not be available for hypervisor platforms.

• Moreover, host management and administration features may not contain the range of advanced functions that are common in operating systems.

• Note: Hyper-V communicates with the underlying hardware mostly through vendor-supplied drivers.

74
Features of hardware-based virtualization are:
• Isolation: Hardware-based virtualization provides strong isolation between virtual machines, which means that any problems in one virtual machine will not affect other virtual machines running on the same physical host.

• Security: Hardware-based virtualization provides a high level of security as each virtual machine is isolated from the host operating system and other virtual machines, making it difficult for malicious code to spread from one virtual machine to another.

• Performance: Hardware-based virtualization provides good performance as the hypervisor has direct access to the physical hardware, which means that virtual machines can achieve close to native performance.

• Resource allocation: Hardware-based virtualization allows for flexible allocation of hardware resources such as CPU, memory, and I/O bandwidth to virtual machines.
75
Hardware Based Virtualization
• Snapshot and migration: Hardware-based virtualization allows for the creation of snapshots,
which can be used for backup and recovery purposes.

• It also allows for live migration of virtual machines between physical hosts, which can be
used for load balancing and other purposes.

• Support for multiple operating systems: Hardware-based virtualization supports multiple


operating systems, which allows for the consolidation of workloads onto fewer physical
machines, reducing hardware and maintenance costs.

• Compatibility: Hardware-based virtualization is compatible with most modern operating


systems, making it easy to integrate into existing IT infrastructure.

76
Advantages and disadvantages of HBV
• It reduces the maintenance overhead of paravirtualization as it reduces (ideally, eliminates)
the modification in the guest operating system.

• It is also significantly convenient to attain enhanced performance.

• Disadvantages of hardware-based virtualization


Hardware-based virtualization requires explicit support in the host CPU, which may not be available on all x86/x86_64 processors.

• A “pure” hardware-based virtualization approach, including the entire unmodified guest


operating system, involves many VM traps, and thus a rapid increase in CPU overhead occurs
which limits the scalability and efficiency of server consolidation.

• This performance hit can be mitigated by the use of para-virtualized drivers; the combination
has been called “hybrid virtualization”.
77
Hypervisor

• The Hypervisor is the piece of


software that enables
virtualization

• It allows the host machine to


allocate resources to guest
machines

78
Hypervisor
Type I versus Type II Hypervisor

79
Virtualization Hardware
• CPU
• At least one CPU core per virtual machine
• Having free cores for high stress situations recommended

• RAM
• No set amount for RAM
• Estimate minimum amounts of RAM and upgrade based on performance

• Networking

• Multiple network cards required for increased throughput

• Measure peak traffic amounts

• Network Virtualization

80
Virtualization Hardware
• Storage
• Local storage on servers is limited
• Allow for 20% extra storage space for VM files and server snapshots

• Storage Networks (highly recommended)
• Storage Area Network (SAN) – Large data transfers
• Network Attached Storage (NAS) – File-based data storage

81
Advantages of Server Virtualization
• Reduce number of servers

• Reduce TCO

• Improve availability and business continuity

• Increase efficiency for development and test environments

82
Types of Server Virtualization

• Server Virtualization is the partitioning of a physical server into a number of small


virtual servers, each running its own operating system.

• These operating systems are known as guest operating systems.

• These are running on another operating system known as the host operating
system.

• Each guest running in this manner is unaware of any other guests running on the
same host.

• Different virtualization techniques are employed to achieve this transparency.

83
Types of Server Virtualization
1. Hypervisor

• A Hypervisor or VMM(virtual machine monitor) is a layer that exists between the operating system
and hardware.

• It provides the necessary services and features for the smooth running of multiple operating
systems.

• It identifies traps, responds to privileged CPU instructions, and handles queuing, dispatching, and
returning the hardware requests.

• A host operating system also runs on top of the hypervisor to administer and manage the virtual
machines.

84
Types of Server Virtualization
2. Para Virtualization
• It is based on Hypervisor.

• Much of the emulation and trapping overhead in software-implemented virtualization is handled in this model.

• The guest operating system is modified and recompiled before installation into the virtual
machine.

• Due to the modification in the Guest operating system, performance is enhanced as the
modified guest operating system communicates directly with the hypervisor and emulation
overhead is removed.

• Example: Xen primarily uses Paravirtualization, where a customized Linux environment is used
to support the administrative environment known as domain 0.
85
Types of Server Virtualization

Advantages:
• Easier
• Enhanced performance
• No emulation overhead

Limitations:
• Requires modification to a guest operating system

86
Types of Server Virtualization
3. Full Virtualization
• It is very much similar to Paravirtualization.
• It can emulate the underlying hardware when necessary.
• The hypervisor traps the machine operations used by the operating system to perform I/O
or modify the system status.
• After trapping, these operations are emulated in software and the status codes are returned
very much consistent with what the real hardware would deliver.
• This is why an unmodified operating system is able to run on top of the hypervisor.
• Example: VMWare ESX server uses this method.
• A customized Linux version known as Service Console is used as the administrative operating
system.
• It is not as fast as Paravirtualization.

87
Types of Server Virtualization

Advantages:

• No modification to the Guest operating system


is required.

Limitations:

• Complex

• Slower due to emulation

• Installation of the new device driver is difficult.

88
Types of Server Virtualization
4. Hardware-Assisted Virtualization

• It is similar to Full Virtualization and Paravirtualization in terms of operation except that it


requires hardware support.

• Much of the hypervisor overhead due to trapping and emulating I/O operations and status
instructions executed within a guest OS is dealt with by relying on the hardware extensions of
the x86 architecture.

• Unmodified OS can be run as the hardware support for virtualization would be used to handle
hardware access requests, privileged and protected operations, and to communicate with the
virtual machine.

89
Types of Server Virtualization
• Examples: AMD-V (Pacifica) and Intel VT (Vanderpool) provide hardware support for virtualization.

• Advantages:

• No modification to a guest operating system is required.

• Very little hypervisor overhead

• Limitations:

• Hardware support Required

90
Types of Server Virtualization
5. Kernel level Virtualization
• Instead of using a hypervisor, it runs a separate version of the Linux kernel and sees the associated virtual
machine as a user-space process on the physical host.

• This makes it easy to run multiple virtual machines on a single host.

• A device driver is used for communication between the main Linux kernel and the virtual machine.

• Processor support is required for virtualization (Intel VT or AMD-V).

• A slightly modified QEMU process is used as the display and execution containers for the virtual machines.

• In many ways, kernel-level virtualization is a specialized form of server virtualization.

• Examples: User-Mode Linux (UML) and Kernel Virtual Machine (KVM)
91
Types of Server Virtualization

Advantages:
• No special administrative software is required.
• Very little overhead

Limitations:
• Hardware support required

92
Types of Server Virtualization
6. System Level or OS Virtualization
• Runs multiple but logically distinct environments on a single instance of the operating system kernel.

• Also called shared kernel approach as all virtual machines share a common kernel of host operating system.

• Based on the change root concept “chroot”.

• chroot starts during bootup.

• The kernel uses root filesystems to load drivers and perform other early-stage system initialization tasks.

• It then switches to another root filesystem using chroot command to mount an on-disk file system as its final root
filesystem and continue system initialization and configuration within that file system.
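The re-rooting idea behind chroot can be illustrated without privileges by a pure-Python path resolver. This is a simulation of the concept only (a real chroot is enforced by the kernel via the chroot system call); the root directory name is hypothetical:

```python
# Pure-Python illustration of the chroot idea: every path a confined
# virtual server resolves is re-rooted under its own root directory,
# so it cannot name files outside that subtree.
import posixpath

def rerooted(root, path):
    # Normalize the path, neutralize any leading "..", then graft it
    # onto the virtual server's private root.
    norm = posixpath.normpath("/" + path.lstrip("/"))
    return posixpath.join(root, norm.lstrip("/"))

print(rerooted("/srv/vserver1", "/etc/passwd"))
# /srv/vserver1/etc/passwd
print(rerooted("/srv/vserver1", "../../etc/passwd"))
# /srv/vserver1/etc/passwd  (escape attempt is neutralized)
```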

93
Types of Server Virtualization
• The chroot mechanism of system-level virtualization is an extension of this concept.

• It enables the system to start virtual servers with their own set of processes that execute
relative to their own filesystem root directories.

• The main difference between system-level and server virtualization is whether different
operating systems can be run on different virtual systems.

• If all virtual servers must share the same copy of the operating system it is system-level
virtualization and if different servers can have different operating systems ( including different
versions of a single operating system) it is server virtualization.

• Examples: FreeVPS, Linux Vserver, and OpenVZ.

94
Types of Server Virtualization
Advantages:
• Significantly more lightweight than complete machines (including a kernel)
• Can host many more virtual servers
• Enhanced Security and isolation
• Virtualizing an operating system usually has little to no overhead.
• Live migration is possible with OS Virtualization.
• It can also leverage dynamic container load balancing between nodes and clusters.
• On OS virtualization, the file-level copy-on-write (CoW) method is possible, making it easier to
back up data, more space-efficient, and easier to cache than block-level copy-on-write
schemes.

Limitations:
• Kernel or driver problems can take down all virtual servers.
95
Brief History of the x86 Architecture
• The x86 architecture has roots that reach back to 8‐bit processors built by Intel in the late 1970s.
• As manufacturing capabilities improved and software demands increased, Intel extended the 8‐bit architecture
to 16 bits with the 8086 processor.
• Later still, with the arrival of the 80386 CPU in 1985, Intel extended the architecture to 32 bits.
• Intel calls this architecture IA‐32, but the vendor‐neutral term x86 is also common.
• Over the following two decades, the basic 32‐bit architecture remained the same, although successive
generations of CPUs added many new features, including an on‐chip floating point unit, support for large
physical memories through physical address extension (PAE), and vector instructions.
• In 2003, AMD introduced a 64‐bit extension to the x86 architecture, initially dubbed AMD64, and began
shipping 64‐bit Opteron CPUs in 2004.
• Later in 2004, Intel announced its own 64‐bit architectural extension of IA‐32, calling it IA‐32e and later also
EM64T.
• The AMD and Intel 64‐bit extensions are extremely similar, although they differ in some minor ways, one of
which is crucial for virtualization

96
x86 Hardware Virtualization
• Microsoft Virtual Server (2005)
• Came with Microsoft Server 2003
• Did not scale well with 64 bit systems
• Replaced by Hyper-V

• Microsoft Hyper-V (2008 & 2012)


• Hyper-V is short for Hypervisor
• Free release with Server 2008 and 2012
• Best option for Microsoft based virtualization
• Microsoft Hyper-V Server 2016
• Microsoft Hyper-V on Windows Server 2019
Hyper-V Architecture

97
Introduction hyper-converged infrastructure (HCI)

• With VMM 2022, we can manage Azure Stack HCI, version 21H2 clusters.

• Azure Stack HCI, version 21H2 is the newly introduced hyper-converged infrastructure (HCI) Operating system
that runs on on-premises clusters with virtualized workloads.

• Most of the operations to manage Azure Stack clusters in VMM are similar to managing Windows Server
clusters.

• Azure Stack HCI is Microsoft’s premier hypervisor offering for running virtual machines on-premises. For
testing and evaluation purposes Azure Stack HCI includes a 60-day free trial and can be downloaded here:
https://azure.microsoft.com/en-us/products/azure-stack/hci/hci-download/

• Microsoft Hyper-V Server 2019 will continue to be supported under its lifecycle policy until January 2029, see
this link for additional information: https://docs.microsoft.com/en-us/lifecycle/products/hyperv-server-2019

98
Virtualization Software
• VMware (Company)
• Releases most popular line of virtualization software
• First company to utilize virtualization on x86 machines
• Software runs on Linux, Windows, and MAC

• vSphere (aka ESX)


• Costly
• High overhead

• VMware Server
• Free
• Not as powerful as ESX or ESXi

99
Introduction to virtualization and resource management in IaaS
The Rise of Resource Overcommitment

• In general terms, it is the allocation of more virtual resources to a machine or a group of


machines than are physically present.

• “Resource overcommitment” is best used to make use of the resources present in a


virtualized cloud infrastructure.

• As most applications will never use all the resources allocated to them at all times.

• The resources provided by a provider will remain idle without “overcommitment.”
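The degree of overcommitment is usually expressed as a ratio of promised virtual resources to physical capacity. An illustrative calculation with hypothetical numbers:

```python
# Illustrative overcommitment check: virtual memory promised to VMs
# versus physical memory actually present on the host.
host_mem_gb = 64
vm_mem_gb = [16, 16, 16, 16, 16]        # five VMs, 80 GB promised in total

overcommit_ratio = sum(vm_mem_gb) / host_mem_gb
print(overcommit_ratio)   # 1.25: 25% more memory promised than exists

# Overcommitment is safe only while the VMs' *actual* use stays
# below physical capacity:
actual_use_gb = [8, 10, 6, 12, 9]
print(sum(actual_use_gb) <= host_mem_gb)   # True: only 45 GB in use
```

When actual use approaches physical capacity, the host becomes a hotspot, which motivates the ballooning and migration techniques discussed in this section.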

100
Resource Management in IaaS
• Resource management is an indispensable way to make use of the underlying hardware of
the cloud effectively.

• A resource manager oversees physical resources allocation to the virtual machines deployed
on a cluster of nodes in the cloud.

• The resource management systems have differing purposes depending upon the
requirements.

• Using fewer physical machines reduces operational costs, which can be accomplished through the overcommitment of resources.

• However, resource overcommitment comes with new challenges, such as hotspot removal and the dilemma of where to schedule new incoming VMs so as to reduce the chance of hotspots.
101
Resource Management in IaaS

102
Mitigating the Challenge of Hotspot

• Live migration and memory ballooning of VMs help minimize hotspots.

• If a VM is low on memory, ballooning can take some memory away from another guest on the
same host that has memory to spare and give it to the needy guest.

• If none of the guests has free memory to spare, the host itself is usually overloaded.

• In that case, a guest has to be migrated from the current host to a different host, taking the
load of the whole cluster into account.
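The balloon-or-migrate decision described above can be sketched as follows (a toy policy; the names and data shapes are assumptions for illustration, not a real hypervisor API):

```python
def resolve_memory_hotspot(needy_vm, host_vms, hosts):
    """Illustrative policy: balloon if a co-located guest has spare memory,
    otherwise live-migrate the needy VM to the least-loaded host.

    host_vms: list of (vm_name, free_mem_mb) for guests on the same host.
    hosts: dict of host name -> current load (0.0 to 1.0) across the cluster.
    Returns an (action, target) tuple.
    """
    donors = [(vm, free) for vm, free in host_vms if free > 0]
    if donors:
        # Balloon memory away from the guest with the most to spare.
        donor = max(donors, key=lambda d: d[1])
        return ("balloon", donor[0])
    # No guest can spare memory: the host is overloaded, so migrate,
    # keeping account of the load of the whole cluster.
    target = min(hosts, key=hosts.get)
    return ("migrate", target)
```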

103
How does PaaS compare to internally hosted development environments?

• PaaS can be accessed over any internet connection, making it possible to build an entire
application in a web browser.

• Because the development environment is not hosted locally, developers can work on the
application from anywhere in the world.

• This enables teams that are spread out across geographic locations to collaborate.

• It also means developers have less control over the development environment, though this
comes with far less overhead.
104
What is included in PaaS?
The main offerings included by PaaS vendors are:

• Development tools
• Middleware
• Operating systems
• Database management
• Infrastructure

Management model for PaaS

• Different vendors may include other services as well, but these are the core PaaS services.
105
Development tools and middleware in PaaS
• PaaS vendors offer a variety of tools that are necessary for software development, including a
source code editor, a debugger, a compiler, and other essential tools.
• These tools may be offered together as a framework.
• The specific tools offered will depend on the vendor, but PaaS offerings should include
everything a developer needs to build their application.

Middleware
• Platforms offered as a service usually include middleware, so that developers don't have to
build it themselves.
• Middleware is software that sits in between user-facing applications and the machine's
operating system; for example, middleware is what allows software to access input from the
keyboard and mouse.
• Middleware is necessary for running an application, but end users don't interact with it.

106
What is included in PaaS? (continued)
Operating systems
A PaaS vendor will provide and maintain the operating system that developers work on and the
application runs on.

Databases
• PaaS providers administer and maintain databases.

• They will usually provide developers with a database management system as well.

Infrastructure
• PaaS is the next layer up from IaaS in the cloud computing service model, and everything
included in IaaS is also included in PaaS.

• A PaaS provider either manages servers, storage, and physical data centers, or purchases
them from an IaaS provider.
107
Why do developers use PaaS?
Faster time to market
• PaaS is used to build applications more quickly than would be possible if developers had to
worry about building, configuring, and provisioning their own platforms and backend
infrastructure.

• With PaaS, all they need to do is write the code and test the application, and the vendor
handles the rest.

One environment from start to finish

• PaaS permits developers to build, test, debug, deploy, host, and update their applications all
in the same environment.

• This enables developers to be sure a web application will function properly as hosted before
they release, and it simplifies the application development lifecycle.
108
Price
• PaaS is more cost-effective than leveraging IaaS in many cases.

• Overhead is reduced because PaaS customers don't need to manage and provision virtual
machines.

• In addition, some providers have a pay-as-you-go pricing structure, in which the vendor only
charges for the computing resources used by the application, usually saving customers
money.

• However, each vendor has a slightly different pricing structure, and some platform providers
charge a flat fee per month.
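To make the comparison above concrete, a small sketch with illustrative (not real vendor) rates, comparing a pay-as-you-go plan against a flat monthly fee:

```python
def cheaper_plan(hours_used, rate_per_hour, flat_fee_per_month):
    """Compare metered (pay-as-you-go) cost with a flat monthly fee.

    Returns ("metered", cost) or ("flat", cost), whichever is cheaper.
    All rates here are hypothetical.
    """
    metered = hours_used * rate_per_hour
    if metered <= flat_fee_per_month:
        return ("metered", metered)
    return ("flat", flat_fee_per_month)

# A lightly used app: 80 hours at $0.10/hour beats a $20 flat fee.
print(cheaper_plan(80, 0.10, 20.0))  # ('metered', 8.0)
```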

Ease of licensing

PaaS providers handle all licensing for operating systems, development tools, and everything
else included in their platform.
109
What are the potential drawbacks of using PaaS?
Vendor lock-in

• It may become hard to switch PaaS providers, since the application is built using the vendor's
tools and specifically for their platform.

• Each vendor may have different architecture requirements.

• Different vendors may not support the same languages, libraries, APIs, architecture, or
operating system used to build and run the application.

• To switch vendors, developers may need to either rebuild or heavily alter their application.

110
What are the potential drawbacks of using PaaS?
Vendor dependency

• The effort and resources involved in changing PaaS vendors may make companies more
dependent on their current vendor.

• A small change in the vendor's internal processes or infrastructure could have a huge impact
on the performance of an application designed to run efficiently on the old configuration.

• Additionally, if the vendor changes their pricing model, an application may suddenly become
more expensive to operate.

112
What are the potential drawbacks of using PaaS?
Security and compliance challenges

In a PaaS architecture, the external vendor will store most or all of an application's data, along
with hosting its code.

In some cases the vendor may actually store the databases via a further third party, an IaaS
provider.

Though most PaaS vendors are large companies with strong security in place, this makes it
difficult to fully assess and test the security measures protecting the application and its data.

In addition, for companies that have to comply with strict data security regulations, verifying the
compliance of additional external vendors will add more hurdles to going to market.

113
SaaS
• Software-as-a-service (SaaS), also known as cloud application services, is the most
comprehensive form of cloud computing services, delivering an entire application that is
managed by a provider, via a web browser.

• Software updates, bug fixes, and general software maintenance are handled by the provider
and the user connects to the app via a dashboard or API.

• There’s no installation of the software on individual machines and group access to the
program is smoother and more reliable.

• You’re already familiar with a form of SaaS if you have an email account with a web-based
service like Outlook or Gmail, for example, as you can log into your account and get your
email from any computer, anywhere.

114
SaaS
• SaaS is a great option for small businesses who don’t have the staff or bandwidth to handle
software installation and updates, as well as for applications that don’t require much
customization or that will only be used periodically.

• What SaaS saves you in time and maintenance, however, it could cost you in control, security,
and performance, so it’s important to choose a provider you can trust.

• Dropbox, Salesforce, Google Apps, and Red Hat Insights are some examples of SaaS.

115
Cloud Computing

116
Cloud Computing (continued)

117
Hyperscale Infrastructure is the enabler
27 Regions Worldwide, 22 ONLINE…huge capacity around the world…growing every year

[World map of Azure regions: North Central US (Illinois), Central US (Iowa), South Central US (Texas), West US (California), East US and East US 2 (Virginia), US Gov (Iowa, Virginia), Canada Central (Toronto), Canada East (Quebec City), Brazil South (Sao Paulo State), North Europe (Ireland), West Europe (Netherlands), United Kingdom regions, Germany Central (Frankfurt), Germany North East (Magdeburg), Japan East (Tokyo, Saitama), Japan West (Osaka), East Asia (Hong Kong), SE Asia (Singapore), Australia East (New South Wales), Australia South East (Victoria), India Central (Pune), India South (Chennai), India West (Mumbai), China North (Beijing)*, China South (Shanghai)*; the legend distinguishes operational regions from announced/not-yet-operational ones]

◼ 100+ datacenters
◼ Top 3 networks in the world
◼ 2.5x AWS, 7x Google DC Regions
◼ G Series – Largest VM in World, 32 cores, 448GB Ram, SSD…
* Operated by 21Vianet
How are Microsoft Azure Charges Incurred?
• Pay only for what you use*

• VM usage is by the minute

• For VMs (IaaS only) that are stopped in Microsoft Azure, only storage charges apply

*Microsoft Azure Enterprise Agreement (EA) billing process differs
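The billing rule above can be sketched as follows (hypothetical rates, not actual Azure prices):

```python
def vm_charge(minutes_running, per_minute_rate, storage_gb,
              storage_rate_gb_month, stopped=False):
    """Sketch of the billing rule: VM usage is metered by the minute;
    a stopped IaaS VM accrues only storage charges.
    """
    compute = 0.0 if stopped else minutes_running * per_minute_rate
    storage = storage_gb * storage_rate_gb_month
    return compute + storage

# One hour of runtime plus 100 GB of storage, at made-up rates.
print(vm_charge(60, 0.002, 100, 0.05))
# The same VM while stopped: only the storage component remains.
print(vm_charge(60, 0.002, 100, 0.05, stopped=True))  # 5.0
```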

119
Microsoft Azure Compute

120
Microsoft Azure App Service
• App Service – fully managed platform in Azure for web, mobile and integration scenarios.
This includes
• Web Apps – Enterprise grade web applications

• API Apps – API apps in Azure App Service are used to develop, publish, manage, and
monetize APIs.

• Mobile Apps – Build native apps for iOS, Android, and Windows, or cross-platform
Xamarin or Cordova (PhoneGap) apps

• Logic Apps (preview) - Allows developers to design workflows that articulate intent via a
trigger and series of steps, each invoking an App Service API app
121
Microsoft Azure Cloud Services
• Role – a configuration passed to Azure to tell Azure how many machines of which size and
configuration to build for you
• Web Role – Virtual machine with IIS installed
• Worker Role – Virtual machine without IIS installed
• Ability to mix together multiple role configurations within a single Cloud Service

• Package – Source code binaries are packaged and sent with the configuration file to Azure
• Highly scalable – can exceed number of machines capability of App Service Web Apps
• Allows RDP into individual VMs
• Cloud Services are also used to contain IaaS virtual machines (Classic)

122
High Level view of Virtual Machine Services
• Compute resources
• Virtual Machines
• VM Extensions

• Storage Resources
• Blobs, tables, queues and Files functionality
• Storage accounts (blobs) – Standard & Premium Storage

• Networking Resources
• Virtual networks
• Network interface cards (NICs)
• Load balancers
• IP addresses
• Network Security Groups

123
Management model for PaaS/IaaS

ARM with Resource Providers

124
Introduction to Network Function Virtualization (NFV-VNF)

Overview
1. What is NFV?
2. Why do we need NFV?
3. Concepts, Architecture, Requirements

125
Four Innovations of NFV

1. Software implementation of network functions

2. Network function modules

3. Implementation in virtual machines

4. Standard APIs between modules

126
Network Function Virtualization (NFV)
1. Software-based devices on fast, standard hardware
   Routers, firewalls, Broadband Remote Access Server (BRAS)
   A.k.a. white-box implementation

2. Function modules (both data plane and control plane)
   DHCP (Dynamic Host Configuration Protocol), NAT (Network Address Translation), rate limiting,
   vBase stations

[Diagram: virtualized appliances (DNS, DHCP, CDN; LTE/3G/2G base stations; residential gateway, set-top box, NAT box), each class of appliance running on commodity hardware]

Ref: ETSI, “NFV – Update White Paper V3,” Oct 2014, http://portal.etsi.org/NFV/NFV_White_Paper3.pdf (Must read)

127
NFV (Cont)
3. Virtual Machine implementation
Virtual appliances
All advantages of virtualization (quick provisioning, scalability, mobility, Reduced CapEx,
Reduced OpEx, …)

[Diagram: multiple VMs on a hypervisor, which partitions the underlying hardware]

4. Standard APIs: New ISG (Industry Specification Group) in ETSI (European Telecom
Standards Institute) set up in November 2012

128
Why We need NFV?
1. Virtualization: Use network resource without worrying about where it is
physically located, how much it is, how it is organized, etc.
2. Orchestration: Manage thousands of devices
3. Programmable: Should be able to change behavior on the fly.
4. Dynamic Scaling: Should be able to change size, quantity
5. Automation
6. Visibility: Monitor resources, connectivity
7. Performance: Optimize network device utilization
8. Multi-tenancy
9. Service Integration
10.Openness: Full choice of Modular plug-ins
Note: These are exactly the same reasons why we need SDN.

129
VNF
• NFV Infrastructure (NFVI): Hardware and software required to deploy, manage, and execute
VNFs

• Network Function (NF): Functional building block with well-defined interfaces and well-defined
functional behavior

• Virtualized Network Function (VNF): Software implementation of an NF that can be deployed
in a virtualized infrastructure

• Container: A VNF is independent of the NFVI but needs container software on the NFVI to be
able to run on different hardware

[Stack: VNF runs in a Container, which runs on the NFVI]

130
NFV Concepts
• Container Types: Related to computation, networking, and storage

• VNF Components (VNFC): A VNF may have one or more components

• VNF Set: Connectivity between VNFs is not specified, e.g., residential gateways

• VNF Forwarding Graph: Service chain when network connectivity order is important,
e.g., firewall, NAT, load balancer

[Diagram: a VNF composed of components VNFC 1, VNFC 2, VNFC 3; a forwarding graph with a load balancer distributing traffic across multiple instances of VNFC 1]

Ref: ETSI, “Architectural Framework,” 2015 , http://www.etsi.org/deliver/etsi_gs/NFV/001_099/002/01.02.01_60/gs_NFV002v010201p.pdf


Ref: ETSI, “NFV Terminology for Main Concepts in NFV,” 2015,
http://www.etsi.org/deliver/etsi_gs/NFV/001_099/003/01.02.01_60/gs_NFV003v010201p.pdf
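To illustrate the distinction above with a short sketch (illustrative types, not an ETSI API): a VNF set is unordered, while a forwarding graph is an ordered service chain.

```python
from dataclasses import dataclass, field

@dataclass(frozen=True)
class VNF:
    name: str

# VNF set: connectivity order between the functions is not specified.
vnf_set = {VNF("residential-gateway-1"), VNF("residential-gateway-2")}

@dataclass
class ForwardingGraph:
    """Service chain: traffic must traverse the VNFs in this order."""
    chain: list = field(default_factory=list)

    def then(self, vnf: VNF) -> "ForwardingGraph":
        self.chain.append(vnf)
        return self

    def path(self) -> str:
        return " -> ".join(v.name for v in self.chain)

# Order matters here: firewall before NAT before the load balancer.
sc = ForwardingGraph().then(VNF("firewall")).then(VNF("NAT")).then(VNF("load-balancer"))
print(sc.path())  # firewall -> NAT -> load-balancer
```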

131
NFV Architecture

[ETSI NFV architecture diagram: OSS/BSS at the top; EMS 1–3 manage VNF 1–3, which run over the NFVI (virtual computing, storage, and network on a virtualization layer above physical computing, storage, and network hardware). NFV Management and Orchestration comprises the Orchestrator, VNF Managers, and Virtualized Infrastructure Managers, together with service, VNF, and infrastructure descriptions. Reference points include Os-Ma, Se-Ma, Ve-Vnfm, Or-Vnfm, Or-Vi, Vi-Vnfm, Nf-Vi, Vn-Nf, and VI-Ha.]

Ref: ETSI, “Architectural Framework,” 2015,
http://www.etsi.org/deliver/etsi_gs/NFV/001_099/002/01.02.01_60/gs_NFV002v010201p.pdf

132
NFV Framework Requirements
1. General: Partial or full Virtualization, Predictable performance

2. Portability: Decoupled from underlying infrastructure

3. Performance: as described and facilities to monitor

4. Elasticity: Scalable to meet SLAs. Movable to other servers.

5. Resiliency: Be able to recreate after failure. Specified packet loss rate, calls drops, time to
recover, etc.

6. Security: Role-based authorization, authentication

133
NFV Framework Requirements (Cont)
7. Service Continuity: Seamless or non-seamless continuity after failures or migration

8. Service Assurance: Time-stamp and forward copies of packets for fault detection

9. Energy Efficiency: Should be possible to put a subset of VNFs in a power-conserving sleep
state

10. Transition: Coexistence with legacy systems and interoperability among multi-vendor
implementations

11. Service Models: Operators may use NFV infrastructure operated by other operators

134
Any Function Virtualization (FV)
• Network function virtualization of interest to Network service providers

• But the same concept can be used by any other industry, e.g., financial industry, banks, stock
brokers, retailers, mobile games, …

• Everyone can benefit from:

• Functional decomposition of their industry

• Virtualization of those functions

• Service chaining those virtual functions (VFs)

• A service provided by the next gen ISPs


135
Enterprise App Market: Lower CapEx

[Diagram: Virtual IP Multimedia System]

136
Summary
1. NFV aims to reduce OpEx by automation and scalability provided by implementing
network functions as virtual appliances

2. NFV allows all benefits of virtualization and cloud computing including orchestration, scaling,
automation, hardware independence, pay-per-use, fault-tolerance, …

3. NFV and SDN are independent and complementary. You can do either or both.

4. NFV requires standardization of reference points and interfaces to be able to mix and match
VNFs from different sources

5. NFV can be done now.

6. Several virtual functions have already been demonstrated by carriers.


137
Virtualization in the cloud

Virtualization | Virtualization Technology | What Is Virtualization | Simplilearn - YouTube

138
• Type-1 Hypervisor
  • Windows Sandbox – lightweight, isolated, temporary virtual environment
  • Hyper-V – hypervisor for bare-metal virtualization of Hyper-V virtual machines

• Type-2 Hypervisor
  • Oracle VirtualBox
  • Ubuntu 22.04 LTS (Jammy Jellyfish) with KVM virtualization, running a Debian distro
    inside it with VMM for live monitoring – Nested Paging, KVM paravirtualization
  • Ubuntu 23.04 (Lunar Lobster) to save the state of the virtual machines – Nested Paging,
    KVM paravirtualization
  • Storage virtualization – Oracle Database 23c VM appliance for data persistence – Nested
    Paging, PAE/NX, KVM paravirtualization
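A toy classification of the hypervisors mentioned above (illustrative only; note that KVM's type is debated, since the Linux kernel itself becomes the hypervisor):

```python
# 1 = type-1 (bare-metal), 2 = type-2 (hosted on a conventional OS)
HYPERVISOR_TYPE = {
    "Hyper-V": 1,
    "Windows Sandbox": 1,      # backed by the Hyper-V hypervisor
    "Oracle VirtualBox": 2,
    "KVM": 1,                  # commonly classed as type-1; sometimes argued to be type-2
}

def classify(name: str):
    """Look up a hypervisor's type; None if it is not in the table."""
    return HYPERVISOR_TYPE.get(name)

print(classify("Oracle VirtualBox"))  # 2
```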

139
Text and References
T1 Mastering Cloud Computing: Foundations and Applications Programming
Rajkumar Buyya, Christian Vecchiola, S. Thamarai Selvi

R1 Moving To The Cloud: Developing Apps in the New World of Cloud Computing 1st
Edition
by Dinkar Sitaram (Author), Geetha Manjunath (Author)

140
141
IMP Note to Self

142
Cloud Computing
CS-3

BITS Pilani Faculty Name: Prof. Pradnya Kashikar


pradnyak@wilp.bits-Pilani.ac.in
Today’s session

Contact Hour | List of Topic Title (from content structure in Part A)        | Text/Ref Book / external resource
5            | 2.5. x86 Hardware Virtualization                              | T1: Ch9
6            | 2.6. Manage the resources for the SaaS, PaaS and IaaS models  | T1: Ch9

BITS Pilani, Deemed to be University under Section 3 of UGC Act, 1956


IMP Note to Self

Cloud Computing 3
IMP Note to Students
➢ It is important to know that just logging in to the session does not
guarantee attendance.
➢ Once you join the session, continue till the end to be considered
present in the class.
➢ IMPORTANTLY, you need to make the class more interactive by
responding to the Professor's queries in the session.
➢ Whenever the Professor calls your number / name, you need to
respond; otherwise you will be considered ABSENT.

4
x86 Hardware Virtualization

• An operating system (OS) is designed to run on bare-metal hardware and to fully own that
hardware
• The x86 architecture offers four privilege levels: Ring 0, 1, 2, and 3
  • User applications run in Ring 3
  • The OS runs in Ring 0 (most privileged)
• Challenges of virtualizing x86 hardware
  • Requires placing the virtualization layer below the OS layer
  • It is difficult to capture and translate privileged OS instructions at runtime
• Techniques to virtualize compute
  • Full, para-, and hardware-assisted virtualization

[Diagram: privilege rings from Ring 3 (user apps) down to Ring 0 (OS), sitting on x86 hardware]
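As a practical aside (assuming a Linux host, which is not stated in the slides): you can check whether the CPU advertises the hardware-assisted virtualization extensions (Intel VT-x or AMD-V) by looking for the `vmx` or `svm` flags in `/proc/cpuinfo`:

```python
import os

def hw_virt_support():
    """Return "vmx" (Intel VT-x), "svm" (AMD-V), "" if neither flag is
    present, or None when /proc/cpuinfo is unavailable (non-Linux host)."""
    path = "/proc/cpuinfo"
    if not os.path.exists(path):
        return None
    with open(path) as f:
        tokens = f.read().split()
    for ext in ("vmx", "svm"):
        if ext in tokens:
            return ext
    return ""

print(hw_virt_support())
```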

5
Full Virtualization

• Virtual Machine Monitor (VMM) runs in the privileged Ring 0
• VMM decouples the guest operating system (OS) from the underlying physical hardware
• Each VM is assigned a VMM
  • Provides virtual components to each VM
  • Performs Binary Translation (BT) of non-virtualizable OS instructions
• Guest OS is not aware of being virtualized

[Diagram: Ring 3 holds user apps, Ring 1 the guest OS, and Ring 0 the hypervisor, which runs on the physical x86 hardware]

6
Recap
• What is Virtualization?
• Virtualization is the creation of a virtual resource or device where the framework
divides the resource into one or more execution environments

• Examples of Virtualization
• Virtual drives
• Virtual memory
• Virtual machines
• Virtual servers

• Why is it popular?

• Types of Virtualization

7
Brief History of the x86 Architecture
• The x86 architecture has roots that reach back to 8‐bit processors built by Intel in the late 1970s.
• As manufacturing capabilities improved and software demands increased, Intel extended the 8‐bit architecture
to 16 bits with the 8086 processor.
• Later still, with the arrival of the 80386 CPU in 1985, Intel extended the architecture to 32 bits.
• Intel calls this architecture IA‐32, but the vendor‐neutral term x86 is also common.
• Over the following two decades, the basic 32‐bit architecture remained the same, although successive
generations of CPUs added many new features, including an on‐chip floating point unit, support for large
physical memories through physical address extension (PAE), and vector instructions.
• In 2003, AMD introduced a 64‐bit extension to the x86 architecture, initially dubbed AMD64, and began
shipping 64‐bit Opteron CPUs in 2004.
• Later in 2004, Intel announced its own 64‐bit architectural extension of IA‐32, calling it IA‐32e and later also
EM64T.
• The AMD and Intel 64‐bit extensions are extremely similar, although they differ in some minor ways, one of
which is crucial for virtualization

8
x86 Hardware Virtualization
• Microsoft Virtual Server (2005)
• Came with Microsoft Server 2003
• Did not scale well with 64 bit systems
• Replaced by Hyper-V

• Microsoft Hyper-V (2008 & 2012)


• Hyper-V is short for Hypervisor
• Free release with Server 2008 and 2012
• Best option for Microsoft based virtualization
• Microsoft Hyper-V Server 2016
• Microsoft Hyper-V on Windows Server 2019
Hyper-V Architecture

9
Introduction hyper-converged infrastructure (HCI)

• With VMM 2022, we can manage Azure Stack HCI, 21H2 clusters.

• Azure Stack HCI, version 21H2 is the newly introduced hyper-converged infrastructure (HCI) Operating system
that runs on on-premises clusters with virtualized workloads.

• Most of the operations to manage Azure Stack clusters in VMM are similar to managing Windows Server
clusters.

• Azure Stack HCI is Microsoft’s premier hypervisor offering for running virtual machines on-premises. For
testing and evaluation purposes Azure Stack HCI includes a 60-day free trial and can be downloaded here:
https://azure.microsoft.com/en-us/products/azure-stack/hci/hci-download/

• Microsoft Hyper-V Server 2019 will continue to be supported under its lifecycle policy until January 2029, see
this link for additional information: https://docs.microsoft.com/en-us/lifecycle/products/hyperv-server-2019

10
Virtualization Software
• VMware (Company)
• Releases most popular line of virtualization software
• First company to utilize virtualization on x86 machines
• Software runs on Linux, Windows, and MAC

• vSphere (aka ESX)


• Costly
• High overhead

• VMware Server
• Free
• Not as powerful as ESX or ESXi

11
Introduction to virtualization and resource management in IaaS
The Rise of Resource Overcommitment

• In general terms, it is the allocation of more virtual resources to a machine or a group of


machines than are physically present.

• “Resource overcommitment” is best used to make use of the resources present in a


virtualized cloud infrastructure.

• As most applications will never use all the resources allocated to them at all times.

• The resources provided by a provider will remain idle without “overcommitment.”

12
Resource Management in IaaS
• Resource management is an indispensable way to make use of the underlying hardware of
the cloud effectively.

• A resource manager oversees physical resources allocation to the virtual machines deployed
on a cluster of nodes in the cloud.

• The resource management systems have differing purposes depending upon the
requirements.

• Using physical machines reduces operational costs and can be accomplished through the
overcommitment of resources.

• However, resource overcommitment comes with new challenges such as removal of the
hotspot and the dilemma of where to schedule new incoming VMs to reduce the chances of
the hotspot.
13
Resource Management in IaaS

14
Mitigating the Challenge of Hotspot

• Live migration and memory ballooning of VM help in minimizing hotspots.

• Ballooning can be used if a VM is low on memory to take away some memory from one guest
on the same host, which has some free memory, and provide it to the needy guest.

• But, if none of the guests have enough free memory, then most of the time, the host is
overloaded.

• In that case, a guest has to be migrated from the current host to a different host while
keeping an account of the complete load of the cluster.

15
How does PaaS compare to internally hosted development environments?

• PaaS can be accessed over any internet


connection, making it possible to build an
entire application in a web browser.

• Because the development environment is


not hosted locally, developers can work on
the application from anywhere in the world.

• This enables teams that are spread out


across geographic locations to collaborate.

• It also means developers have less control


over the development environment, though
this comes with far less overhead.
16
What is included in PaaS?
The main offerings included by PaaS vendors are:

• Development tools
• Middleware
• Operating systems
• Database management
• Infrastructure

Management model for PaaS

• Different vendors may include other services as well, but these are the core PaaS services.
17
Development tools in PaaS?
• PaaS vendors offer a variety of tools that are necessary for software development, including a
source code editor, a debugger, a compiler, and other essential tools.
• These tools may be offered together as a framework.
• The specific tools offered will depend on the vendor, but PaaS offerings should include
everything a developer needs to build their application.

Middleware
• Platforms offered as a service usually include middleware, so that developers don't have to
build it themselves.
• Middleware is software that sits in between user-facing applications and the machine's
operating system; for example, middleware is what allows software to access input from the
keyboard and mouse.
• Middleware is necessary for running an application, but end users don't interact with it.

18
Development tools in PaaS?
Operating systems
A PaaS vendor will provide and maintain the operating system that developers work on and the
application runs on.

Databases
• PaaS providers administer and maintain databases.

• They will usually provide developers with a database management system as well.

Infrastructure
• PaaS is the next layer up from IaaS in the cloud computing service model, and everything
included in IaaS is also included in PaaS.

• A PaaS provider either manages servers, storage, and physical data centers, or purchases
them from an IaaS provider.
19
Why do developers use PaaS?
Faster time to market
• PaaS is used to build applications more quickly than would be possible if developers had to
worry about building, configuring, and provisioning their own platforms and backend
infrastructure.

• With PaaS, all they need to do is write the code and test the application, and the vendor
handles the rest.

One environment from start to finish

• PaaS permits developers to build, test, debug, deploy, host, and update their applications all
in the same environment.

• This enables developers to be sure a web application will function properly as hosted before
they release, and it simplifies the application development lifecycle.
20
Price
• PaaS is more cost-effective than leveraging IaaS in many cases.

• Overhead is reduced because PaaS customers don't need to manage and provision virtual
machines.

• In addition, some providers have a pay-as-you-go pricing structure, in which the vendor only
charges for the computing resources used by the application, usually saving customers
money.

• However, each vendor has a slightly different pricing structure, and some platform providers
charge a flat fee per month.

Ease of licensing

PaaS providers handle all licensing for operating systems, development tools, and everything
else included in their platform.
21
What are the potential drawbacks of using PaaS?
Vendor lock-in

• It may become hard to switch PaaS providers, since the application is built using the vendor's
tools and specifically for their platform.

• Each vendor may have different architecture requirements.

• Different vendors may not support the same languages, libraries, APIs, architecture, or
operating system used to build and run the application.

• To switch vendors, developers may need to either rebuild or heavily alter their application.

22
What are the potential drawbacks of using PaaS?
Vendor dependency

• The effort and resources involved in changing PaaS vendors may make companies more
dependent on their current vendor.

• A small change in the vendor's internal processes or infrastructure could have a huge impact
on the performance of an application designed to run efficiently on the old configuration.

• Additionally, if the vendor changes their pricing model, an application may suddenly become
more expensive to operate.

23
What are the potential drawbacks of using PaaS?
Security and compliance challenges

In a PaaS architecture, the external vendor will store most or all of an application's data, along
with hosting its code.

In some cases the vendor may actually store the databases via a further third party, an IaaS
provider.

Though most PaaS vendors are large companies with strong security in place, this makes it
difficult to fully assess and test the security measures protecting the application and its data.

In addition, for companies that have to comply with strict data security regulations, verifying the
compliance of additional external vendors will add more hurdles to going to market.

24
SaaS
• Software-as-a-service (SaaS), also known as cloud application services, is the most
comprehensive form of cloud computing services, delivering an entire application that is
managed by a provider, via a web browser.

• Software updates, bug fixes, and general software maintenance are handled by the provider
and the user connects to the app via a dashboard or API.

• There’s no installation of the software on individual machines and group access to the
program is smoother and more reliable.

• You’re already familiar with a form of SaaS if you have an email account with a web-based
service like Outlook or Gmail, for example, as you can log into your account and get your
email from any computer, anywhere.

25
SaaS
• SaaS is a great option for small businesses who don’t have the staff or bandwidth to handle
software installation and updates, as well as for applications that don’t require much
customization or that will only be used periodically.

• What SaaS saves you in time and maintenance, however, it could cost you in control, security,
and performance, so it’s important to choose a provider you can trust.

• Dropbox, Salesforce, Google Apps, and Red Hat Insights are some examples of SaaS.

26
Cloud Computing

27
Cloud Computing (continued)

28
Hyper scale Infrastructure is the enabler
27 Regions Worldwide, 22 ONLINE…huge capacity around the world…growing every year

North Central US
Illinois
West Europe
United Kingdom
Canada Central Netherlands
Canada East Regions
Central US Toronto
Iowa Quebec City Germany North East
Magdeburg China North *
US Gov Beijing
Iowa
Germany Central Japan East
North Europe China South *
Frankfurt Tokyo, Saitama
Ireland Shanghai
West US East US
California Virginia
India Central Japan West
Pune Osaka
East US 2
South Central US Virginia India South
Texas US Gov Chennai
India West
Virginia
Mumbai East Asia
Hong Kong

SE Asia
Singapore

Australia East
New South Wales

Brazil South
Sao Paulo State Australia South East

◼ 100+ datacenters Victoria

◼ Top 3 networks in the world Operational


◼ 2.5x AWS, 7x Google DC Regions Announced/Not Operational
◼ G Series – Largest VM in World, 32 cores, 448GB Ram, SSD… * Operated by 21Vianet
How are Microsoft Azure Charges Incurred?
• Pay only for what you use*

• VM usage is by the minute

• VMs (IaaS only) that are stopped in Microsoft Azure, only storage charges apply

*Microsoft Azure Enterprise Agreement (EA) billing process differs

30
Microsoft Azure Compute

31
Microsoft Azure App Service
• App Service – fully managed platform in Azure for web, mobile and integration scenarios.
This includes
• Web Apps – Enterprise grade web applications

• API Apps – API apps in Azure App Service are used to develop, publish, manage, and
monetize APIs.

• Mobile Apps - Build native and cross platform apps for iOS, Android, and Windows apps
or cross-platform Xamarin or Cordova (Phonegap) apps

• Logic Apps (preview) - Allows developers to design workflows that articulate intent via a
trigger and series of steps, each invoking an App Service API app
32
Microsoft Azure Cloud Services
• Role – a configuration passed to Azure to tell Azure how many machines of which size and
configuration to build for you
• Web Role – Virtual machine with IIS installed
• Worker Role – Virtual machine without IIS installed
• Ability to mix together multiple role configurations within a single Cloud Service

• Package – Source code binaries are packaged and sent with the configuration file to Azure
• Highly scalable – can exceed number of machines capability of App Service Web Apps
• Allows RDP into individual VMs
• Cloud Services are also used to contain IaaS virtual machines (Classic)

33
High Level view of Virtual Machine Services
• Compute resources
• Virtual Machines
• VM Extensions

• Storage Resources
• Blobs, tables, queues and Files functionality
• Storage accounts (blobs) – Standard & Premium Storage

• Networking Resources
• Virtual networks
• Network interface cards (NICs)
• Load balancers
• IP addresses
• Network Security Groups

34
Management model for PaaS/IaaS

ARM with Resource Providers

35
Introduction to Network Function Virtualization (NFV / VNF)

Overview:
1. What is NFV?
2. Why do we need NFV?
3. Concepts, Architecture, Requirements

36
Four Innovations of NFV

1. Software implementation of network functions

2. Network Function Modules

3. Implementation in Virtual Machines

4. Standard APIs between Modules

37
Network Function Virtualization (NFV)
1. Fast standard hardware Software based Devices
Routers, Firewalls, Broadband Remote Access Server (BRAS)
A.k.a. white box implementation

2. Function Modules (Both data plane and control plane)


DHCP (Dynamic Host Configuration Protocol), NAT (Network Address Translation), rate limiting, virtual base stations (LTE/3G/2G), residential gateways, set-top boxes, DNS, CDN

[Figure: each of these functions packaged as a software module running over standard hardware]

Ref: ETSI, “NFV – Update White Paper V3,” Oct 2014, http://portal.etsi.org/NFV/NFV_White_Paper3.pdf (Must read)

38
NFV (Cont)
3. Virtual Machine implementation
Virtual appliances
All advantages of virtualization (quick provisioning, scalability, mobility, Reduced CapEx,
Reduced OpEx, …)

[Figure: VM | VM | VM running over a hypervisor, which partitions the physical hardware]

4. Standard APIs: New ISG (Industry Specification Group) in ETSI (European Telecom
Standards Institute) set up in November 2012

39
Why We need NFV?
1. Virtualization: Use network resource without worrying about where it is
physically located, how much it is, how it is organized, etc.
2. Orchestration: Manage thousands of devices
3. Programmable: Should be able to change behavior on the fly.
4. Dynamic Scaling: Should be able to change size, quantity
5. Automation
6. Visibility: Monitor resources, connectivity
7. Performance: Optimize network device utilization
8. Multi-tenancy
9. Service Integration
10.Openness: Full choice of Modular plug-ins
Note: These are exactly the same reasons why we need SDN.

40
VNF
• NFV Infrastructure (NFVI): Hardware and software required to deploy, manage and execute VNFs

• Network Function (NF): Functional building block with well-defined interfaces and well-defined functional behavior

• Virtualized Network Function (VNF): Software implementation of an NF that can be deployed on a virtualized infrastructure

• Container: A VNF is independent of the NFVI but needs container software on the NFVI to be able to run on different hardware platforms
VNF

Container

NFVI

41
NFV Concepts
• Containers Types: Related to Computation, Networking, Storage

• VNF Components (VNFC): A VNF may have one or more components

• VNF Set: Connectivity between VNFs is not specified, e.g., residential gateways

• VNF Forwarding Graph: Service chain when network connectivity order is important,
e.g., firewall, NAT, load balancer

[Figure: a VNF forwarding graph VNFC 1 → VNFC 2 → VNFC 3, and a load balancer distributing across multiple instances of VNFC 1]

Ref: ETSI, “Architectural Framework,” 2015 , http://www.etsi.org/deliver/etsi_gs/NFV/001_099/002/01.02.01_60/gs_NFV002v010201p.pdf


Ref: ETSI, “NFV Terminology for Main Concepts in NFV,” 2015,
http://www.etsi.org/deliver/etsi_gs/NFV/001_099/003/01.02.01_60/gs_NFV003v010201p.pdf

42
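The forwarding-graph idea above — a service chain where the network connectivity order matters — can be pictured as plain function composition. A toy sketch (the packet fields, addresses and function names are invented for illustration, not part of the ETSI specification):

```python
# Hypothetical sketch: a VNF forwarding graph as an ordered chain of
# functions, each taking and returning a "packet" dict (or None = drop).

def firewall(pkt):
    # Drop anything not aimed at the allowed port.
    if pkt["dst_port"] != 80:
        return None
    return pkt

def nat(pkt):
    # Rewrite the private source address to a public one.
    pkt["src_ip"] = "203.0.113.10"
    return pkt

def load_balancer(pkt, backends=("10.0.0.1", "10.0.0.2")):
    # Pick a backend by hashing the source address (sticky per client).
    pkt["dst_ip"] = backends[hash(pkt["src_ip"]) % len(backends)]
    return pkt

def run_chain(pkt, chain):
    # Order matters in a forwarding graph: firewall -> NAT -> LB.
    for vnf in chain:
        pkt = vnf(pkt)
        if pkt is None:          # dropped somewhere in the chain
            return None
    return pkt

chain = [firewall, nat, load_balancer]
out = run_chain({"src_ip": "192.168.1.5", "dst_port": 80}, chain)
print(out["src_ip"])   # rewritten by the NAT step -> 203.0.113.10
```

A VNF Set, by contrast, would simply be the unordered collection `{firewall, nat, load_balancer}` with no `run_chain` ordering imposed.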
NFV Architecture

[Figure: ETSI NFV reference architecture]
• OSS/BSS at the top, connected to NFV Management and Orchestration (MANO)
• EMS 1–3 manage VNF 1–3; the VNFs run on the NFVI
• NFVI: virtual computing, virtual storage and virtual network, provided by a virtualization layer over computing, storage and network hardware
• MANO comprises the Orchestrator, VNF Managers and Virtualized Infrastructure Managers, driven by service, VNF and infrastructure descriptions
• Reference points: Os-Ma (OSS/BSS–MANO), Se-Ma (descriptions–MANO), Ve-Vnfm (EMS/VNF–VNF Manager), Or-Vnfm (Orchestrator–VNF Manager), Or-Vi (Orchestrator–VIM), Vi-Vnfm (VNF Manager–VIM), Nf-Vi (NFVI–VIM), Vn-Nf (VNF–NFVI), Vl-Ha (virtualization layer–hardware)
• The framework distinguishes execution reference points, main NFV reference points and other NFV reference points

Ref: ETSI, “Architectural Framework,” 2015,
http://www.etsi.org/deliver/etsi_gs/NFV/001_099/002/01.02.01_60/gs_NFV002v010201p.pdf

43
NFV Framework Requirements
1. General: Partial or full Virtualization, Predictable performance

2. Portability: Decoupled from underlying infrastructure

3. Performance: as described and facilities to monitor

4. Elasticity: Scalable to meet SLAs. Movable to other servers.

5. Resiliency: Be able to recreate after failure. Specified packet loss rate, calls drops, time to
recover, etc.

6. Security: Role-based authorization, authentication

44
NFV Framework Requirements (Cont)
7. Service Continuity: Seamless or non-seamless continuity after failures or migration

8. Service Assurance: Time-stamp and forward copies of packets for fault detection

9. Energy Efficiency: Should be possible to put a subset of VNFs in a power-conserving sleep state

10. Transition: Coexistence with legacy equipment and interoperability among multi-vendor implementations

11. Service Models: Operators may use NFV infrastructure operated by other operators

45
Any Function Virtualization (FV)
• Network function virtualization of interest to Network service providers

• But the same concept can be used by any other industry, e.g., financial industry, banks, stock
brokers, retailers, mobile games, …

• Everyone can benefit from:

• Functional decomposition of their industry

• Virtualization of those functions

• Service chaining of those virtual functions (VFs)

• A service provided by the next gen ISPs


46
Enterprise App Market: Lower CapEx

[Figure: example enterprise app – a virtual IP Multimedia System]

47
Summary
1. NFV aims to reduce OpEx by automation and scalability provided by implementing
network functions as virtual appliances

2. NFV allows all benefits of virtualization and cloud computing including orchestration, scaling,
automation, hardware independence, pay-per-use, fault-tolerance, …

3. NFV and SDN are independent and complementary. You can do either or both.

4. NFV requires standardization of reference points and interfaces to be able to mix and match
VNFs from different sources

5. NFV can be done now.

6. Several virtual functions have already been demonstrated by carriers.


48
Virtualization in the cloud

What is a Hypervisor? - YouTube

Hypervisors - Georgia Tech - Advanced Operating Systems - YouTube

Ballooning - Georgia Tech - Advanced Operating Systems - YouTube

x86 Virtualization in the Past - YouTube

Full Virtualization - Georgia Tech - Advanced Operating Systems - YouTube

Para Virtualization - Georgia Tech - Advanced Operating Systems - YouTube

49
• Type-1 Hypervisor
• Windows Sandbox – lightweight, isolated, temporary virtual environment
• Hyper-V – hypervisor for bare-metal virtualisation, running Hyper-V virtual machines

• Type-2 Hypervisor
• Oracle VirtualBox
• Ubuntu 22.04 LTS Jammy Jellyfish – with KVM virtualization, running a Debian distro inside it with VMM for live monitoring – Nested Paging, KVM paravirtualization
• Ubuntu 23.04 Lunar Lobster – to save the state of the virtual machines – Nested Paging, KVM paravirtualization
• Storage virtualisation – Oracle Database 23c VM appliance for data persistence – Nested Paging, PAE/NX, KVM paravirtualization

50
Text and References
T1 Mastering Cloud Computing: Foundations and Applications Programming
Rajkumar Buyya, Christian Vecchiola, S.Thamarai Selvi

R1 Moving To The Cloud: Developing Apps in the New World of Cloud Computing 1st
Edition
by Dinkar Sitaram (Author), Geetha Manjunath (Author)

51
Thank You !

53
Cloud Computing
CS - 4

BITS Pilani Faculty Name: Prof. Pradnya Kashikar


pradnyak@wilp.bits-Pilani.ac.in
Today’s session

Contact List of Topic Title Text/Ref Book/


Hour (from content structure in Part A) external resource
7 3. Infrastructure as a Service T1: Ch2
3.1. Introduction to IaaS
3.2. IaaS examples
3.3. Reference Model of AWS
8 3.4. Amazon cloud services - Compute, Database, T1: Ch2
Storage
3.5. Region Vs Availability zones

BITS Pilani, Deemed to be University under Section 3 of UGC Act, 1956


Introduction to IaaS
Cloud Computing Services Models - IaaS PaaS SaaS Explained - YouTube

 IaaS – Infrastructure (computing and storage resources) delivered as a Service.
 Under this model, servers and other storage resources are made available by the service provider, whereas how to harness them is left to the user.
 While this offers flexibility to the users, the burden of maintaining the OS and middleware falls on them.
 PaaS and SaaS could be used for these!
IaaS Explained - YouTube 5

BITS Pilani
Storage as a Service StaaS
 Storage as a Service is one of the two major
services offered by IaaS .
 It includes
 simple storage service which consists of
highly reliable and available storage.
Example - Amazon S3
 Simple & relational database services.
Example – Amazon SimpleDB and RDS
(Relational Database Service) which is
MySQL instance over a cloud.

BITS Pilani
Data Storage Needs
 Data storage requirements are ever increasing in the enterprise/industry. Both –
 Structured data, like relational databases, which are vital for e-commerce businesses.
 Unstructured data in various documents (plans, strategy, etc.), which as per the process requires huge storage even in a small company.
 Enterprises may also have to store objects for their customers, e.g. an online photo album.
 The data needs to be protected – both security and availability are to be provided on demand, in spite of various HW, network and SW failures. 7

BITS Pilani
Compute as a service
 This is the second of the two major services
provided by IaaS.
 It makes extensive use of virtualization
technique to provide the computing
resources requested by the user
 Typically one or more Virtual computers
(networked together) are provided to the
user
 These could be increased or decreased as
per the need from time to time
 Sudden increase in traffic can be taken
care of 8

BITS Pilani
IaaS model

StaaS

CaaS Virtual N/W

BITS Pilani
Infrastructure as a Service (IaaS)
Types of IaaS resources

Compute
Cloud compute resources include central processing units
(CPUs), graphical processing units (GPUs), and internal
memory (RAM) that computers require to perform any task.

IaaS users request compute resources in the form of virtual


machines or cloud instances. Cloud services then provision the
required capacity, and you can run your planned tasks in this
virtual environment.
Types of IaaS resources
Storage
IaaS providers offer three types of data storage resources:

1.Block storage stores data in blocks like an SSD or hard drive.

2.File storage stores data as files like in a NAS.

3.Object storage stores data as objects similar to those in


object-oriented programming.
Types of IaaS resources

Networking
IaaS infrastructure also includes networking resources like
routers, switches, and load balancers.

IaaS models work by virtualizing the networking functions of


these appliances in software. For example, AWS Networking to
run secure and high-performing cloud computing networks for
the organization.
Key Benefits of IaaS

✓ Speed
✓ Performance
✓ Reliability
✓ Backup and Recovery
✓ Competitive Pricing
Security and compliance responsibilities shared
under the IaaS model.
IaaS providers take full responsibility for securing the infrastructure they provide
for your cloud applications. They manage security at all levels, such as:

❑ Physical security of the data center premises using measures like security
cameras, guards, and surveillance.

❑ Infrastructure security through restricted access and regular maintenance of


the provider’s infrastructure.

❑ Data security with very strict controls, encryption, and third-party auditing to
meet all compliance requirements.
Key IaaS Services
High Performance Computing (HPC): The platform can execute calculations in
the quadrillions per second, versus a computer with a 3Ghz chip that processes
three billion calculations per second.

Depending on the business needs, HPC services are available to meet the
requirements of scientific research or an engineering firm’s complex
calculations. This is a force multiplier when it comes to calculations; the faster
calculations are completed, the less an organization pays for required services.
Key IaaS Services
Edge Computing: The platform is colocated near a business, a user, or a data
source for faster and more reliable services. Edge computing offers a business a
hybrid-like solution, with IaaS services colocated with the business or processed
data.

By placing edge computing closer to the using entity, data is processed faster,
offering more flexibility with hardware and software configurations. This also
increases reliability.
Key IaaS Services
Bare metal services: A single-tenant server that is managed by the tenant to
meet their specific needs. The OS is installed directly onto the server for better
performance. Bare metal services are generally used by healthcare providers,
financial institutions, and retail businesses.

Bare metal services can be deployed in a business data center that uses the
service, or a colocation data center. Businesses that use this type of service must
meet stringent requirements for regulatory compliance, privacy, and security.
Key IaaS Services
Resource auto-scaling: An automated process that occurs in IaaS as client
requests or transactions increase or decrease. Resource auto-scaling is a must-
have feature that allows the IaaS-provided services to automatically adjust to a
business’s on-demand needs.

The increase or decrease of IaaS-provided services should also be reflected in


the usage tracking statistics.
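The auto-scaling decision described above can be reduced to a tiny sizing function: pick an instance count from current demand, bounded by configured limits. A minimal sketch (the thresholds and capacity figures are made-up illustrations, not any provider's actual policy):

```python
# Illustrative sketch of resource auto-scaling: derive an instance count
# from the current request rate, clamped between min/max bounds.
import math

def desired_instances(requests_per_sec, capacity_per_instance=100,
                      min_instances=1, max_instances=10):
    # Round up: a fractional instance still needs a whole VM.
    need = math.ceil(requests_per_sec / capacity_per_instance)
    return max(min_instances, min(max_instances, need))

print(desired_instances(30))    # light load -> floor of 1
print(desired_instances(450))   # 450 rps / 100 rps per VM -> 5
print(desired_instances(5000))  # clamped at the configured maximum, 10
```

The same numbers would feed the usage-tracking statistics, since every scale-up or scale-down changes what is billed.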
Key IaaS Services
Usage tracking: Historical record of used IaaS-provided services shown via a
dashboard, metrics, and reporting. Utilizing the IaaS IT infrastructure as on-
demand services fluctuate is one of the main reasons for selecting an IaaS
provider.

Without the ability to track usage of IaaS-provided services, there is no way to


project the cost of the used services. As a business builds a historical record of
the services used, the business can better approximate the services needed and
take advantage of virtual machine (VM) technology to lower costs.
IaaS Provider Comparison
Major IaaS Providers
Amazon EC2
A useful feature of Amazon Elastic Cloud Compute (EC2) is
Amazon’s pre-defined and pre-configured templates. These
help users build virtual servers using the Amazon Machine
Images (AMIs) to emulate a virtual server as an application
server or operating system to meet a business need.

Amazon provides numerous tools to help create useful AMIs.


IaaS Provider – Amazon EC2 Console Dashboard

Source: CIO Insight (https://www.cioinsight.com/cloud-virtualization/iaas-providers/


Last Seen : 17th Apr, 2023
Major IaaS Providers
Google Compute Engine
Google Compute Engine’s migration tool is a key feature that
several customers state that it is easy to learn and very
effective. Google’s data analytics is another strong feature that
customers like.

Google’s real-time processing tools and Google Cloud


Dataflow are both features that draw customers to use Google
cloud-based solutions.
Major IaaS Providers
IBM Cloud Private
IBM Cloud Private is different from the other products evaluated here.
This IaaS solution is on-premises, or at the business location in a data
center-type environment. The strength of this product is security, with
the ability to still host servers in a cloud environment.

Businesses using a private cloud have full control over their hardware
and software choices, and they have the ability to customize their
hardware or software — unlike when using a public IaaS provider. IBM
uses Kubernetes to extend its cloud applications to public cloud service
providers and automatically manages the RAM, storage, and CPU usage
as necessary.
Features of the Best IaaS Providers
Ultimately, the IaaS provider you select will have differentiating
features that provide your business an optimal IaaS solution for your
specific needs. However, every IaaS provider you consider must offer
these basic services:

•Server clustering and load balancing: The ability to immediately meet


increased customer demands and spread the work load amongst the
clustered servers.

•Dynamic scaling: The ability to automatically adjust resources based on demand, including real-time cost, as IaaS services scale up or down.
Features of the Best IaaS Providers
•Platform virtualization technology: The ability to create virtualized IT
devices or resources.

•GUI and API-based access: GUI provides a user-friendly interface to


computer information, and allows APIs to communicate with other
computers or programs.

•Monitoring capability: The ability to oversee the operational


performance of IaaS services.
Advantage of IaaS
The biggest advantage of IaaS is :

➢ Allows business to scale up or down due to on-demand


needs. This automatically occurs with resource auto-scaling.

➢ Meeting customer demands based on usage helps to


manage the overall business budget by not buying more
than what is needed, or losing revenue from not having
enough IT network infrastructure to immediately meet an
increase in customer demand.
Disadvantage of IaaS
Disadvantages of IaaS
Businesses that are highly dependent on a public IaaS
provider may lose some control over their information. A
hybrid or private IaaS provider may provide you more
control over your information.

Further, the ability to customize is not as flexible in an IaaS


virtualized server environment, as the provider has total
control of the hardware.
Disadvantage of IaaS
Disadvantages of IaaS
Another downside is that the broadband connectivity at the
work site or at a remote location may be inadequate.
Including an analysis of the minimum bandwidth
requirements in the service-level agreement can reduce any
connectivity issues.

In general, the pricing structure of IaaS may be difficult to


understand, as on-demand services scale up or down.
Disadvantage of IaaS
Disadvantages of IaaS
Most importantly, any agreed-upon SLA that is not meeting
the business needs is problematic if not properly addressed.

A thoroughly written SLA that addresses any disadvantages


can minimize these issues. It’s important to address unique
business needs in the SLA.
Critical Factors for Selecting an IaaS Provider
As you evaluate IaaS providers, consider the type of business
you represent.

❑ If multiple departments make up business, get input from


each department to make sure all issues and concerns are
considered.

❑ Try to identify a key feature that will address all the input
received from each department.

❑ Starting with a comprehensive SLA to address the business


requirements
Amazon Simple Storage Service S3

What is AWS? | Amazon Web Services - YouTube

Amazon Simple Storage Service S3


AWS vs Azure vs GCP | Amazon Web Services vs Microsoft Azure vs Google Cloud Platform | Simplilearn -
YouTube

33

BITS Pilani
Amazon Simple Storage Service S3
• This is highly reliable, scalable, available and
fast storage in the cloud for storing and
retrieving data using simple web services.
• There are three ways of accessing S3
• AWS(Amazon Web Service) console
• REST-ful APIs with HTTP operations like
GET, PUT, DELETE and HEAD
• Libraries and SDKs that abstract these
operations
• There are several S3 browsers available to access the storage and use it as though it were a local directory/folder 34

BITS Pilani
Using Amazon S3
Let’s consider that a user wants to back up
(upload) some data for later need.
1. Sign up for S3 at http://aws.amazon.com/s3/ to get AWS access and secret keys, similar to a user-id and password (note: these keys are for the complete Amazon solution, not just S3)
2. Use these credentials to sign in to AWS
Management console
http://console.aws.amazon.com/s3/home
3. Create a bucket giving a name and
geographical location. (Buckets can store
objects/files)
35

BITS Pilani
Using Amazon S3 Contd..
4. Press upload button and follow instructions to
upload the file/object.
5. Now the file is backed up and is available for
use/sharing.

This could also be achieved programmatically if


necessary by including these steps at
appropriate place(s) in the code.

36

BITS Pilani
Buckets, objects and keys
 Files are objects in S3. Objects are referred to
with keys – an optional directory path name
followed by object name. Objects are replicated
across geographical locations in multiple places
to protect against failures but the consistency is
not guaranteed unless versioning is enabled.
 Objects can be up to 5 terabytes in size and a
bucket can have unlimited number of objects.
 Objects have to be stored in buckets which
have unique names and a location (region)
associated with it. There can be 100 buckets per
account
37

BITS Pilani
Accessing objects in S3
 Each object can be accessed by its key via
corresponding URL path of AWS console
http://<bucket name>.S3.amazonaws.com/<key>
Or
http://S3.amazonaws.com/<bucketname>/<key>

• Note that key can be “proj1/file1” but that is just


a label not the dir structure or hierarchy. There
is no hierarchy in S3.
• Also anyone can access the object if it is Public.

38

BITS Pilani
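The two URL forms above can be sketched as simple string builders. The endpoint strings here mirror the slide; real S3 endpoints also vary by region, which this toy ignores:

```python
# Sketch of the two S3 addressing styles: virtual-hosted (bucket in the
# hostname) and path-style (bucket in the path).

def virtual_hosted_url(bucket, key):
    return f"http://{bucket}.s3.amazonaws.com/{key}"

def path_style_url(bucket, key):
    return f"http://s3.amazonaws.com/{bucket}/{key}"

# The "/" in a key like "proj1/file1" is just part of the label --
# S3 itself has no real directory hierarchy.
print(virtual_hosted_url("mybucket", "proj1/file1"))
print(path_style_url("mybucket", "proj1/file1"))
```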
Accessing private objects in S3
 Users can set permissions for others by right
clicking the object in AWS console and granting
anonymous read permissions for example static
read for a web site.
 Alternately they can select object > go to object
menu and click on the “Make Public” option.
 They can give permission to specific users to
read/modify object, by clicking on “properties”
option and then mentioning the email ids of
those who are allowed to access/read/write.

39

BITS Pilani
S3 access security contd..
• User can allow others to add/pick up
objects to/from their buckets . This is specially
useful when clients want some document to get
modified.
• Clients can put the doc/object in a bucket for
modification and after it is modified, collect it
back from the same or another bucket. If the
object/doc is put in the same bucket , then Key
is changed to differentiate modified doc/object
from the earlier one.

40

BITS Pilani
S3 access security contd..
 There is yet another way to ensure security of
S3 objects. User can turn “Logging On” for a
bucket at the time of its creation or do it from
AWS management console.

 This creates detailed access logs which allow


one to see who all accessed, which objects, at
what time, from which IP address and what all
operations were performed.

41

BITS Pilani
Data protection
• One way of ensuring against loss of data is to
create replicas across multiple storage devices
which helps in two replica failures also. This is
the default mechanism.
• User could request RRS – reduced redundancy
storage for non critical data under which only
two replicas are created.
• S3 does not guarantee consistency of data
across replicas. Versioning when enabled can
take care of inadvertent data loss and also
make it possible to revert to previous version.
42

BITS Pilani
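The versioning behaviour described above — every overwrite keeps the previous value recoverable — can be illustrated with a minimal sketch (class and method names are invented, not the S3 API):

```python
# Sketch of why versioning protects against inadvertent data loss:
# every PUT appends a new version instead of destroying the old one.

class VersionedBucket:
    def __init__(self):
        self.versions = {}            # key -> list of values, oldest first

    def put(self, key, value):
        self.versions.setdefault(key, []).append(value)

    def get(self, key, version=-1):
        # Default is the latest version; an index reverts to older data.
        return self.versions[key][version]

b = VersionedBucket()
b.put("doc", "v1 contents")
b.put("doc", "oops, overwritten")
print(b.get("doc"))             # the accidental overwrite (latest)
print(b.get("doc", version=0))  # revert to the earlier version
```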
Large objects
• S3 objects can be up to 5 terabytes which is
more than the size of an uncompressed 1080p
HD movie.
• In case the need for still larger storage arises,
the user will have to split it into smaller chunks,
store them separately and re-compose at the
application level.
• Uploading large objects does take time in spite
of the large bandwidth. Moreover if a failure
occurs, the whole process has to be repeated.

43

BITS Pilani
Uploading large objects
• To get over this difficulty multi-part upload is
done. This is an elegant solution which not only
splits the object into multiple parts (10000 parts
per object in S3) to upload independently but
also uses the network bandwidth optimally by
parallelizing the uploads. Very efficient solution.
• Since the uploads of the parts are independent
any failure issue in any one part can be rectified
by repeating only that part upload, thereby a
tremendous saving of time!

44

BITS Pilani
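The splitting step of a multi-part upload can be sketched as a part planner: divide the object into fixed-size chunks (S3 allows up to 10,000 parts per object), each of which can then be uploaded, and if necessary retried, independently. A minimal sketch with illustrative sizes:

```python
# Sketch of multi-part upload planning: compute (part_number, offset,
# length) triples so each part can be uploaded and retried on its own.
import math

MAX_PARTS = 10_000   # S3's per-object part limit, as noted above

def plan_parts(object_size, part_size):
    n = math.ceil(object_size / part_size)
    if n > MAX_PARTS:
        raise ValueError("part size too small for this object")
    return [(i + 1, i * part_size,
             min(part_size, object_size - i * part_size))
            for i in range(n)]

parts = plan_parts(object_size=2_500, part_size=1_000)
print(parts)   # [(1, 0, 1000), (2, 1000, 1000), (3, 2000, 500)]
```

Because the parts are independent, a failure in part 2 means re-uploading only bytes 1000–1999, not the whole object — the time saving the slide describes.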
Amazon SimpleDB (SDB)
• This is a simple NoSQL data store interface of
key-value pair, which allows storage and
retrieval of attributes based on the key. A simple
alternative to Relational database.
• SDB is organized into domains. Each item in a
domain must have a unique key provided at the
time of creation. It can have up to 256 attributes
in the form of name-value, similar to a row with
primary key in RDBMS. But in SDB an attribute
can be multi-valued and all of them together
stored against the same attribute name.
45

BITS Pilani
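The key/attribute model above — schema-less items where one attribute name can hold several values — can be illustrated with a small in-memory store (a sketch of the idea only, not the SimpleDB API):

```python
# Schema-less, multi-valued key/attribute store in the spirit of
# SimpleDB: each item key maps attribute names to SETS of values.
from collections import defaultdict

class Domain:
    def __init__(self):
        self.items = defaultdict(lambda: defaultdict(set))

    def put(self, item_key, name, value):
        # New attribute names can appear at any time -- no schema.
        self.items[item_key][name].add(value)

    def get(self, item_key, name):
        return sorted(self.items[item_key][name])

d = Domain()
d.put("song1", "artist", "A. R. Rahman")
d.put("song1", "tag", "classical")
d.put("song1", "tag", "film")        # multi-valued attribute
print(d.get("song1", "tag"))          # ['classical', 'film']
```

Contrast with an RDBMS row, where "tag" would need a separate table (or a denormalized column) to hold multiple values per primary key.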
SDB admin
• SDB has many features which increase its reliability and availability
• Automatic resource addition proportional to
the request rate
• Automatic indexing of the attributes for quick
retrieval
• Automatic replicating across different
locations(availability)
• Fields can be added to the dataset anytime
since SDB is schema-less; that makes it
scalable
46

BITS Pilani
Amazon Relational DB (RDB)
• RDB is a traditional DB abstraction in the cloud
• MySQL instance
• RDB instance can be created using the tab in the
AWS management console
• AWS console allows the user to manage the RDB
• How often the backup should happen, how long
should the backup data be available etc can be
configured
• Snapshots of DB can be taken from time to time
• Using Amazon APIs user can build a custom tool
to manage the data if needed
47

BITS Pilani
Compute as a service – EC2

Compute as a service – EC2

48

BITS Pilani
Compute as a service – EC2
Amazon Elastic compute Cloud (EC2) is a
unique service provider which allows an
enterprise/ user to have virtual servers with virtual
storage and virtual networking to satisfy the
diverse needs -
• The needs of the enterprise vary among high
storage and/or high end computing at different
times for different applications
• Networking/clustering needs as well as
environment needs also vary depending on the
work context
49

BITS Pilani
Amazon EC2

Just as with S3, EC2 also can be accessed via


Amazon Web services AWS console
• EC2 console dashboard can be used to create
an instance ( compute resource ), check status
and also to terminate the instance
• Clicking on Launch instance will take you to a
list of supported OS images - Amazon Machine
Images(AMI) from which one can choose
• Once you choose OS, the wizard pops up to
take your choice of version, whether you want it
monitored etc
50

BITS Pilani
Amazon EC2 contd ..
• Next a user has to create a key-value pair to
securely connect to the instance once it’s
operative
• Create a key value pair and save to the file in a
safe place. User can reuse the same for multiple
instances
• Now security groups for the instance need to be
set so that certain ports can be kept open or
blocked depending on the context
• When the instance is launched you get the DNS
name of the server which can be used for
remote login as if it were on the same network 51

BITS Pilani
Accessing EC2
• Use key value pair to login AWS console; get
the Windows admin password from the AWS
instance screen to remotely connect to the
instance/ compute resource
• For a linux m/c from the directory where the key
value file is saved give the following command
ssh -i <keyvaluefile> ec2-67-202-62-112.compute-1.amazonaws.com
follow a few confirmation screens and one is
logged into the compute resource remotely

52

BITS Pilani
Accessing EC2 contd ..

• It’s possible to access EC2 to get compute


resources using command line utilities also for
which you need to
1. Download the zip, unpack, set the
environment variables
2. Set up security environment by getting
x.509 certificate and private key; copy in
appropriate dir
3. Set region for the virtual resources; the list
of regions can be seen and selection made.
Pricing depends upon this selection.
53

BITS Pilani
EC2 computing resources request
• EC2 computing resources are requested in
terms of EC2 Compute Unit (CU) for computing
power, like we use bytes for memory
• One EC2 CU is 1.0-1.2 GHz Xeon processor
• There are some Standard Instances families
with configuration suitable for certain needs
hence recommended by Amazon
• Also available are High memory instances, High
CPU instances, Cluster compute instances for
High performance or Graphic processing

54

BITS Pilani
Configuration of EC2 instance
• After getting the resources of required CU, one
needs to configure OS by selecting from the
available images –
AMI Amazon Machine Images

• In case some other software is needed, it can


be installed on top of the OS image and then
this can be stored as another AMI or alternately
VMware image can be imported as AMI.

55

BITS Pilani
Region and availability

• EC2 also has regions (like S3) which need to be


set( The list of regions is available to select
from)
• There are multiple isolated virtual data centers
called availability zones corresponding to each
region for protection against failures
• One can have the instance placed in two
availability zones of the required region to
ensure availability and tolerance against failure
in any one zone
56

BITS Pilani
Some more Configuration of EC2 instance

 Load balancing and scalability


Elastic load balancer is a service EC2 cloud
offers which distributes the load among the
instances
It can be further configured to route requests from
the same client to the same server by timer and
application controlled sessions
It can sense the failover and spawn a new server if
the load is high for other servers
Load balancer also scales up or down the number
of servers based on the number of requests
(Hence the name Elastic) 57

BITS Pilani
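The elastic load balancer behaviour described above — distribute requests round-robin, skip failed servers, and grow the pool — can be sketched in a few lines (server names and the scaling trigger are invented for illustration):

```python
# Sketch of an elastic load balancer: round-robin over healthy servers,
# with failover handled by marking servers down and adding replacements.

class ElasticLB:
    def __init__(self, servers):
        self.servers = list(servers)
        self.healthy = set(servers)
        self._i = 0

    def mark_down(self, server):
        self.healthy.discard(server)      # failed a health check

    def add_server(self, server):
        self.servers.append(server)       # scale out / spawn replacement
        self.healthy.add(server)

    def route(self):
        # Plain round-robin, skipping servers that failed health checks.
        for _ in range(len(self.servers)):
            s = self.servers[self._i % len(self.servers)]
            self._i += 1
            if s in self.healthy:
                return s
        raise RuntimeError("no healthy servers")

lb = ElasticLB(["vm-a", "vm-b"])
print(lb.route(), lb.route(), lb.route())   # vm-a vm-b vm-a
lb.mark_down("vm-a")                         # sense the failover
lb.add_server("vm-c")                        # spawn a new server
print(lb.route(), lb.route())                # vm-b vm-c
```

A real ELB adds sticky sessions (timer- and application-controlled) and drives the add/remove decisions from the request rate — hence "Elastic".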
EC2 storage resources
There are two types of block storage available for
EC2 that appear as disk storage
• Elastic block storage(EBS) exists independent
of any instance. The size can be configured and
attached to one or more EC2 instances. It is
data persistent.
• Instance storage is configured for EC2, which
can be attached to one and only one instance.
It’s not persistent, ceases to exist when instance
is terminated. So if you need persistence create
instance storage using S3.
58

BITS Pilani
EC2 Networking resources
• Networking between EC2 instances and also
with outside world via gateways/firewalls will
have to happen.
• EC2 instances therefore need both public and
private addresses.
• Private addresses are used for communication
within EC2 , like intranet; for any communication
between EC2 instances since these addresses
can be resolved quickly using NAT- network
Address Translation.

59

BITS Pilani
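The NAT step mentioned above amounts to a translation table between (private address, port) pairs and ports on the shared public address. A minimal sketch (addresses and the port-allocation scheme are illustrative):

```python
# Sketch of NAT for EC2-style private addressing: outbound traffic from
# a private IP is mapped onto a port of the gateway's public IP.

class Nat:
    def __init__(self, public_ip):
        self.public_ip = public_ip
        self.table = {}           # (private_ip, src_port) -> public port
        self.next_port = 40000

    def outbound(self, private_ip, src_port):
        key = (private_ip, src_port)
        if key not in self.table:           # first packet of this flow
            self.table[key] = self.next_port
            self.next_port += 1
        return (self.public_ip, self.table[key])

nat = Nat("203.0.113.7")
print(nat.outbound("10.0.0.5", 12345))   # ('203.0.113.7', 40000)
print(nat.outbound("10.0.0.5", 12345))   # same flow -> mapping reused
```

Because the table lookup is a constant-time operation, private-to-private traffic inside the cloud resolves quickly, as the slide notes.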
EC2 N/W resources contd ..

• Public addresses can be resolved with DNS


server and used for communication with
addresses outside the cloud which is routed via
gateway.

• Similarly inward communication from outside the


cloud can be received by the public address, of
course after passing the firewall and then
gateway which routes it appropriately.

60

BITS Pilani
Elastic IP address

• Elastic IP addresses are network addresses available to an account (up to 5 per account) that can be dynamically associated with any instance; the Elastic IP becomes that instance's public address, and the previously assigned public address is released.
• They are especially useful when an EC2 instance fails: its Elastic IP address can be reassigned to another EC2 instance dynamically, so that requests are routed to the replacement instance immediately.
61
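The failover step above reduces to a single API call. The sketch below uses boto3 with a lazy import; the allocation and instance IDs are placeholders:

```python
def fail_over_elastic_ip(allocation_id, standby_instance_id):
    """Re-point an Elastic IP at a standby instance after a failure (sketch)."""
    import boto3  # lazy import: running this needs boto3 and AWS credentials
    ec2 = boto3.client("ec2")
    # AllowReassociation moves the address even if it is still associated with
    # the failed instance; clients keep using the same public IP throughout.
    ec2.associate_address(AllocationId=allocation_id,
                          InstanceId=standby_instance_id,
                          AllowReassociation=True)
```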

IaaS Services

IaaS providers typically offer supporting services alongside the raw infrastructure:
• detailed billing;
• monitoring;
• log access;
• security;
• load balancing;
• clustering; and
• storage resiliency, such as backup, replication and recovery.

How does IaaS work?
- IaaS customers access resources and services through a WAN, such as
the internet.
- Customers use the cloud provider's services to install the remaining
elements of an application stack.
- Customers can create virtual machines (VMs), install operating systems
in each VM, deploy middleware such as databases, create storage
buckets for workloads and backups, and install enterprise workloads into
the VMs.
- Providers offer services to track costs, monitor performance, balance
network traffic, troubleshoot application issues, and manage disaster
recovery.
- Cloud computing models require the participation of a provider;
third-party organizations such as AWS and GCP specialize in selling IaaS.
- Alternatively, a business may choose to deploy a private cloud and
become its own provider of infrastructure services.
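The first step of that workflow, creating a VM, can be sketched with the AWS SDK for Python (boto3). The AMI ID and instance type below are placeholders, and the import is deferred so the sketch loads without the SDK installed:

```python
def launch_vm(image_id, instance_type="t2.micro"):
    """Provision a single VM on EC2, the first step of building a stack (sketch)."""
    import boto3  # lazy import: running this needs boto3 and AWS credentials
    ec2 = boto3.client("ec2")
    resp = ec2.run_instances(ImageId=image_id,          # e.g. an AMI of your OS
                             InstanceType=instance_type,
                             MinCount=1, MaxCount=1)
    return resp["Instances"][0]["InstanceId"]
```

From here the customer would install middleware and workloads onto the instance, as described above.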

Advantages of IaaS

• Organizations choose IaaS because it is often easier, faster and more cost-efficient to operate
a workload without having to buy, manage and support the underlying infrastructure. With
IaaS, a business can simply rent or lease that infrastructure from another business.

• IaaS is an effective cloud service model for workloads that are temporary, experimental or that
change unexpectedly. For example, if a business is developing a new software product, it
might be more cost-effective to host and test the application using an IaaS provider.

• Once the new software is tested and refined, the business can remove it from the IaaS
environment for a more traditional, in-house deployment. Conversely, the business could
commit that piece of software to a long-term IaaS deployment if the costs of a long-term
commitment are less.

• In general, IaaS customers pay on a per-user basis, typically by the hour, week or month.
Some IaaS providers also charge customers based on the amount of virtual machine space
they use. This pay-as-you-go model eliminates the capital expense of deploying in-house
hardware and software.

• When a business cannot use third-party providers, a private cloud built on premises can still
offer the control and scalability of IaaS -- though the cost benefits no longer apply.

Advantages of IaaS

• Eliminates capital expense and reduces ongoing costs.
• Improves business continuity and disaster recovery.
• Enables rapid innovation.
• Lets you respond more quickly to shifting business conditions.
• Lets you focus on your core business.
• Increases scalability, reliability, and supportability.
• Provides better security.
• Gets new apps to users faster.

Disadvantages of IaaS
• Despite its flexible, pay-as-you-go model, IaaS billing can be a
problem for some businesses. Cloud billing is extremely granular,
and it is broken out to reflect the precise usage of services. It is
common for users to experience sticker shock -- or finding costs to be
higher than expected -- when reviewing the bills for every resource
and service involved in application deployment. Users should monitor
their IaaS environments and bills closely to understand how IaaS is
being used and to avoid being charged for unauthorized services.

• Insight is another common problem for IaaS users. Because IaaS
providers own the infrastructure, the details of their infrastructure
configuration and performance are rarely transparent to IaaS users.
This lack of transparency can make systems management and
monitoring more difficult for users.

• IaaS users are also concerned about service resilience. The
workload's availability and performance are highly dependent on the
provider. If an IaaS provider experiences network bottlenecks or any
form of internal or external downtime, the users' workloads will be
affected. In addition, because IaaS is a multi-tenant architecture, the
noisy neighbor issue can negatively impact users' workloads.

IaaS Use Cases

• Testing and development environments. IaaS offers organizations flexibility when it comes to different
test and development environments. They can easily be scaled up or down according to needs.

• Hosting customer-facing websites. This can make it more affordable to host a website, compared to
traditional means of hosting websites.

• Data storage, backup and recovery. IaaS can be the easiest and most efficient way for organizations to
manage data when demand is unpredictable or might steadily increase. Furthermore, organizations can
circumvent the need for extensive efforts focused on the management, legal and compliance
requirements of data storage.

• Web applications. The infrastructure needed to host web apps is provided by IaaS. Therefore, if an
organization is hosting a web application, IaaS can provide the necessary storage resources, servers
and networking. Deployments can be made quickly, and the cloud infrastructure can be easily scaled up
or down according to the application's demand.

• High-performance computing (HPC). Certain workloads may demand HPC-level computing, such as
scientific computations, financial modeling and product design work.

• Data warehousing and big data analytics. IaaS can provide the necessary compute and processing
power to comb through big data sets.

Resource Virtualization

Anything required for the execution of a program is called a resource.

• The processor, memory, displays, mice, keyboards, disk storage, printers, and
networks are all examples of resources.
• The primary functions of an operating system are management of resources
and virtualization of resources.
1. Server Virtualization
2. Storage Virtualization
3. Network Virtualization

Server Virtualization

Server virtualization can be defined as the conversion of one physical server into several
individual, isolated virtual spaces that can be taken up by multiple users as per their
respective requirements.

• This virtualization is attained through a software application, thereby hiding the actual
number and identity of the physical servers.
TYPES OF SERVER VIRTUALIZATION
• Complete (full) virtualization
• Para-virtualization
• Operating System (OS) virtualization

While all three modes have one physical server acting as host and the virtual servers as
guests, each method allocates server resources differently to the virtual space.

Server Virtualization

• Complete (full) virtualization is done using hypervisor software that directly
uses the physical server's CPU and hard-disk storage. The guests can run
their own versions and types of OS, as the hypervisor keeps the virtual
servers separate and independent of each other.
• In para-virtualization, the guests are aware of all the existing virtual
servers and work cohesively as a unit. The hypervisor in this case keeps
their OSes independent, while making them aware of the load placed on the
physical server by all the virtual creations.
• In OS-level virtualization, no hypervisor is required; the host's OS is the
controller. It requires the same OS on all the guest systems,
but this homogeneous environment still maintains the individual identity
and independence of the virtual servers.
Significance Of Server Virtualization
• Server virtualization leads to space consolidation and efficient, effective usage of
server resources and capabilities.

• Moreover, the redundancy practice of running one application on multiple
systems is a boon for the commercial sector and for software programmers.

• The assistance offered in disaster recovery, server administration, and
system upgrading is a further supporting factor for server virtualization.

Storage Virtualization
• Storage virtualization is the pooling of physical storage
from multiple network storage devices into what appears
to be a single storage device that is managed from a
central console.
Or: storage virtualization is the process of grouping the
physical storage from multiple network storage devices so
that it looks like a single storage device.
• Storage virtualization underpins what is commonly sold as
cloud storage.
• Storage virtualization helps the storage administrator
perform the tasks of backup, archiving and recovery more
easily and in less time by disguising the actual complexity
of a storage area network (SAN).
• Storage virtualization can be implemented by using
software applications or appliances.
Storage Virtualization
There are three important reasons to implement storage virtualization:
1. Improved storage management in a heterogeneous IT environment
2. Better availability and estimation of downtime with automated
management
3. Better storage utilization
Storage virtualization can be applied to any level of a SAN. The
virtualization techniques can also be applied to different storage
functions such as physical storage, RAID groups, logical unit
numbers (LUNs), LUN subdivisions, storage zones and logical
volumes, etc.
The storage virtualization model can be divided into four main layers:
1. Storage devices
2. Block aggregation layer
3. File/record layer
4. Application layer
Some of the benefits of storage virtualization include automated
management, expansion of storage capacity, reduced time in manual
supervision, easy updates and reduced downtime.
Network Virtualization
• Network virtualization refers to the management and monitoring of an entire
computer network as a single administrative entity from a single software-
based administrator's console.
• Network virtualization also may include storage virtualization, which involves
managing all storage as a single resource.
• Network virtualization is designed to allow network optimization of data
transfer rates, flexibility, scalability, reliability and security.
• It automates many network administrative tasks, which actually disguise a
network's true complexity.
• All network servers and services are considered one pool of resources, which
may be used without regard to the physical components.
• Network virtualization is especially useful for networks experiencing a rapid,
large and unpredictable increase in usage.
• The intended result of network virtualization is improved network productivity
and efficiency, as well as job satisfaction for the network administrator.
Network Virtualization
• Network virtualization is accomplished by using a variety of hardware and
software and combining network components.
Network virtualization gives you:
• an optimized network
• speed
• reliability
• flexibility
• scalability
• security

Virtual Machine
Resources Provision and Manageability
• Most business applications run in a mix of physical, virtual and cloud IT
environments.
• Virtual environments are very dynamic by their nature.
• Virtualization solutions dynamically allocate IT resources to applications,
perform load balancing based on resource utilization levels as well as perform
dynamic power management to cut down power costs.
• IT administrators need to ensure that sufficient server power is available to
support these dynamic environments.
• However, this process can be time consuming and error prone if done
manually.

Virtual Machine

Virtual Machine
• Storage as a Service
• Storage as a Service is a business model in which a large company rents
space in its storage infrastructure to a smaller company or individual.
• Storage as a Service is generally seen as a good alternative for a small or
mid-sized business that lacks the capital budget and/or technical personnel to
implement and maintain its own storage infrastructure.

Virtual Machine
• Data Storage in Cloud Computing

• In which the digital data is stored in logical pools, the physical storage spans
multiple servers (and often locations), and the physical environment is
typically owned and managed by a hosting company.

• These cloud storage providers are responsible for keeping the data available
and accessible, and the physical environment protected and running.

• People and organizations buy or lease storage capacity from the providers to
store user, organization or application data.

References
CIO Insight (https://www.cioinsight.com/cloud-virtualization/iaas-providers/). Last accessed: 24 Apr 2023.

https://aws.amazon.com/what-is/iaas/
Text and References
T1 Mastering Cloud Computing: Foundations and Applications Programming
Rajkumar Buyya, Christian Vecchiola, S.Thamarai Selvi

R1 Moving To The Cloud: Developing Apps in the New World of Cloud Computing 1st
Edition
by Dinkar Sitaram (Author), Geetha Manjunath (Author)

81
IMP Note to Self

82
Thank You !

83
Cloud Computing
CS - 5

BITS Pilani Faculty Name: Prof. Pradnya Kashikar


pradnyak@wilp.bits-Pilani.ac.in
Today’s session

Contact Hour | List of Topic Title (from content structure in Part A) | Text/Ref Book / external resource
9  | 3.6 Case Study - Openstack OR Amazon Cloud Services - EC2, S3, SimpleDB, RDS | http://www.slashroot.in/openstack-tutorial-getting-started-basics-building-your-own-cloud
10 | 3.6 Case Study - Openstack OR Amazon Cloud Services - CloudFront, Elastic Load Balancer, Elastic Block Storage | http://docs.openstack.org/


IMP Note to Self

Cloud Computing 3

4
• Amazon Web Services (AWS) is a collection of remote
computing services, also called web services, that make up
a cloud computing platform offered by Amazon.com.
• These services are based out of 11 geographical regions
across the world.
• The most central and well-known of these services are
Amazon EC2 and Amazon S3.
• These products are marketed to large and small companies
as a service providing large computing capacity much
faster and more cheaply than the client company building an
actual physical server farm.
Compute
• Amazon Elastic Compute Cloud (EC2) provides scalable virtual private servers using
Xen.
• Amazon Elastic MapReduce (EMR) allows businesses, researchers, data analysts, and
developers to easily and cheaply process vast amounts of data. It uses a hosted
Hadoop framework running on the web-scale infrastructure of EC2 and Amazon S3.
Networking
• Amazon Route 53 provides a highly available and scalable Domain Name System (DNS) web
service.
• Amazon Virtual Private Cloud (VPC) creates a logically isolated set of Amazon EC2 instances which
can be connected to an existing network using a VPN connection.
• AWS Direct Connect provides dedicated network connections into AWS data centers, providing
faster and cheaper data throughput.
• Content delivery
– Amazon CloudFront, a content delivery network (CDN) for distributing objects to so-called
"edge locations" near the requester
• Storage and content delivery
– Amazon Simple Storage Service (S3) provides web-service-based storage.
– Amazon Glacier provides a low-cost, long-term storage option (compared to S3).
– AWS Storage Gateway, an iSCSI block storage virtual appliance with cloud-based backup.
– Amazon Elastic Block Store (EBS) provides persistent block-level storage volumes for EC2.
– AWS Import/Export, accelerates moving large amounts of data into and out of AWS using
portable storage devices for transport.
Database
– Amazon DynamoDB provides a scalable, low-latency NoSQL online Database Service backed by SSDs.
– Amazon ElastiCache provides in-memory caching for web applications. This is Amazon's implementation of Memcached and Redis.
– Amazon Relational Database Service (RDS) provides a scalable database server with MySQL, Oracle, SQL Server, and PostgreSQL
support.
– Amazon Redshift provides petabyte-scale data warehousing with column-based storage and multi-node compute.
– Amazon SimpleDB allows developers to run queries on structured data. It operates in concert with EC2 and S3 to provide "the core
functionality of a database".
– AWS Data Pipeline provides reliable service for data transfer between different AWS compute and storage services
– Amazon Kinesis streams data in real time with the ability to process thousands of data streams on a per-second basis.
Deployment
• Amazon CloudFormation provides a file-based interface for provisioning other AWS resources.
• AWS Elastic Beanstalk provides quick deployment and management of applications in the cloud.
• AWS OpsWorks provides configuration of EC2 services using Chef.
Management
• Amazon Identity and Access Management (IAM) is an implicit service, the authentication infrastructure used to
authenticate access to the various services.
• Amazon CloudWatch, provides monitoring for AWS cloud resources and applications, starting with EC2.
• AWS Management Console (AWS Console), A web-based point and click interface to manage and monitor the Amazon
infrastructure suite including (but not limited to) EC2, EBS, S3, SQS, Amazon Elastic MapReduce, and Amazon CloudFront.
Amazon also makes available a mobile application for Android which has support for some of the management features from
the console.
AWS
An AWS Cloud architecture
for web hosting

10
AWS Cloud architecture for web hosting

• DNS services with Amazon Route 53 – Provides DNS services to simplify domain management.
• Edge caching with Amazon CloudFront – Edge caches high-volume content to decrease the latency to customers.
• Edge security for Amazon CloudFront with AWS WAF – Filters malicious traffic, including cross-site scripting (XSS) and SQL injection, via customer-defined rules.
• Load balancing with Elastic Load Balancing (ELB) – Enables you to spread load across multiple Availability Zones and AWS Auto Scaling groups for redundancy and decoupling of services.
• DDoS protection with AWS Shield – Safeguards your infrastructure against the most common network and transport layer DDoS attacks automatically.
• Firewalls with security groups – Moves security to the instance to provide a stateful, host-level firewall for both web and application servers.
• Caching with Amazon ElastiCache – Provides caching services with Redis or Memcached to remove load from the app and database, and lower latency for frequent requests.
• Managed database with Amazon Relational Database Service (Amazon RDS) – Creates a highly available, multi-AZ database architecture with six possible DB engines.
• Static storage and backups with Amazon Simple Storage Service (Amazon S3) – Enables simple HTTP-based object storage for backups and static assets like images and video.

11
Web Application Hosting

12
Web Application Hosting
1. Amazon Web Services (AWS) provides services and infrastructure for
building reliable, fault-tolerant, and highly available web applications in the
cloud.
2. Web applications in production environments generate significant amounts
of log information.
3. Analyzing logs can provide valuable insights into traffic patterns, user
behavior, and marketing profiles.
4. As web applications grow and visitor numbers increase, storing and
analyzing web logs becomes more challenging.
5. The diagram illustrates how AWS can be used to construct a scalable and
dependable large-scale log analytics platform.
6. The core component of this architecture is Amazon Elastic MapReduce,
which is a web service enabling analysts to process vast amounts of data
easily and cost-effectively using a hosted Hadoop framework.
13
Web Application Hosting
1. The web front-end servers run on Amazon Elastic Compute Cloud (Amazon EC2) instances.
2. Amazon CloudFront, a content delivery network, distributes static files to customers with low latency and
high data transfer speeds, generating valuable log information.
3. Log files are regularly uploaded to Amazon Simple Storage Service (Amazon S3), a highly available and
reliable data store. The data is sent in parallel from multiple web servers or edge locations.
4. The data set is processed by an Amazon Elastic MapReduce cluster. Amazon Elastic MapReduce utilizes a
hosted Hadoop framework to process the data in a parallel job flow.
5. Amazon EC2 offers unused capacity at a reduced cost known as the Spot Price. This price fluctuates based
on availability and demand. Utilizing Spot Instances can dynamically extend the cluster's capacity and
significantly reduce the cost of running job flows, especially for flexible workloads.
6. The results of data processing are pushed back to a relational database using tools like Apache Hive. This
database can be an Amazon Relational Database Service (Amazon RDS) instance, which simplifies the
setup, operation, and scalability of relational databases in the cloud.
7. Amazon RDS instances are priced on a pay-as-you-go model, just like many other services. After analysis,
the database can be backed up into an Amazon S3 database snapshot and then terminated. Whenever
needed, the database can be recreated from the snapshot.

14
Web Log Analysis

1. Amazon Web Services (AWS) offers services and infrastructure for building reliable, fault-tolerant, and highly available web applications in the cloud.
2. In production environments, web applications generate substantial amounts of log data, which can provide valuable insights.
3. Analyzing logs can reveal valuable information such as traffic patterns, user behavior, and marketing profiles.
4. However, as web applications grow and visitor numbers increase, storing and analyzing web logs becomes more challenging.
5. This diagram illustrates how Amazon Web Services can be utilized to construct a scalable and dependable large-scale log analytics platform.
6. At the core of this architecture is Amazon Elastic MapReduce, a web service that empowers analysts to easily and cost-effectively process large volumes of data using a hosted Hadoop framework.

16
Web Log Analysis Contd…

1. The web front-end servers run on Amazon Elastic Compute Cloud (Amazon EC2) instances.
2. Amazon CloudFront, a content delivery network, utilizes low latency and high data transfer speeds to distribute static files to customers. Additionally, this service generates valuable log information.
3. Periodically, log files are uploaded to Amazon Simple Storage Service (Amazon S3), a highly available and reliable data store. Data is sent in parallel from multiple web servers or edge locations.
4. The data set is processed by an Amazon Elastic MapReduce cluster, which leverages a hosted Hadoop framework for parallel job flow processing.
5. Amazon EC2 offers instances at a reduced cost known as the Spot Price when there is unused capacity. This price fluctuates based on availability and demand. If your workload is flexible in terms of completion time or required capacity, utilizing Spot Instances allows dynamic capacity extension for your cluster, resulting in significant cost reduction for running job flows.
6. Data processing results are pushed back to a relational database using tools such as Apache Hive. The database can be an Amazon Relational Database Service (Amazon RDS) instance, which simplifies the setup, operation, and scalability of relational databases in the cloud.
7. Amazon RDS instances, like many other services, follow a pay-as-you-go pricing model. After analysis, the database can be backed up into an Amazon S3 database snapshot and subsequently terminated. The database can then be recreated from the snapshot whenever necessary.

17
FAULT TOLERANCE & HIGH
AVAILABILITY
Amazon S3

• Amazon S3 (Simple Storage Service) is an online
file storage web service offered by Amazon Web
Services.
• Amazon S3 provides storage through web
service interfaces (REST, SOAP, and BitTorrent).
• Objects can be up to 5 terabytes in size,
– each accompanied by up to 2 kilobytes of metadata.
Amazon S3

29
Use cases

1. Build a data lake – Run big data analytics, artificial intelligence (AI), machine learning (ML), and high performance computing (HPC) applications to unlock data insights.
2. Back up and restore critical data – Meet Recovery Time Objectives (RTO), Recovery Point Objectives (RPO), and compliance requirements with S3’s robust replication features.
3. Run cloud-native applications – Build fast, powerful mobile and web-based cloud-native apps that scale automatically in a highly available configuration.
4. Archive data at the lowest cost – Move data archives to the Amazon S3 Glacier storage classes to lower costs, eliminate operational complexities, and gain new insights.

30
Amazon S3 buckets
• Amazon S3 stores data as objects within resources called "buckets."
• You can store as many objects as you want within a bucket, and
write, read, and delete objects in your bucket.

• Each bucket is identified by a unique, user-assigned key.


• Amazon Machine Images (AMIs) which are used in the Elastic
Compute Cloud (EC2) can be exported to S3 as bundles.

• Buckets and objects can be created, listed, and retrieved using


either a REST-style HTTP interface or a SOAP interface.

• Additionally, objects can be downloaded using the HTTP GET
interface and the BitTorrent protocol.
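The programmatic equivalent of the REST-style interface is a short boto3 sketch. The bucket and key names below are hypothetical, and boto3 is imported lazily so the sketch loads without the SDK:

```python
def store_and_fetch(bucket, key, payload):
    """Write an object into an S3 bucket and read it back (sketch)."""
    import boto3  # lazy import: running this needs boto3 and AWS credentials
    s3 = boto3.client("s3")
    # Objects are addressed by (bucket, key); the bucket name is globally unique
    s3.put_object(Bucket=bucket, Key=key, Body=payload)
    obj = s3.get_object(Bucket=bucket, Key=key)
    return obj["Body"].read()
```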
Major users of S3
• SmugMug
• Dropbox
• Minecraft
• Tumblr, Formspring,
• Nasdaq
Amazon Glacier
• Amazon Glacier is an online file storage web service that provides
storage for data archiving and backup.
• Glacier is designed for long-term storage of data that is infrequently
accessed and for which retrieval latency times of 3 to 5 hours are
acceptable.
• Storage costs are a consistent $0.01 per gigabyte per month, which
is substantially cheaper than Amazon's own Simple Storage Service
(S3) service.
• Data is stored in Amazon Glacier in "archives." An archive can be
any data such as a photo, video, or document.
• You can upload a single file as an archive or aggregate multiple files
into a TAR or ZIP file and upload as one archive.
• A single archive can be as large as 40 terabytes. You can store an
unlimited number of archives and an unlimited amount of data in
Amazon Glacier.
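Uploading an archive reduces to one boto3 call; the vault name and payload below are placeholders, and the import is lazy so the sketch loads without the SDK:

```python
def archive_blob(vault_name, description, payload):
    """Upload one archive to a Glacier vault and return its archive ID (sketch)."""
    import boto3  # lazy import: running this needs boto3 and AWS credentials
    glacier = boto3.client("glacier")
    resp = glacier.upload_archive(vaultName=vault_name,
                                  archiveDescription=description,
                                  body=payload)
    # Retrieval is asynchronous: an archive-retrieval job initiated later
    # typically completes in 3 to 5 hours, matching Glacier's design point.
    return resp["archiveId"]
```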
Key features
• Vault Inventory
• Access Control
• Data Retrieval Policies
• Audit Logs
• Integrated lifecycle management with Amazon S3
• AWS Software Development Kits (SDKs)
• Transferring large amounts of data
• Encryption by default - AES 256-bit , SSL
• Immutable archives – cannot be altered after upload
• Flexible access control with IAM policies
Use cases (real-world usage)
• Snap Optimizes Cost Savings While Storing Over 1.5 Trillion Photos
and Videos on Amazon S3 Glacier Instant Retrieval
• Pinterest uses Amazon S3 Glacier Deep Archive to manage storage for
its visual discovery engine
• Nasdaq Uses AWS to Pioneer Stock Exchange Data Storage in the
Cloud
• Qube Cinema Takes Movies from Stage to Screen Faster on AWS

35
Amazon EBS
• Amazon Elastic Block Store (Amazon EBS) provides
persistent block-level storage volumes for use with
Amazon EC2 instances in the AWS Cloud.

• Each Amazon EBS volume is automatically replicated
within its Availability Zone to protect you from
component failure, offering high availability and
durability.

• Amazon EBS volumes offer the consistent, low-latency
performance needed to run your workloads.
Amazon Elastic Block Store (Amazon EBS)

• Amazon Elastic Block Store (Amazon EBS) is an easy-to-use, scalable, high-performance block-storage service designed for Amazon Elastic Compute Cloud (Amazon EC2).

37
Amazon EBS- features
• Once attached, you can create a file system on top of these volumes, run a
database, or use them in any other way you would use a block device.
• Amazon EBS volumes are placed in a specific Availability Zone, where they
are automatically replicated to protect you from the failure of a single
component.
• Amazon EBS provides three volume types:
– General Purpose (SSD),
– Provisioned IOPS (SSD), and
– Magnetic.
• The three volume types differ in performance characteristics and cost, so
you can choose the right storage performance and price for the needs of
your applications.

• All EBS volume types offer the same durable snapshot capabilities and are
designed for 99.999% availability.
Use cases
• Build your SAN in the cloud for I/O intensive applications-Migrate mid-
range, on-premises storage area network (SAN) workloads to the cloud.
Attach high-performance and high-availability block storage for mission-
critical applications.

• Run relational or NoSQL databases - Deploy and scale your choice of
databases, including SAP HANA, Oracle, Microsoft SQL Server, PostgreSQL,
MySQL, Cassandra, and MongoDB.

• Right-size your big data analytics engines-Easily resize clusters for big data
analytics engines, such as Hadoop and Spark, and freely detach and
reattach volumes.

39
AWS Import/Export
• AWS Import/Export accelerates moving large
amounts of data into and out of the
AWS cloud using portable storage devices for
transport.
• AWS Import/Export transfers your data directly
onto and off of storage devices using Amazon’s
high-speed internal network and bypassing the
Internet.
• For significant data sets, AWS Import/Export is
often faster than Internet transfer and more cost
effective than upgrading your connectivity.
AWS Import/Export Contd…

• AWS Import/Export supports data transfer
into and out of Amazon S3 buckets.
• The service also supports importing
data into Amazon EBS snapshots.
• In addition, AWS Import/Export supports
transferring data into Amazon Glacier.
Common Use Cases for AWS Import/Export

• Data Cloud Migration


• If you have data you need to migrate into the AWS cloud for the first time,
AWS Import/Export is often much faster than transferring that data via the
Internet.
• Content Distribution
• Send data to your customers on portable storage devices.
• Direct Data Interchange
• If you regularly receive content on portable storage devices from your business
associates, you can have them send it directly to AWS for import into Amazon
S3 or Amazon EBS or Amazon Glacier.
• Offsite Backup
• Send full or incremental backups to Amazon S3 and Amazon Glacier for
reliable and redundant offsite storage
• Disaster Recovery
• In the event you need to quickly retrieve a large backup stored in Amazon S3
or Amazon Glacier, use AWS Import/Export to transfer the data to a portable
storage device and deliver it to your site.
Getting Started

To use AWS Import/Export you simply:
• Prepare a portable storage device.
• Submit a Create Job request to AWS. You receive back a unique
identifier for the job and a digital signature used to securely identify
and authenticate your device.
• Ship your device, along with its interface connectors and power
supply, to AWS.
• When your package arrives, it is processed and securely
transferred to an AWS data center, where your device is
attached to an AWS Import/Export station.
• After the data load completes, the device is returned to you.
When to Use AWS Import/Export
Available Theoretical Min. Number of
When to ConsiderAWS
Internet Days to Transfer 1TB at 80%
Import/Export?
Connection Network Utilization
T1
82 days 100GB or more
(1.544Mbps)
10Mbps 13 days 600GB or more
T3
3 days 2TB or more
(44.736Mbps)
100Mbps 1 to 2 days 5TB or more
1000Mbps Less than 1 day 60TB or more

For example, on a 10Mbps connection utilizing 80% of your network capacity for the data transfer,
transferring 1TB of data over the Internet to AWS will take about 13 days.

600GB is the volume at which the same set-up takes roughly a week. So if you have 600GB or more of
data to transfer, and you want it to take less than a week to get into AWS, we recommend using
AWS Import/Export.
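The table's figures can be reproduced with a back-of-the-envelope calculation. This is a sketch; it assumes "1TB" means 2**40 bytes, which matches the slide's numbers:

```python
# Estimate Internet transfer time at a given link speed and utilization.

def transfer_days(size_bytes, link_mbps, utilization=0.8):
    """Days needed to push size_bytes over a link_mbps connection."""
    bits_to_send = size_bytes * 8
    usable_bps = link_mbps * 1e6 * utilization  # usable bits per second
    return bits_to_send / usable_bps / 86_400   # 86,400 seconds per day

# 1TB (taken as 2**40 bytes) over 10Mbps at 80% utilization:
print(round(transfer_days(2**40, 10)))     # 13 days, as in the table
print(round(transfer_days(2**40, 1.544)))  # 82 days for the T1 line
```

At 100Mbps the same transfer takes only about a day, which is why shipping disks only pays off there at 5TB or more.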
Amazon CloudFront
• Amazon CloudFront is a content delivery web service.
• Gives developers and businesses an easy way to distribute content to end
users with
– low latency,
– high data transfer speeds, and
– no minimum usage commitments.

• Amazon CloudFront can be used to deliver your entire website, including
dynamic, static, streaming, and interactive content, using a global network
of edge locations.

• Requests for your content are automatically routed to the nearest edge
location, so content is delivered with the best possible performance.

• Users: IMDB, NASA, SEGA


AWS Database Services
• Amazon RDS gives you online access to the
capabilities of a
– MySQL,
– Oracle,
– Microsoft SQL Server,
– PostgreSQL, or
– Amazon Aurora relational database management
system.
• This means that the code, applications, and tools
you already use today with your existing
databases can be used with Amazon RDS.
AWS Database Services Contd….
• Database Instances using the Amazon Aurora engine
employ a fault-tolerant, self-healing SSD-backed
virtualized storage layer purpose-built for database
workloads.
• In addition, Amazon RDS makes it easy to use
replication to enhance availability and reliability for
production workloads. Using the Multi-AZ deployment
option you can run mission critical workloads with high
availability and built-in automated fail-over from your
primary database to a synchronously replicated
secondary database in case of a failure.
Amazon DynamoDB
• Amazon DynamoDB is a fast and flexible NoSQL database service for all applications that need consistent,
single-digit millisecond latency at any scale.
• It is a fully managed database and supports both document and key-value data models.

• Its flexible data model and reliable performance make it a great fit for mobile, web, gaming, ad-tech, IoT, and
many other applications.

• DynamoDB supports storing, querying, and updating documents. Using the AWS SDK you can write
applications that store JSON documents directly into Amazon DynamoDB tables.

• Schema-less
• Amazon DynamoDB has a flexible database schema.
• The data items in a table need not have the same attributes or even the same number of
attributes.
• Multiple data types (strings, numbers, binary data, and sets) add richness to the data model.

• Key-value Data Model Support

• Strong Consistency, Atomic Counters

• Elastic MapReduce Integration
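The schema-less, key-value behaviour described above can be pictured with a toy in-memory table. This is a conceptual sketch in plain Python, not the DynamoDB API, and the item attributes are made up:

```python
# Toy key-value "table": items share a primary key attribute ("id") but are
# otherwise free to carry different attributes and attribute counts.
table = {}

def put_item(item):
    table[item["id"]] = item

def get_item(key):
    return table.get(key)

put_item({"id": "u1", "name": "Asha", "score": 42})
put_item({"id": "u2", "name": "Ravi", "tags": {"web", "mobile"}})  # no "score"

print(get_item("u1")["score"])    # 42
print("score" in get_item("u2"))  # False: differing attributes are allowed
```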


AWS Compute and Network
• AWS Compute and Networking (C&N) services let you:
– provision virtual servers,
– set up a firewall,
– configure Internet access,
– allocate and route IP addresses, and
– scale your infrastructure to meet increasing demand.
• You can use the compute and networking services with the
– storage,
– database, and
– application services
to provide a complete solution for computing, query processing, and
storage for a wide range of applications.
Amazon Route 53
• Amazon Route 53 is a highly available and scalable
cloud Domain Name System (DNS) web service.
• Amazon Route 53 makes it possible for you to manage
traffic globally through a variety of routing types,
including
– Latency Based Routing,
– Geo DNS, and
– Weighted Round Robin—
• all of which can be combined with DNS Failover in
order to enable a variety of low-latency, fault-tolerant
architectures.
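The Weighted Round Robin policy can be sketched as follows: each endpoint receives a share of traffic proportional to its weight. The endpoint names and weights here are illustrative, not Route 53's API:

```python
import bisect
import itertools

def pick_endpoint(endpoints, r):
    """endpoints: list of (name, weight); r: a draw from [0, 1)."""
    names, weights = zip(*endpoints)
    cumulative = list(itertools.accumulate(weights))          # e.g. [3, 4]
    i = bisect.bisect_right(cumulative, r * cumulative[-1])   # find the bucket
    return names[i]

endpoints = [("us-east", 3), ("eu-west", 1)]  # us-east gets 75% of traffic
print(pick_endpoint(endpoints, 0.10))  # us-east
print(pick_endpoint(endpoints, 0.80))  # eu-west
```

In practice `r` would come from a random source, and a DNS failover check would drop unhealthy endpoints from the list first.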
Amazon VPC
• Amazon Virtual Private Cloud (Amazon VPC) lets you
– provision a logically isolated section of the Amazon Web Services (AWS) Cloud where you can launch AWS
resources in a virtual network that you define.

• You can easily customize the network configuration for your Amazon Virtual Private Cloud.
– For example, you can create a public-facing subnet for your web servers that has access to the Internet,
and place your backend systems such as databases or application servers in a private-facing subnet with
no Internet access.

• Additionally, you can create a Hardware Virtual Private Network (VPN) connection between your
corporate datacenter and your VPC and leverage the AWS cloud as an extension of your corporate
datacenter.
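The public/private subnet split described above can be sketched with the standard-library ipaddress module; the CIDR blocks are illustrative:

```python
import ipaddress

vpc = ipaddress.ip_network("10.0.0.0/16")    # the VPC's address range
subnets = list(vpc.subnets(new_prefix=24))   # carve it into /24 subnets

public_subnet = subnets[0]   # e.g. web servers, routed to the Internet
private_subnet = subnets[1]  # e.g. databases/app servers, no Internet route

print(public_subnet, private_subnet)  # 10.0.0.0/24 10.0.1.0/24
print(len(subnets))                   # 256 available /24 subnets
```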

Eucalyptus

• Organizations can use or reuse AWS-compatible tools, images, and scripts to manage their own on-
premise infrastructure as a service (IaaS) environments.

• The AWS API is implemented on top of Eucalyptus, so tools in the cloud ecosystem that can communicate
with AWS can use the same API with Eucalyptus.

AWS-compatible tools
Autoscaling - With auto-scaling, developers can add instances and virtual
machines as traffic demands increase. Auto-scaling policies for Eucalyptus are
defined using Amazon EC2-compatible APIs and tools.

Elastic Load Balancing - A service that distributes incoming application traffic and
service calls across multiple Eucalyptus workload instances, providing greater
application fault tolerance.

CloudWatch - A monitoring tool, similar to Amazon CloudWatch, that monitors
resources and applications on Eucalyptus clouds. Using CloudWatch, application
developers and cloud administrators can program the collection of metrics, set
alarms, identify trends that may be endangering workload operations, and take
action to ensure their applications continue to run smoothly.
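The scaling decision behind such policies can be sketched minimally: a policy maps an observed metric to a desired instance count, clamped to configured limits. The capacity and bounds below are assumed numbers, not Eucalyptus defaults:

```python
import math

def desired_instances(requests_per_sec, capacity_per_instance=100,
                      min_instances=1, max_instances=10):
    """Scale instance count to the observed load, clamped to [min, max]."""
    needed = math.ceil(requests_per_sec / capacity_per_instance)
    return max(min_instances, min(max_instances, needed))

print(desired_instances(50))    # 1: below a single instance's capacity
print(desired_instances(450))   # 5
print(desired_instances(5000))  # 10: clamped at the configured maximum
```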
Regions and Availability zones
▪ AWS Cloud computing resources are housed in highly available data center facilities.

▪ To provide additional scalability and reliability, these data center facilities are located in different
physical locations.

▪ These locations are categorized by Regions and Availability Zones.

▪ AWS Regions are large and widely dispersed into separate geographic locations.

▪ Availability Zones are distinct locations within an AWS Region that are engineered to be
isolated from failures in other Availability Zones.

▪ They provide inexpensive, low-latency network connectivity to other Availability Zones in the
same AWS Region.
Regions and Availability zones

To create or work with a cluster in a specific region, the corresponding
regional service endpoint is used. For service endpoints, see Supported
regions & endpoints.
Availability Zone considerations
➢ Distributing Memcached nodes over multiple Availability Zones within
a region helps protect from the impact of a catastrophic failure, such
as a power loss within an Availability Zone.

➢ A Memcached cluster can have up to 300 nodes. When you create or
add nodes to your Memcached cluster, you can specify a single
Availability Zone for all your nodes, allow ElastiCache to choose a
single Availability Zone for all your nodes, specify the Availability
Zone for each node, or allow ElastiCache to choose an Availability
Zone for each node.

➢ New nodes can be created in different Availability Zones as you add
them to an existing Memcached cluster. Once a cache node is created,
its Availability Zone cannot be modified.
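Spreading nodes across zones can be sketched as a simple round-robin placement; the zone names are illustrative:

```python
def place_nodes(n_nodes, zones):
    """Assign each node to a zone, cycling through the zone list."""
    return [zones[i % len(zones)] for i in range(n_nodes)]

zones = ["ap-south-1a", "ap-south-1b", "ap-south-1c"]
placement = place_nodes(5, zones)
print(placement)
# ['ap-south-1a', 'ap-south-1b', 'ap-south-1c', 'ap-south-1a', 'ap-south-1b']
```

With this layout, losing any single zone takes out at most two of the five nodes.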
Availability Zone considerations Contd…

➢ If you want a single-Availability-Zone cluster to have its nodes
distributed across multiple Availability Zones, ElastiCache can
create new nodes in the various Availability Zones.

➢ You can then delete some or all of the original cache nodes. This
approach is recommended.

*Note: What is Amazon ElastiCache for Memcached? Set up, manage, and scale a
distributed in-memory data store or cache environment in the cloud using the cost-
effective ElastiCache solutions.
Supported regions & endpoints
❖ Amazon ElastiCache is available in multiple AWS Regions. This means that you can launch ElastiCache clusters
in locations that meet your requirements. For example, you can launch in the AWS Region closest to your
customers, or launch in a particular AWS Region to meet certain legal requirements.

❖ By default, the AWS SDKs, AWS CLI, ElastiCache API, and ElastiCache console reference the US-West (Oregon)
region. As ElastiCache expands availability to new regions, new endpoints for these regions are also available
to use in your HTTP requests, the AWS SDKs, AWS CLI, and the console.

❖ Each Region is designed to be completely isolated from the other Regions. Within each Region are multiple
Availability Zones (AZ). By launching your nodes in different AZs you are able to achieve the greatest possible
fault tolerance. For more information on Regions and Availability Zones, see Choosing regions and availability
zones at the top of this topic.
AWS In 10 Minutes | AWS Tutorial For Beginners | AWS Training Video | AWS Tutorial | Simplilearn - YouTube

Introduction to AWS Services - YouTube


OpenStack overview
What is OpenStack | OpenStack Explained | OpenStack | Intellipaat - YouTube

What is OpenStack - YouTube

An Introduction to Openstack - YouTube

What is OpenStack

• Programmable infrastructure that lays a common set of APIs on top of
compute, networking and storage.
• One platform for virtual machines, containers and bare metal.
OpenStack Cloud Models

THERE’S A GLOBAL SHIFT TOWARD CLOUD. THE BENEFITS: AGILITY, SCALABILITY, DECREASED HARDWARE COSTS.

3 CLOUD MODELS

• Public cloud: shared resource, “pay-as-you-go” models are common.
OpenStack public cloud is available in 60+ datacenters globally.
• Private cloud: dedicated to a single user. Can be a hosted cloud in a
vendor’s data center or yours, or a remotely managed private cloud.
• Hybrid cloud: a mix of private cloud and public cloud orchestrated
together to meet company needs.
OpenStack is open source

HERE’S WHY THAT MATTERS

OPENSTACK PRINCIPLES
1 OPEN SOURCE
2 OPEN DESIGN
3 OPEN DEVELOPMENT
4 OPEN COMMUNITY

• Choice & control: ability to choose between and switch vendors.
• Ability to contribute to or directly influence the roadmap.
• Widely adopted open source APIs are the new standards.
• Part of a vibrant community to share knowledge and help each other.
Primary business drivers

#1 avoid vendor lock-in
#2 accelerate innovation
#3 operational efficiency

Source: User Survey, April 2017

Which industries choose OpenStack?


RETAIL/E-COMMERCE FINANCIAL TELECOM ACADEMIC/RESEARCH

ENERGY AND MANUFACTURING INSURANCE ENTERTAINMENT

See more at
openstack.org/user-stories

What runs on OpenStack?

• TELECOM/NFV: 86% of telecoms say OpenStack is important to their business;
many are using OpenStack to virtualize their networks and implement edge
computing to achieve agility and significant cost savings.
• HPC: CERN runs one of the largest OpenStack clouds to process data from the
Large Hadron Collider, giving physicists the resources they need to unleash
the secrets of the universe.
• ENTERPRISE APPS: Comcast powers customer-facing and internal applications
and services for both production and development environments with OpenStack.
• BIG DATA: Banco Santander runs 1,000 compute nodes of OpenStack in data
centers across the world, and uses Cloudera on OpenStack to power fraud
detection.
• MULTI-CLOUD: DigitalFilm Tree uses interoperable OpenStack private and
public clouds to process thousands of hours of raw footage into a one-hour
TV show.
• E-COMMERCE: Walmart moved their global e-commerce platform to OpenStack,
powering desktop, mobile, tablet and kiosk users.
• DEVELOPER PRODUCTIVITY: Adobe Digital Marketing uses OpenStack to convert
their existing virtualization environment into self-service IT.
• WEB SERVICES: Workday moved their on-demand software services from static,
virtualized environments to a fully elastic and scalable platform based on
OpenStack.

About the OpenStack Foundation


Maintain infrastructure for development & communication

Coordinate software releases

Trademark and legal management

Host summits & development meetings

Promote the use of open source infrastructure projects

openstack.org/foundation

The OpenStack Community

81,000+ MEMBERS
187 COUNTRIES
670+ ORGANIZATIONS

OpenStack Foundation Sponsors


PLATINUM MEMBERS

GOLD MEMBERS

Cross-community collaboration
OpenStack integrates with a number of other technologies, including many popular open source projects, enabling users to combine them with
OpenStack.

Containers PaaS NFV Provisioning



OpenStack’s software releases

Yoga - March 2022
Zed - October 2022
Antelope - March 2023
Bobcat - October 2023 (in development)

Releases happen every 6 months.
Most clouds run one of the two most recent releases.

Learn more about the releases at openstack.org/software
https://releases.openstack.org/

The OpenStack Framework

• WHAT GETS CALLED OPENSTACK?
• USING THE SAMPLE CONFIGURATIONS
• CORE SERVICES & OPTIONAL SERVICES

https://www.openstack.org/software/project-
navigator/openstack-components#openstack-services

It costs less and does more

“In all private cloud-based applications…we expect approximately 70% of cost
savings as compared to classical IT solutions.”
–Holger Urban, Volkswagen

“TD Bank...experienced a 25% to 40% cost savings on their platforms and
virtual machines over their previous solution by deploying OpenStack.”
–Forbes, “3 Reasons Why An OpenStack Private Cloud May Cost You Less Than
Amazon Web Services”

Watch this session:
Elephant in the Room: What's the TCO for an OpenStack Cloud?
Openstack Components

Glance – Image Store
It provides discovery, registration and delivery services for disk
and server images.
List of processes and their functions:
glance-api : It accepts Image API calls for image discovery,
image retrieval and image storage.
glance-registry : It stores, processes and retrieves metadata
about images (size, type, etc.).
glance database : A database to store the image metadata.
A storage repository for the actual image files: Glance supports
normal file-systems, Amazon S3, and Swift.
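What glance-registry tracks can be pictured as a small metadata store keyed by image id. This is a toy sketch; the fields and values are illustrative, not the real Glance schema:

```python
images = {}  # image_id -> metadata; the image bits live in a separate store

def register_image(image_id, name, size_bytes, disk_format):
    images[image_id] = {"name": name, "size": size_bytes,
                        "disk_format": disk_format}

def get_metadata(image_id):
    return images[image_id]

register_image("img-1", "ubuntu-22.04", 2_361_393_152, "qcow2")
print(get_metadata("img-1")["disk_format"])  # qcow2
```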

Nova – Compute
It provides virtual servers upon demand. Nova is
the most complicated and distributed
component of OpenStack. A large number of
processes cooperate to turn end user API
requests into running virtual machines.
References
• Salman A. Baset, Chunqiang Tang, Byung Chul Tak and Long Wang,
“Dissecting Open Source Cloud Evolution: An OpenStack Case Study”
• https://www.openstack.org/marketplace/books
• https://docs.openstack.org/arch-design
• https://www.researchgate.net/publication/318463331_An_overview
_of_OpenStack...
• https://cloud.google.com/compute/docs/instances/instance-life-
cycle
• https://www.geeksforgeeks.org/hypervisor
• https://docs.aws.amazon.com/AmazonElastiCache/latest/mem-
ug/RegionsAndAZs.html#SupportedRegions
• https://docs.aws.amazon.com/amazonglacier/latest/dev/introductio
n.html
Text and References
T2 Mastering Cloud Computing: Foundations and Applications Programming
Rajkumar Buyya, Christian Vecchiola, S.Thamarai Selvi

T1 Moving To The Cloud: Developing Apps in the New World of Cloud Computing 1st
Edition
by Dinkar Sitaram (Author), Geetha Manjunath (Author)
Thank You !

86
Cloud Computing
CS - 6

BITS Pilani Faculty Name: Prof. Pradnya Kashikar


pradnyak@wilp.bits-Pilani.ac.in
Today’s session

Contact   List of Topic Title                                      Text/Ref Book/
Hour      (from content structure in Part A)                       external resource
11        3.7 Managing Virtual Resources on the Cloud:             T2: Ch5
          Provisioning and Migration
          3.7.1 Virtual Machine Provisioning and Manageability
          3.7.2 VM Provisioning Process
12        3.7.3 Virtual Machine Migration Services                 T2: Ch5
          3.7.4 Migration Techniques
          3.7.5 VM Provisioning and Migration in action



Virtual Resource Management
and
Cloud Provisioning

Agenda

• Introduction
• Virtual Machine Provisioning and Manageability
• VM Provisioning Process
• Virtual Machine Migration Services
• Migrations Techniques
• VM Provisioning and Migration in action

Introduction

Two core services enable users to get the best out of the IaaS model in public and private cloud setups:

1) virtual machine provisioning and

2) migration services.

• Provisioning a new virtual machine in minutes: saves lots of time and effort.

• Migrating a virtual machine in milliseconds:

• saves time and effort,

• keeps the service alive for customers, and

• achieves the SLA/SLO agreements and quality-of-service (QoS) specifications required.

Agenda

• Introduction
• Virtual Machine Provisioning and Manageability
• VM Provisioning Process
• Virtual Machine Migration Services
• Migrations Techniques
• VM Provisioning and Migration in action

Virtual Machine Provisioning | Cloud Computing | Lec - 18 | Bhanu Priya - YouTube

Analogy for Virtual Machine Provisioning
• Historically, when there was a need to install a new server for a certain workload to provide a particular
service for a client, the IT administrator exerted a lot of effort and spent much time installing and
provisioning the new server:
1) Check the inventory for a new machine
2) Get one
3) Format it, and install the required OS
4) Install services; a server is needed along with lots of security patches and appliances.

Now, with the emergence of virtualization technology and the cloud computing IaaS model, it is just a
matter of minutes to achieve the same task.

Analogy for Virtual Machine Provisioning: Continue..

• All you need is to provision a virtual server through a self-service interface with small steps to get

what you desire with the required specifications.

1) Provisioning this machine in a public cloud like: Amazon Elastic Compute Cloud (EC2), or

2) Using a virtualization management software package or a private cloud management solution

installed at your data centre in order to provision the virtual machine inside the organization and

within the private cloud setup.

3 | Step-by-Step Guide to Provision a Virtual Machine with Vagrant - YouTube

Analogy for Migration Services
Previously, whenever there was a need to perform a server upgrade or
maintenance tasks, you would spend a lot of time and effort, because it is
an expensive operation to maintain or upgrade a main server that has lots of
applications and users.

Now, with the advance of revolutionary virtualization technology and the migration
services associated with hypervisors’ capabilities, these tasks (maintenance,
upgrades, patches, etc.) are very easy and take little time to accomplish.

Revisiting Virtualization Technology
Virtualization can be defined as the abstraction of the four computing resources (storage, processing power, memory,

and network or I/O).

Revisiting Virtualization Technology
The virtualization layer partitions the physical resource of the underlying physical server into multiple virtual machines with
different workloads.
The virtualization layer :
1) Schedules resources,
2) Allocates physical resources,
3) Makes each virtual machine think that it totally owns the whole underlying hardware’s physical resources (processor,
disks, etc.)
4) Makes it flexible and easy to manage resources.
5) Improves the utilization of resources by multiplexing many virtual machines on one physical host.
6) The machines can be scaled up and down on demand with a high level of resources’ abstraction.
7) Enables highly reliable and agile deployment mechanisms.
8) Provides On-demand cloning and live migration.
9) Having efficient management suite for managing virtual machines.
Public Cloud and Infrastructure Services
There are many vendors who publicly provide infrastructure as a service.
Example: Amazon Elastic Compute Cloud (EC2). Amazon EC2 services can be leveraged via
Web services (SOAP or REST), a Web-based AWS (Amazon Web Services) management console, or
the EC2 command-line tools.
The Amazon service provides hundreds of pre-made AMIs (Amazon Machine Images) with a variety of
operating systems (e.g., Linux, OpenSolaris, or Windows) and pre-loaded software.
• It provides you with complete control of your computing resources and lets you run on Amazon’s
computing and infrastructure environment easily.
• It also reduces the time required for obtaining and booting a new server’s instances to minutes,
thereby allowing a quick scalable capacity and resources, up and down, as the computing
requirements change.
Public Cloud and Infrastructure Services: Continue

Amazon offers different instance sizes according to

(a) The resources’ needs (small, large, and extra large),

(b) The high CPU’s needs it provides (medium and extra large high CPU instances), and

(c) High-memory instances (extra large, double extra large, and quadruple extra large instance).

Private Cloud and Infrastructure Services
Private cloud exhibits a highly virtualized cloud data center located inside your organization’s firewall.
It may also be a private space dedicated for your company within a cloud vendor’s data center
designed to handle the organization’s workloads, and in this case it is called Virtual Private Cloud (VPC).

Private clouds exhibit the following characteristics:

1) Allow service provisioning and compute capability for an organization’s users in a self-service manner.

2) Automate and provide well-managed virtualized environments.

3) Optimize computing resources and server utilization.

4) Support specific workloads.

Examples: Eucalyptus and OpenNebula.


Virtual Machine Provisioning and Manageability Life Cycle

Virtual Machine Life Cycle


• The cycle starts with a request delivered to the IT department,
stating the requirement for creating a new server for a
particular service.
• This request is processed by the IT administration: examining
the servers’ resource pool and matching these resources with the
requirements.
• Starting the provisioning of the needed virtual machine.
• Once it is provisioned and started, it is ready to provide the
required service according to an SLA (Service Level
Agreement).

• Finally, the virtual machine is released, and its resources are freed.


19

Agenda

• Introduction
• Virtual Machine Provisioning and Manageability
• VM Provisioning Process
• Virtual Machine Migration Services
• Migrations Techniques
• VM Provisioning and Migration in action

VM Provisioning Process

Steps to Provision VM -
• Select a server from a pool of available servers along with the appropriate OS template you need to provision the
virtual machine.
• Load the appropriate software.
• Customize and configure the machine (e.g., IP address, Gateway) to an associated network and storage
resources.
• Finally, the virtual server is ready to start with its newly loaded S/W.

21

VM Provisioning Process

• Server provisioning is defining a server’s configuration based on the organization’s requirements: its hardware and software

components (processor, RAM, storage, networking, operating system, applications, etc.).

VMs can be provisioned by

• Manually installing an OS,

• Using a preconfigured VM template,

• Cloning an existing VM, or importing a physical server

• Server from another hosting platform.

• Physical servers can also be virtualized and provisioned using P2V (Physical to Virtual)

22

VM Provisioning using templates
• After creating a virtual machine by virtualizing a physical server, or by building a new virtual server in the virtual
environment, a template can be created out of it.
• Most virtualization management vendors (VMware, XenServer, etc.) provide the data center’s administration with the ability
to do such tasks
• Provisioning from a template reduces the time required to create a new virtual machine.
• Administrators can create different templates for different purposes.
For example –
• Vagrant provisioning tool, using a Vagrantfile (template file) - Demo
• Heat, the orchestration tool of OpenStack (a Heat template in YAML format) - Demo
This enables the administrator to quickly provision a correctly configured virtual server on demand.
Provisioning from a template is an invaluable feature, because it reduces the time required to create a new virtual machine.
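The template idea can be sketched as cloning a preconfigured definition and customizing only the per-machine settings; the template contents below are illustrative:

```python
import copy

WEB_TEMPLATE = {"os": "Ubuntu 22.04", "vcpus": 2, "ram_gb": 4,
                "software": ["nginx"], "ip": None, "gateway": None}

def provision_from_template(template, ip, gateway):
    vm = copy.deepcopy(template)      # the template itself is never modified
    vm.update(ip=ip, gateway=gateway)
    return vm

vm1 = provision_from_template(WEB_TEMPLATE, "10.0.0.11", "10.0.0.1")
vm2 = provision_from_template(WEB_TEMPLATE, "10.0.0.12", "10.0.0.1")
print(vm1["ip"], vm2["ip"])  # 10.0.0.11 10.0.0.12
```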

Agenda

• Introduction
• Virtual Machine Provisioning and Manageability
• VM Provisioning Process
• Virtual Machine Migration Services
• Migrations Techniques
• VM Provisioning and Migration in action

Virtual Machine Migration Services

The process of moving a virtual machine from one host server or storage location to another.

There are different techniques of VM migration:

- Hot / Live migration,
- Cold / Regular migration, and
- Live Storage migration of a virtual machine.

In this process, all key machines’ components, such as CPU, storage disks, networking, and memory, are completely
virtualized, thereby facilitating the entire state of a virtual machine to be captured by a set of easily moved data files.

25

Agenda

• Introduction
• Virtual Machine Provisioning and Manageability
• VM Provisioning Process
• Virtual Machine Migration Services
• Migrations Techniques
• VM Provisioning and Migration in action

Migration Types
• Migration can be categorized as cold or non-live migration and live migration.
• Based on granularity, the migration can be divided into single and multiple migrations.
• The design and continuous optimization of live migration mechanisms strive
to minimize both downtime and total live migration time.
• The downtime is the time interval during which the migrated service is
unavailable, due to the need for synchronization.
• For a single migration, the migration time refers to the time interval between
the start of the pre-migration phase and the finish of the post-migration phase,
after which the instance is running at the destination host.

• On the other hand, the total migration time of multiple migrations is the time interval between the
start of the first migration and the completion of the last migration.
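Given per-migration timestamps, the two definitions above reduce to simple arithmetic; the timestamps below are illustrative, in seconds:

```python
def migration_time(pre_start, post_end):
    """Single migration: pre-migration start to post-migration finish."""
    return post_end - pre_start

def total_migration_time(migrations):
    """Multiple migrations: first start to last completion."""
    starts, ends = zip(*migrations)
    return max(ends) - min(starts)

batch = [(0, 30), (10, 55), (20, 45)]  # (pre_start, post_end) per migration
print(migration_time(*batch[0]))   # 30
print(total_migration_time(batch)) # 55
```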
Live Migration and High Availability

• Live migration (which is also called hot or real-time migration) can be defined as the movement of a
virtual machine from one physical host to another while being powered on.
• When it is properly carried out, this process takes place without any noticeable effect from the end
user’s point of view (a matter of milliseconds).

• One of the most significant advantages of live migration is the fact that it facilitates proactive
maintenance in case of failure, because the potential problem can be resolved before the disruption
of service occurs.
• Live migration can also be used for load balancing in which work is shared among computers in
order to optimize the utilization of available CPU resources.

Live Migration Anatomy, Xen Hypervisor Algorithm

The steps of the live migration mechanism show how memory and virtual machine state are transferred,

through the network, from one host A to another host B.

The Xen hypervisor is an example of this mechanism.

The migration process can be viewed as a transactional interaction between the two hosts involved:

Live Migration Technique

[Figure: live migration process between Host A and Host B (diagram not reproduced)]
Phases in Migration

Memory and storage transmission can be categorized into three phases:

• Push phase, where the instance is still running on the source host while memory
pages and disk blocks (or written data) are pushed through the network to the destination host.

• Stop-and-Copy phase where the instance is stopped, and the memory pages or disk data is
copied to the destination across the network. At the end of the phase, the instance will
resume at the destination.

• Pull phase where the new instance executes while pulling faulted memory pages when it is
unavailable in the source from the source host across the network.
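The interplay of the push and stop-and-copy phases can be simulated with a toy model: each push round copies the pages dirtied during the previous round, until the dirty set is small enough to stop the VM. The page counts and re-dirty fraction are illustrative:

```python
def pre_copy(total_pages, dirty_fraction, stop_threshold):
    """Return (push rounds used, pages left for the stop-and-copy phase)."""
    to_copy, rounds = total_pages, 0
    while to_copy > stop_threshold:              # push phase: VM keeps running
        rounds += 1
        to_copy = int(to_copy * dirty_fraction)  # pages re-dirtied meanwhile
    return rounds, to_copy  # the remainder is copied with the VM suspended

rounds, downtime_pages = pre_copy(10_000, dirty_fraction=0.2, stop_threshold=50)
print(rounds, downtime_pages)  # 4 16: only 16 pages move during downtime
```

If the workload dirties pages faster than the link can copy them (dirty_fraction >= 1), the loop never converges, which is why real implementations cap the number of pre-copy rounds.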

Live Migration

Image Ref: A Taxonomy of Live Migration Management in Cloud Computing, Tianzhang He and Rajkumar Buyya

Pre - Migration and Post - Migration Phases

• Pre-migration and Post-migration phases are handling the computing and network
configuration.

• During the pre-migration phase, migration management software creates instance’s virtual
interfaces (VIFs) on the destination host, updates interface or ports binding, and networking
management software, such as OpenStack Neutron server, configures the logical router.

• During the post-migration phase, migration management software updates port or interface
states and rebinds the port with networking management software and the VIF driver
unplugs the instance’s virtual ports on the source host.

Live Migration Technique

Pre-migration process:
• VM active on host A
• Destination host selected (block devices mirrored)

Reservation process:
• Initialize container on target host

Iterative pre-copy:
• Copy dirty pages in successive rounds

Stop and copy:
• Suspend VM on A
• Redirect network traffic
• Synch remaining state

Commitment:
• Activate on host B
• VM state on host A released
Live Migration Technique

Post-migration code runs to reattach the device drivers to the new machine and advertise
moved IP addresses.
This approach to failure management ensures that at least one host has a consistent VM
image at all times during migration:
1) The original host remains stable until migration commits, and the VM may be suspended
and resumed on that host with no risk of failure.
2) A migration request essentially attempts to move the VM to a new host; on any sort of
failure, execution is resumed locally, aborting the migration.
Challenges of live migration:
– VMs have lots of state in memory
– Some VMs have soft real-time requirements
Live Migration Effect on a Running Web Server

Clark et al. evaluated the above migration mechanism on an Apache 1.3 web server that
served static content at a high rate. The reported throughput was achieved while
continuously serving a single 512-KB file to a set of 100 concurrent clients.
Live Migration Vendor Implementations Example
There are many VM management and provisioning tools that provide a live VM migration facility.
VMware VMotion:
a) Automatically optimize and allocate an entire pool of resources for maximum hardware utilization,
flexibility, and availability.
b) Perform hardware’s maintenance without scheduled downtime along with migrating virtual machines
away from failing or underperforming servers.

Citrix XenServer “XenMotion”:


1. Based on the Xen live migrate utility, it provides the IT administrator the facility to move a running VM
from one XenServer to another in the same pool without interrupting the service (hypothetically zero-
downtime server maintenance), making it a highly available service and also a good feature for
balancing workloads on virtualized environments.
BITS Pilani
Live Migration Demo

• Using Proxmox deployment tool - For Demo- Refer to the recorded lecture

Other links :

Live Storage Migration on Hyper-V Failover Cluster Manager STEP BY STEP TUTORIAL - YouTube

Virtual Machine migration | Cloud Computing | Lec - 19 | Bhanu Priya - YouTube

BITS Pilani
Cold/regular migration

Cold migration is the migration of a powered-off virtual machine.

With cold migration, you have the option of moving the associated disks from one datastore to another.

• The virtual machines are not required to be on shared storage.

1) Live migration needs shared storage for the virtual machines in the server pool; cold migration does not.

2) Live migration of a virtual machine between two hosts requires certain CPU compatibility checks; in cold migration these checks do not apply.

3) Cold migration (as in VMware products) is easy to implement and can be summarized as follows:

• The configuration files, including the NVRAM file (BIOS settings), log files, and the disks of the virtual machine, are moved from the source host to the destination host's associated storage area.

• The virtual machine is registered with the new host.

• After the migration is completed, the old version of the virtual machine is deleted from the source host.

BITS Pilani
Live Storage Migration of Virtual Machine

• This kind of migration moves the virtual disks or configuration file of a running virtual machine to a new datastore without any interruption in the availability of the virtual machine's service.

BITS Pilani
VM Migration, SLA and On-Demand Computing

• VM migration benefits data centres by making it easy to adjust resource priorities to match resource-demand conditions.

• Role in SLAs: if a VM consumes more than its fair share of resources at the expense of other VMs on the same host, the SLA may be violated.

• Integration between virtualization management tools and SLA management tools is therefore required, to:
– achieve resource balance by migrating VMs
– monitor the workloads
– meet the SLA

BITS Pilani
Migration of Virtual Machines to Alternate Platforms

• Data centre technologies should have the ability to migrate virtual machines from one platform to another.

• There are a number of ways to achieve this, depending on the source and target virtualization platforms and on the vendor tools that manage this facility.

• For example, VMware Converter handles migrations between ESX hosts, VMware Server, and VMware Workstation.

BITS Pilani
FUTURE DIRECTIONS

• Self-adaptive and dynamic data center.

• Performance evaluation and workload characterization of virtual workloads.

• High-performance data scaling in private and public cloud environments.

• Performance and high availability in clustered VMs through live migration.


• VM scheduling algorithms.

• Accelerating VMs live migration time.

• Cloud-wide VM migration and memory de-duplication.

• Live migration security.

BITS Pilani
References

• Rajkumar Buyya, James Broberg & Andrzej M. Goscinski, Cloud Computing – Principles and Paradigms, John Wiley, 2011

• Tianzhang He and Rajkumar Buyya, A Taxonomy of Live Migration Management in Cloud Computing

BITS Pilani
Text and References

T2 Mastering Cloud Computing: Foundations and Applications Programming


Rajkumar Buyya, Christian Vecchiola, S.Thamarai Selvi

T1 Moving To The Cloud: Developing Apps in the New World of Cloud Computing 1st
Edition
by Dinkar Sitaram (Author), Geetha Manjunath (Author)

BITS Pilani, Deemed to be University under Section 3 of UGC Act, 1956


IMP Note to Self

50 BITS Pilani, Deemed to be University under Section 3 of UGC Act, 1956


Thank You !

51 BITS Pilani, Deemed to be University under Section 3 of UGC Act, 1956


Cloud Computing
CS - 7

BITS Pilani Faculty Name: Prof. Pradnya Kashikar


pradnyak@wilp.bits-Pilani.ac.in
Today’s session

Contact Hour | Topic Title (from content structure in Part A) | Text/Ref Book / external resource
13 | 4. Containers (New); 4.1. Linux Containers - LXC and LXD | https://linuxcontainers.org/lxc/introduction/ and https://access.redhat.com/documentation/en-us/red_hat_enterprise_linux_atomic_host/7/html/overview_of_containers_in_red_hat_systems/introduction_to_linux_containers
14 | 4.2. Dockers - Elements, Images, Files, Containers (more focus on 1: Orientation, 2: Containers, 3: Services) | https://docs.docker.com/get-started/
BITS Pilani, Deemed to be University under Section 3 of UGC Act, 1956
IMP Note to Self

Cloud Computing 3
BITS Pilani, Pilani Campus
IMP Note to Students
➢ It is important to know that just login to the session does not guarantee the
attendance.
➢ Once you join the session, continue till the end to consider you as present in the
class.
➢ IMPORTANTLY, you need to make the class more interactive by responding to
Professors queries in the session.
➢ Whenever Professor calls your number / name ,you need to respond, otherwise it
will be considered as ABSENT
4
BITS Pilani, Pilani Campus
Recap of Virtualization

BITS Pilani, Pilani Campus


Containers

BITS Pilani, Pilani Campus


Linux Containers

● Lightweight, OS-level virtualization.
● Allows a single host to operate multiple isolated, resource-controlled Linux instances.
● Builds on features included in the Linux kernel, managed through LXC (Linux Containers).

Containers are not a new technology: the earliest iterations of containers have been
around in open source Linux code for decades.

BITS Pilani, Pilani Campus


Linux Containers
What are Linux Containers?
1. Linux Containers (LXC) allow running multiple isolated Linux instances (containers) on the same host.
2. Containers share the same kernel with anything else that is running on it, but can be constrained to use only a defined amount of resources such as CPU, memory, or I/O.
3. A container is a way to isolate a group of processes from the others on a running Linux system.

Why Linux Containers?
• Provision in seconds / milliseconds
• Near bare-metal runtime performance
• VM-like agility – it's still "virtualization"
• Flexibility: containerize a "system" or "application(s)"
• Lightweight: just enough Operating System (JeOS), minimal per-container penalty
• Growing in popularity
BITS Pilani, Pilani Campus
Terminology in LXC
•Chroot :
A change root (chroot, or change root jail) is a section in the file system which is isolated
from the rest of the file system. For this purpose, the chroot command is used to change
the root of the file system.

•Cgroups :
Kernel Control Groups (commonly referred to as just “cgroups”) are a Kernel feature that
allows aggregating or partitioning tasks (processes) and all their children into hierarchical
organized groups to isolate resources.

•Container :
A “virtual machine” on the host server that can run any Linux system, for example
openSUSE, SUSE Linux Enterprise Desktop, or SUSE Linux Enterprise Server.
BITS Pilani, Pilani Campus
Terminology Continued...
•Container Name :
A name that refers to a container. The name is used by the lxc commands.

•Kernel Namespaces :
A Kernel feature to isolate some resources like network, users, and others for a group of
processes.

•LXC Host Server :


The system that contains the LXC system and provides the containers and
management control capabilities through cgroups.

BITS Pilani, Pilani Campus


Containers

BITS Pilani, Pilani Campus


Containers and VMs

BITS Pilani, Pilani Campus


Container Architecture [Example]

Stack (bottom-up): Hardware/VM → Linux Kernel / Drivers → namespaces, cgroups, SELinux → Management Interface → Containers

● namespaces: allow complete isolation of an application's view of the operating environment, including process trees, networking, user IDs, and mounted file systems.
● cgroups: allow limitation and prioritization of resources (CPU, memory, block I/O, network, etc.).
● Security-Enhanced Linux (SELinux): provides secure separation of containers by applying SELinux policy and labels. It integrates with virtual devices by using the sVirt technology.
BITS Pilani, Pilani Campus


Linux: Container Technology
Underlying technology:
● namespaces/cgroups
○ veth
○ union fs (AUFS)
○ netfilter/chroot/tc/quota
● Low-level container management
○ LXC/libvirt
● Security related
○ grsec/apparmor/SELinux
● High-level container/image management
○ docker/warden/garden/lmctfy/openVZ

BITS Pilani, Pilani Campus


BITS Pilani, Pilani Campus
Containers support separation of various resources. These separations are internally realized with different kernel technologies called "namespaces":

– Filesystem separation → Mount namespace (kernel 2.4.19)
– Hostname separation → UTS namespace (kernel 2.6.19)
– IPC separation → IPC namespace (kernel 2.6.19)
– User (UID/GID) separation → User namespace (kernel 2.6.23 – kernel 3.8)
– Process table separation → PID namespace (kernel 2.6.24)
– Network separation → Network namespace (kernel 2.6.24)
– Usage limits on CPU/memory → Control groups (cgroups)

BITS Pilani, Pilani Campus


BITS Pilani, Pilani Campus
SELinux & Image Container

• SELinux defines access controls for the applications, processes,


and files on a system. It uses security policies, which are a set of
rules that tell SELinux what can or can't be accessed, to enforce the
access allowed by a policy.
• A container image is an unchangeable, static file that includes executable code so it can run an isolated process on information technology (IT) infrastructure. The image comprises system libraries, system tools, and other platform settings a software program needs to run on a containerization platform such as Docker or CoreOS Rkt. The image shares the OS kernel of its host machine.

BITS Pilani, Pilani Campus


How big is the container?
Top 10 image sizes (latest tag) on Docker Hub today:

IMAGE NAME    SIZE
busybox       1 MB
ubuntu        188 MB
swarm         17 MB
nginx         134 MB
registry      423 MB
redis         151 MB
mysql         360 MB
mongo         317 MB
node          643 MB
debian        125 MB

Some minimal Docker images built on top of Alpine:

IMAGE NAME            SIZE
Nginx                 28 MB
64 Bit Server JRE 8   124 MB
64 bit JDK 8          165 MB
Redis                 12 MB

BITS Pilani, Pilani Campus


Benefits of Containerization over Virtualization

● Linux Containers are designed to support isolation of one or more


applications.
● System-wide changes are visible in each container.
For example, if you upgrade an application on the host machine, this
change will apply to all sandboxes that run instances of this application.
● Since containers are lightweight, a large number of them can run
simultaneously on a host machine. The theoretical maximum is 6000
containers and 12,000 bind mounts of root file system directories.

BITS Pilani, Pilani Campus


Minimalistic OS
Tiny Linux distributions created for containers:

http://osv.io/
https://coreos.com/
https://developer.ubuntu.com/en/snappy/
http://boot2docker.io/
http://rancher.com/rancher-os/
https://vmware.github.io/photon/
http://www.projectatomic.io/

BITS Pilani, Pilani Campus


Minimalist OS
A common set of ideas:
● Stability is enhanced through transactional upgrade/rollback semantics.
● Traditional package managers are absent and may be replaced by new
packaging systems (Snappy), or custom image builds (Atomic).

● Security is enhanced through various isolation mechanisms.


● systemd provides system/service management. In general, systemd has been adopted
almost universally among Linux distributions, so this shouldn’t be a surprise.

BITS Pilani, Pilani Campus


Minimalistic OS Comparison

                  CoreOS (647.0.0)      RancherOS (0.23.0)  Atomic (F 22)                          Photon       Snappy (edge – 145)
Size              164 MB                20 MB               151/333 MB                             251 MB       111 MB
Kernel version    3.19.3                3.19.2              4.0.0                                  3.19.2       3.18.0
Docker version    1.5.0                 1.6.0               1.6.0                                  1.5.0        1.5.0
Init system       systemd               Docker              systemd                                systemd      systemd
Package manager   None (Docker/Rocket)  None (Docker)       Atomic                                 tdnf (tyum)  Snappy
Filesystem        ext4                  ext4                xfs                                    ext4         ext4
Tools             Fleet, etcd           –                   Cockpit (Anaconda, kickstart), atomic  –            –

https://blog.inovex.de/docker-a-comparison-of-minimalistic-operating-systems/

BITS Pilani, Pilani Campus


OS-Level Virtualization implementations:
• Solaris Containers (Zones)
• AIX Workload Partitions (WPARs)
• OpenBSD sysjail
• FreeBSD jail
• LXC / LXD
• lmctfy – https://github.com/google/lmctfy
• Linux-VServer – http://linux-vserver.org

https://en.wikipedia.org/wiki/Operating-system-level_virtualization#IMPLEMENTATIONS
BITS Pilani, Pilani Campus
LXC

BITS Pilani, Pilani Campus


LXC Gaps

There are gaps…


•Lack of industry tooling / support
•Full orchestration across resources (compute / storage / networking)
•Fears of security
•Not a well known technology… yet
•Integration with existing virtualization and Cloud tooling
•Not much / any industry standards
•Missing skillset
•Slower upstream support due to kernel dev process

BITS Pilani, Pilani Campus


LXC

BITS Pilani, Pilani Campus


BITS Pilani, Pilani Campus
PaaS products based on Container

https://www.cloudfoundry.org/
http://stratos.apache.org/

https://www.openshift.org/

http://deis.io/
http://getcloudify.org/

https://flynn.io/
https://github.com/dawn/dawn

https://github.com/Yelp/paasta
http://www.octohost.io/
https://tsuru.io/

BITS Pilani, Pilani Campus


BITS Pilani, Pilani Campus
Docker Components

BITS Pilani, Pilani Campus


The four key components that make up the entire
Docker architecture are -

• The Docker Daemon or the server


• The Docker Command Line Interface or the client
• Docker Registries
• Docker Objects -
– Images
– Containers
– Network
– Storage

BITS Pilani, Pilani Campus


Docker Daemon
• The Docker daemon (dockerd) constantly listens to the requests put forward by the Docker API.

• It is used to carry out all the heavy tasks such as creating and managing
Docker objects including containers, volumes, images, and networks.

• A Docker daemon is also capable of communicating with other daemons


in the same or different host machines.

• For example, in the case of a swarm cluster, the host machine’s daemon
can communicate with daemons on other nodes to carry out tasks.
BITS Pilani, Pilani Campus
Docker CLI
• Docker users can leverage simple HTTP clients such as the command line to interact with Docker.
• When a user executes a Docker command such as docker run, the CLI sends the request to the daemon via the REST API.
• The Docker CLI can also communicate with more than one daemon.

BITS Pilani, Pilani Campus


Docker Registries

• The official Docker registry called Dockerhub contains several official image
repositories.
• A repository contains a set of similar Docker images that are uniquely identified
by Docker tags.
• Dockerhub provides tons of useful official and vendor-specific images to its
users. Some of them include Nginx, Apache, Python, Java, Mongo, Node,
MySQL, Ubuntu, Fedora, Centos, etc.
• You can even create your private repository inside Dockerhub and store your
custom Docker images using the Docker push command.
• Docker allows you to create your own private Docker registry in your local
machine using an image called ‘registry’.
• Once you run a container associated with the registry image, you can use the
Docker push command to push images to this private registry.

BITS Pilani, Pilani Campus


Docker Objects

A Docker user frequently interacts with Docker objects such as


• Images
• Containers
• Volumes
• Plugins
• Networks and so on.

BITS Pilani, Pilani Campus


Docker Images

• Docker images are read-only templates that are built from multiple layers of files.
• You can build Docker images using a simple text file called a Dockerfile, which contains instructions to build the image.
• The first instruction is a FROM instruction, which can pull a base image from any Docker registry. Once this base image layer is created, several instructions are then used to create the container environment. Each instruction adds a new layer on top of the previous one.
• A Docker image is simply a blueprint of the container environment. Once you create a container, Docker adds a writable layer on top of the image, in which you can make changes. The image contains all the metadata that describes the container environment. You can either pull a Docker image directly from Docker Hub or create a customized image over a base image using a Dockerfile.
• Once you have created a Docker image, you can push it to Docker Hub or any other registry and share it with the outside world.

BITS Pilani, Pilani Campus


Docker Containers
• Docker containers are isolated, encapsulated, packaged, and secured application environments that contain all the packages, libraries, and dependencies required to run an application.

• For example, if you create a container associated with the Ubuntu image, you will have access to an isolated Ubuntu environment. You can also access the bash shell of this Ubuntu environment and execute commands.

• Containers have access to the resources that you define in the Dockerfile while creating an image. Such configurations include build context, network connections, storage, CPU, memory, ports, etc.

• For example, if you want access to a container with Java libraries installed, you can use the Java image from Docker Hub and run a container associated with this image using the docker run command.

• You can also create containers associated with the custom images that you build for your application using Dockerfiles. Containers are very light and can be spun up within a matter of seconds.

BITS Pilani, Pilani Campus


A Dockerfile

• A Dockerfile is a text file that consists of all the commands a user could call on the command line to build an image: choosing a base Docker image, adding and copying files, running commands, and exposing ports. The Dockerfile can be considered the source code, and the image the compiled artifact, for our container, which is the running code. Dockerfiles are portable files which can be shared, stored, and updated as required.

BITS Pilani, Pilani Campus


Some of the Dockerfile instructions are:

FROM - Sets the base image for the subsequent instructions. It is very important that this appears in the first line of the Dockerfile.
MAINTAINER - Indicates the author of the Dockerfile; it is non-executable.
RUN - Executes a command on top of the existing layer and creates a new layer with the result of the command execution.
CMD - Performs nothing during the building of the Docker image; it just specifies the default command to be used when a container is run from the image.
BITS Pilani, Pilani Campus


LABEL - Assigns metadata in the form of key-value pairs. It is always best to use as few LABEL instructions as possible.
EXPOSE - Declares the specific ports to listen on, as required by application servers.
ENV - Sets environment variables in the Dockerfile for the container.
COPY - Copies files and directories from a source folder to a destination folder.
WORKDIR - Sets the current working directory for the other instructions, i.e., RUN, CMD, COPY, etc.
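Putting these instructions together, a minimal Dockerfile for a small Python web application might look like the sketch below. The image tag, file paths, port number, and commands are illustrative assumptions, not part of any particular project:

```dockerfile
# Base image pulled from a registry (must be the first instruction)
FROM python:3.11-slim
LABEL maintainer="team@example.com"
# Environment variable available inside the container
ENV APP_ENV=production
# Working directory for the instructions that follow
WORKDIR /app
# Copy application sources from the build context into the image
COPY . /app
# Executed at build time; each RUN creates a new image layer
RUN pip install --no-cache-dir -r requirements.txt
# Document the port the application listens on
EXPOSE 8000
# Default command; runs when a container starts, not at build time
CMD ["python", "app.py"]
```

Building with `docker build -t myapp .` would produce an image whose layers correspond to the instructions above; `docker run -p 8000:8000 myapp` would then create a container from it.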

BITS Pilani, Pilani Campus


Networks: you can create a secured channel so that all the isolated containers in a cluster can communicate and share data or information.
• There are 5 chief types of Docker network drivers available. They are -

– Bridge driver - The bridge network driver is mostly used when you have a multi-container application running on the same host machine. This is the default network driver.

– Host driver - If you don't require any network isolation between the Docker host machine and the containers on the network, you can use the host driver.

– Overlay driver - When you use Docker swarm mode to run containers on different hosts on the same network, you can use the overlay network driver. It allows different swarm services hosting different components of multi-container applications to communicate with each other.

– Macvlan driver - The macvlan driver assigns a MAC address to each container in the network. Due to this, each container can act as a standalone physical host. The MAC addresses are used to route traffic to the appropriate containers. This can be used in cases such as migration of a VM setup, etc.

– None driver - Disables networking for a container entirely; the container gets no external network interface.
BITS Pilani, Pilani Campus
Storage : As soon as you exit a container, all your progress and data inside the container are lost.
To avoid this, you need a solution for persistent storage . Docker provides several options for
persistent storage using which you can share, store, and backup your valuable data. These are -

• Volumes - You can use directories inside your host machine and mount them as volumes inside Docker containers.
These are located in the host machine’s file system which is outside the copy-on-write mechanism of the container.
Docker has several commands that you can use to create, manage, list, and delete volumes.

• Volume Container - You can use a dedicated container as a volume and mount it to other containers. This container will
be independent of other containers and can be easily mounted to multiple containers.

• Directory mounts - You can mount a local directory present in the host machine to a container. In the case of volumes,
the directory must be within the volumes folder in the host machine to be mounted. However, in the case of directory
mounts, you can easily mount any directory on your host as a source.

• Storage Plugins - You can use storage plugins to connect to any external storage platform such as an array, appliance,
etc, by mapping them with the host storage

BITS Pilani, Pilani Campus


Docker Architecture includes a Docker client – used to trigger Docker commands, a Docker Host –
running the Docker Daemon and a Docker Registry – storing Docker Images. The Docker Daemon running within
Docker Host is responsible for the images and containers.
To build a Docker Image, we can use the CLI
(client) to issue a build command to the Docker
Daemon (running on Docker Host). The Daemon
will then build an image based on our inputs and
save it in the Registry, which can be either Docker
hub or a local repository
If we do not want to create an image, then we can just pull an image from the Docker hub, which would have been built by a different user.
Finally, if we have to create a running instance of a Docker image, we can issue a run command from the CLI, which will create a Container.

BITS Pilani, Pilani Campus


Text and References

T2 Mastering Cloud Computing: Foundations and Applications Programming


Rajkumar Buyya, Christian Vecchiola, S.Thamarai Selvi

T1 Moving To The Cloud: Developing Apps in the New World of Cloud Computing 1st
Edition
by Dinkar Sitaram (Author), Geetha Manjunath (Author)

BITS Pilani, Deemed to be University under Section 3 of UGC Act, 1956


IMP Note to Self

65 BITS Pilani, Deemed to be University under Section 3 of UGC Act, 1956


Web Links

Containers - Explained in 4 Minutes - YouTube

Containers: cgroups, Linux kernel namespaces, ufs, Docker, and intro to Kubernetes pods - YouTube

Dockerfile >Docker Image > Docker Container | Beginners Hands-On | Step by Step - YouTube

BITS Pilani, Deemed to be University under Section 3 of UGC Act, 1956


Thank You !

67 BITS Pilani, Deemed to be University under Section 3 of UGC Act, 1956


Cloud Computing
CS - 8

BITS Pilani Faculty Name: Prof. Pradnya Kashikar


pradnyak@wilp.bits-Pilani.ac.in
Today’s session

Contact Hour | Topic Title (from content structure in Part A) | Text/Ref Book / external resource
15 | 4.2. Dockers - Files, Containers | https://docs.docker.com/get-started/
16 | 4.3. Cloud orchestration technologies | https://www.ibm.com/developerworks/cloud/library/cl-cloud-orchestration-technologies-trs/index.html

BITS Pilani, Deemed to be University under Section 3 of UGC Act, 1956


IMP Note to Self

Cloud Computing 3
BITS Pilani, Pilani Campus
IMP Note to Students
➢ It is important to know that just login to the session does not guarantee the
attendance.
➢ Once you join the session, continue till the end to consider you as present in the
class.
➢ IMPORTANTLY, you need to make the class more interactive by responding to
Professors queries in the session.
➢ Whenever Professor calls your number / name ,you need to respond, otherwise it
will be considered as ABSENT
4
BITS Pilani, Pilani Campus
Docker Architecture

Docker Architecture
includes a Docker client – used to trigger
Docker commands, a Docker Host –
running the Docker Daemon and a Docker
Registry – storing Docker Images. The
Docker Daemon running within Docker
Host is responsible for the images and
containers.

BITS Pilani, Pilani Campus


Docker Architecture

To build a Docker Image, we can use the CLI


(client) to issue a build command to the Docker
Daemon (running on Docker Host). The Daemon
will then build an image based on our inputs and
save it in the Registry, which can be either Docker
hub or a local repository
If we do not want to create an image, then we can just pull an image from the Docker hub, which would have been built by a different user.
Finally, if we have to create a running instance of a Docker image, we can issue a run command from the CLI, which will create a Container.

BITS Pilani, Pilani Campus


BITS Pilani, Pilani Campus
What is container orchestration?
Container orchestration is all about managing the lifecycles of containers,
especially in large, dynamic environments.

Software teams use container orchestration to control and automate many tasks:

• Provisioning and deployment of containers


• Redundancy and availability of containers
• Scaling up or removing containers to spread application load evenly across host
infrastructure
• Movement of containers from one host to another if there is a shortage of resources in a host, or if a host dies
• Allocation of resources between containers
• External exposure of services running in a container with the outside world
• Load balancing and service discovery among containers
• Health monitoring of containers and hosts
• Configuration of an application in relation to the containers running it

BITS Pilani, Pilani Campus


BITS Pilani, Pilani Campus
What Is Container Orchestration Used For?
In Brief

Orchestrating containers has various uses, including:


• Configure and schedule
• Load balancing among containers
• Allocate resources among containers
• Monitor the health of containers and hosts

BITS Pilani, Pilani Campus


BITS Pilani, Pilani Campus
ECS - Elastic Container Service

BITS Pilani, Pilani Campus


How does container orchestration work?

Container orchestration works through tools like Kubernetes or Docker Swarm:

• You describe the configuration of your application in a YAML or JSON file, depending on the orchestration tool. These configuration files (for example, docker-compose.yml) tell the orchestration tool where to gather container images (for example, from Docker Hub), how to establish networking between containers, how to mount storage volumes, and where to store logs for each container. Teams typically branch and version-control these configuration files so they can deploy the same applications across different development and testing environments before deploying them to production clusters.

• Containers are deployed onto hosts, usually in replicated groups. When it's time to deploy a new container into a cluster, the container orchestration tool schedules the deployment and looks for the most appropriate host to place the container based on predefined constraints (for example, CPU or memory availability).

• Once the container is running on the host, the orchestration tool manages its lifecycle according to the specifications you laid out in the container's definition file (for example, its Dockerfile).
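As a sketch of such a configuration file, a minimal docker-compose.yml could declare two containers, their dependency, and a persistent volume. The service names, image tags, port mapping, and password are illustrative assumptions only:

```yaml
version: "3.8"
services:
  web:
    image: nginx:alpine            # image gathered from Docker Hub
    ports:
      - "8080:80"                  # host:container port mapping
    depends_on:
      - db                         # start order hint between containers
  db:
    image: mysql:8
    environment:
      MYSQL_ROOT_PASSWORD: example # illustrative only, not for production
    volumes:
      - db-data:/var/lib/mysql     # named volume for persistent storage
volumes:
  db-data:
```

Running `docker compose up` against such a file would create both containers on a shared default bridge network, which is how the tool "establishes networking between containers" from a declarative description.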
BITS Pilani, Pilani Campus
Kubernetes: the gold standard

• An advantage of container orchestration tools is that they can be used in any environment in which you can run containers, and containers are supported in just about any kind of environment these days, from traditional on-premise servers to public cloud instances running in Amazon Web Services (AWS), Google Cloud Platform (GCP), or Microsoft Azure. Additionally, most container orchestration tools are built with Docker containers in mind.

• Kubernetes can act as a self-service Platform-as-a-Service (PaaS) that creates a hardware-layer abstraction for development teams. Kubernetes is also extremely portable: it runs on Amazon Web Services (AWS), Microsoft Azure, the Google Cloud Platform (GCP), or in on-premise installations. We can move workloads without having to redesign applications or completely rethink the infrastructure, which helps you to standardize on a platform and avoid vendor lock-in.
BITS Pilani, Pilani Campus


Kubernetes: the gold standard

– The main architecture components of Kubernetes include:

– Cluster. A cluster is a set of nodes with at least one master node and several worker nodes (sometimes referred to as minions) that can be virtual or physical machines.

– Kubernetes master. The master manages the scheduling and deployment of application instances across
nodes, and the full set of services the master node runs is known as the control plane. The master
communicates with nodes through the Kubernetes API server. The scheduler assigns nodes to pods (one or
more containers) depending on the resource and policy constraints you’ve defined.

– Kubelet. Each Kubernetes node runs an agent process called a kubelet that’s responsible for managing the
state of the node: starting, stopping, and maintaining application containers based on instructions from the
control plane. A kubelet receives all of its information from the Kubernetes API server.

BITS Pilani, Pilani Campus


Kubernetes: the gold standard

• Pods. The basic scheduling unit, which consists of one or more containers guaranteed to be co-located on the
host machine and able to share resources. Each pod is assigned a unique IP address within the cluster,
allowing the application to use ports without conflict. You describe the desired state of the containers in a pod
through a YAML or JSON object called a PodSpec. These objects are passed to the kubelet through the API
server.

• Deployments, replicas, and ReplicaSets. A deployment is a YAML object that defines the pods and the
number of container instances, called replicas, for each pod. You define the number of replicas you want to
have running in the cluster via a ReplicaSet, which is part of the deployment object. So, for example, if a node
running a pod dies, the replica set will ensure that another pod is scheduled on another available node.
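The Deployment/ReplicaSet idea above can be sketched as a minimal YAML object. The names, labels, and image tag are illustrative assumptions:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web-deployment
spec:
  replicas: 3                # desired pod count, maintained by the ReplicaSet
  selector:
    matchLabels:
      app: web
  template:                  # pod template (PodSpec) with one container
    metadata:
      labels:
        app: web
    spec:
      containers:
      - name: web
        image: nginx:1.25
        ports:
        - containerPort: 80
```

Applied with `kubectl apply -f deployment.yaml`, this asks the control plane to keep three replicas running; if a node hosting one of the pods dies, the ReplicaSet schedules a replacement pod on another available node.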



Docker Swarm: a hardy bit player

• Docker Swarm: a fully integrated and open-source container orchestration tool for
packaging and running applications as containers, deploying them, and even locating
container images from other hosts.

• The main architecture components of Swarm include:

• Swarm. Like a cluster in Kubernetes, a swarm is a set of nodes with at least one manager
node and several worker nodes that can be virtual or physical machines.



Docker Swarm: a hardy bit player

• Service. A service defines the tasks that manager or worker nodes must perform on the swarm, as specified
by a swarm administrator. A service defines which container images the swarm should use and which
commands the swarm will run in each container. A service in this context is analogous to a microservice; for
example, it’s where you’d define configuration parameters for an nginx web server running in your swarm.
You also define parameters for replicas in the service definition.
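As a hedged sketch, a service like this can be declared in a Docker stack file (Compose format); the service name, image tag, and replica count here are assumptions for illustration:

```yaml
# Stack file for `docker stack deploy` -- illustrative only.
version: "3.8"
services:
  web:                   # hypothetical service name
    image: nginx:1.25    # container image the swarm should use
    ports:
      - "80:80"          # publish the service on port 80
    deploy:
      replicas: 3        # number of task replicas across the swarm
```

Deployed with something like `docker stack deploy -c stack.yml mystack`, the manager node turns the `web` service into three tasks and distributes them across the worker nodes.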

• Manager node. When you deploy an application into a swarm, the manager node provides several
functions: it delivers work (in the form of tasks) to worker nodes, and it also manages the state of the swarm
to which it belongs. A manager node can run the same services worker nodes do, but you can also
configure it to run only manager-related services.



Docker Swarm: a hardy bit player

• Worker nodes. These nodes run tasks distributed by the manager node in the swarm. Each worker node
runs an agent that reports back to the manager node about the state of the tasks assigned to it, so the
manager node can keep track of services and tasks running in the swarm.

• Task. Tasks are Docker containers that execute the commands you defined in the service. Manager
nodes assign tasks to worker nodes, and after this assignment, a task cannot be moved to another
worker. If a task in a replicated service fails, the manager will schedule a new replica of that task on
another available node in the swarm.



Docker
Text and References

T1 Moving To The Cloud: Developing Apps in the New World of Cloud Computing, 1st Edition,
by Dinkar Sitaram and Geetha Manjunath

T2 Mastering Cloud Computing: Foundations and Applications Programming,
by Rajkumar Buyya, Christian Vecchiola, and S. Thamarai Selvi

BITS Pilani, Deemed to be University under Section 3 of UGC Act, 1956



Web links for reference

What Is Docker? | What Is Docker And How It Works? | Docker Tutorial For Beginners |
Simplilearn (youtube.com)

Dockerfile >Docker Image > Docker Container | Beginners Hands-On | Step by Step (youtube.com)



Thank You!

