
UNIT 1 : Cloud Programming Models

18 January 2022 16:28

Cloud computing is a model for enabling ubiquitous (wherever), convenient, on-demand (whenever) network access to a shared pool of configurable computing
resources that can be rapidly provisioned and released with minimal management effort or service provider interaction.

Features of Cloud Computing


• On-demand self-service: Compute, storage or platform resources required by the user are self-provisioned or auto-provisioned with minimal configuration.
• Broad network access: Ubiquitous access to cloud applications from computing devices is critical to the success of the cloud platform.
• Resource pooling: Services should be able to support concurrent users; resources need to be shared between users and clients to reduce costs.
• Scalability: Cloud services can accommodate larger or smaller loads while supporting some of the expectations of QoS, such as response time.
• Elasticity: Should be able to rapidly increase/decrease computing resources as needed.
• Measured service: The ability to "pay as you go", where the consumer pays only for the resources they have used.

Benefits of Cloud Computing:


• Agility: Quickly spin up resources as you need them from infrastructure services such as compute, storage, and databases.
• Elasticity: Scale these resources up or down to instantly grow and shrink capacity as your business needs change.
• Cost savings: Cloud allows you to trade capital expenses for variable expenses, and pay only for IT as you consume it.
• Deploy globally in minutes: Expand to new geographic regions and deploy globally in minutes.

High Performance Computing (HPC): provides this capability (raw computing power) for short periods of time; no longer considered optimal on its own.
(Focus: how quickly and how correctly the solutions are produced.)
Homogeneous nodes where performance increases due to the presence of more computational power; it can be thought of as scaling the computational resources
of a single workstation up to a mammoth scale.

High Throughput Computing (HTC): what is needed now, using distributed and parallel computing.
(Focus: how to handle parallelization of the given problem.)
Distributed computing, where the given problem is split and scheduled among various resources.

Computing paradigms

Distributed computing: A system that consists of multiple autonomous computers, each having its own private memory, communicating through a network; this
communication is accomplished using message passing. (Multiple independent computers connected to one another through some network.)
Includes features such as fault tolerance and concurrency of components.

Distributed system models:


1. Architectural models:
a. Ways in which responsibilities are distributed between system components and how these components are placed.
Example: Cluster architecture, P2P, Client-server model
2. Interaction models:
a. How time is handled; limits on process execution, message delivery and clock drift.
Example: Synchronous distributed systems, Asynchronous distributed systems
3. Fault models:
a. What kinds of faults can occur and what their effects are.
Example: Omission faults, Arbitrary faults, Timing faults

Architectural Models:
Cluster computing: A cluster is a set of loosely or tightly connected computers that work together so that, in many respects, they can be viewed as a single system.
All nodes typically run the same OS.
Each node in a cluster is tasked to perform the same task, scheduled by software.
It is a hierarchical construction of a network that can be scaled by increasing the number of nodes.
The cluster is connected to the Internet via a VPN (virtual private network) gateway; the gateway IP address locates the cluster.
The system image of a computer is decided by the way the OS manages the shared cluster resources.
All resources of a server node are managed by its own OS; thus most clusters have multiple system images, as a result of having many autonomous nodes under
different OS control.

P2P computing: Every node acts as both a client and a server; peers autonomously join or leave the network.
No central coordination or central database is needed, and no peer machine has a global view of the entire network.
It is a self-organizing system with distributed control, which implies that no master-slave relationship exists among the peers.
Processing and communication loads for access to objects are distributed across many computers and access links.

Client-Server Model: The system is structured as a set of processes, called servers, that offer services to the users, called clients.
It is usually based on a simple request/reply protocol, implemented with send/receive primitives or using RPC.
The client asks the server for a service; the server does the work and returns the result, or an error code if the required work cannot be done.
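As a rough illustration of the request/reply pattern described above, the following sketch uses Python's built-in XML-RPC modules; the service function and port number are illustrative, not part of any particular cloud platform.

# Minimal request/reply sketch of the client-server model using Python's
# built-in XML-RPC support; the "add" service and port 8000 are illustrative.
import threading
import time
from xmlrpc.server import SimpleXMLRPCServer
from xmlrpc.client import ServerProxy

def add(a, b):
    # The "service" offered by the server process.
    return a + b

server = SimpleXMLRPCServer(("localhost", 8000), logRequests=False)
server.register_function(add, "add")

# Run the server in a background thread so the client can call it.
threading.Thread(target=server.serve_forever, daemon=True).start()
time.sleep(0.2)   # give the server a moment to start listening

# The client sends a request and blocks until the server replies.
client = ServerProxy("http://localhost:8000")
print(client.add(2, 3))   # -> 5
server.shutdown()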

Interaction Models:
Synchronous Distributed Systems: Lower and upper bounds on execution time of processes can be set
Transmitted messages are received within a known bounded time
Drift rates between local clocks have a known bound

Consequences of having synchronous systems:


A notion of global physical time exists (with a known bound on clock drift)
Predictable in terms of timing; only such systems can be used for hard real-time applications
Possible and safe to use timeouts in order to detect failures of a process or communication link
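A minimal sketch of timeout-based failure detection, assuming a TCP service and a known delay bound; the host, port and 2-second bound are illustrative.

# Using a timeout to suspect a failed process or link. This is safe only when
# message delay bounds are known (synchronous system); the host, port and
# 2-second bound below are illustrative assumptions.
import socket

def is_alive(host, port, bound_seconds=2.0):
    try:
        with socket.create_connection((host, port), timeout=bound_seconds):
            return True       # connection established within the known bound
    except (socket.timeout, OSError):
        return False          # bound exceeded or connection failed -> suspect failure

print(is_alive("localhost", 8000))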

Asynchronous Distributed Systems: No bound on process execution time
No bound on message transmission delays
No bound on drift rates between local clocks

Consequences of having asynchronous systems:


No global physical time; reasoning can only be in terms of logical time
Unpredictable in terms of timing
Cannot use timeouts to diagnose failures.

Parallel computing: All processors are either tightly coupled with centralized shared memory or loosely coupled with distributed memory.
Communication between processors is achieved using shared memory or message passing. (A single computer with many processors.)

The architecture involves several processors simultaneously executing multiple smaller calculations broken down from an overall larger, complex problem.
These smaller pieces are independent in nature.
Results are combined upon completion as part of the overall algorithm.
Computation requests are distributed in small chunks by the application server and are then executed simultaneously on each server.
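A minimal sketch of this split-compute-combine pattern using Python's standard library; the chunk size and the sum-of-squares workload are illustrative.

# Split a large computation into independent chunks, execute the chunks in
# parallel on several processor cores, then combine the results.
from concurrent.futures import ProcessPoolExecutor

def chunk_sum(chunk):
    # Each worker handles one independent piece of the overall problem.
    return sum(x * x for x in chunk)

if __name__ == "__main__":
    data = list(range(1_000_000))
    chunks = [data[i:i + 100_000] for i in range(0, len(data), 100_000)]

    with ProcessPoolExecutor() as pool:
        partial_results = list(pool.map(chunk_sum, chunks))

    # Results are combined upon completion, as described above.
    print(sum(partial_results))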

The different types of parallelism are:


• Bit level
• Instruction level
• Task level
• Data level

Parallel computing architecture:


• Multi-core computing: A computer processor integrated circuit with 2 or more separate processing cores, each of which executes program instructions in parallel.
• Symmetric multiprocessing: There exist 2 or more independent, homogeneous processors controlled by a single OS instance that treats all processors equally,
connected to a single shared main memory with full access to all common resources and devices.
• Massively parallel computing: Use of numerous computer processors to simultaneously execute a set of computations in parallel.

It also offers:
Application checkpointing: to help provide fault tolerance and to restart from the last saved point in the event of a failure (a minimal sketch follows this list)
Automatic parallelization: conversion of sequential code into parallel code
Parallel programming languages: help in developing programs that use shared memory.
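A minimal application checkpointing sketch, assuming a simple loop as the workload; the file name and checkpoint interval are illustrative.

# Periodically save progress so a failed run can restart from the last
# checkpoint instead of starting from scratch.
import os
import pickle

CHECKPOINT = "progress.pkl"

# Restart from the checkpoint if an earlier run left one behind.
if os.path.exists(CHECKPOINT):
    with open(CHECKPOINT, "rb") as f:
        state = pickle.load(f)
else:
    state = {"next_item": 0, "total": 0}

for i in range(state["next_item"], 1000):
    state["total"] += i            # one unit of work
    state["next_item"] = i + 1
    if i % 100 == 0:               # checkpoint every 100 items
        with open(CHECKPOINT, "wb") as f:
            pickle.dump(state, f)

print(state["total"])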

Grid computing: The main objective is to enable computing to be delivered as a utility.

It was meant to be used by users who access the computing devices without knowing where the resources are located or what hardware is running.
It can be thought of as a distributed system with non-interactive workloads that involve many files.
Each node is set to perform a different task/application.

Features of grid computing:


1. Coordinates resources that are not subject to centralized control.
2. Uses standard, open, general-purpose protocols and interfaces.
3. Delivers non-trivial quality of service.

Resource-sharing agreements need to be formed among the set of participating parties (direct access to resources by other users).

Computational grid is a hardware and software infrastructure that provides dependable, consistent, inexpensive access to high-end computational capabilities.

Data grid is the storage component of the grid. Data is often distributed; this grid provides access to the local and remote data required to complete compute-intensive
calculations.

Grid Environment:
Grid Information Service: Collects details of the available resources and passes them to the resource broker.
User: Sends a computation- or data-intensive application to the global grid in order to speed up its execution.
Resource Broker: Distributes the jobs in an application to grid resources based on user requirements and availability.
Grid Resources: Clusters, PCs, supercomputers and databases in the global grid that execute users' jobs.

Difference between distributed and grid computing


Distributed computing: uses a centralized resource manager, and all nodes cooperatively work together as a single unified resource.
Grid computing: uses a structure where each node has its own resource manager, and the system does not act as a single unit.

Difference between cluster and grid computing


• Cluster: a set of computers or devices that work together so that they can be viewed as a single system. Grid: use of widely distributed computing resources to reach a common goal.
• Cluster: nodes have the same hardware and OS. Grid: nodes have different hardware and various OS.
• Cluster: each node performs the same task, controlled and scheduled by software. Grid: each node performs a different task.
• Cluster: a homogeneous network. Grid: a heterogeneous network.
• Cluster: located in a single location. Grid: devices located in various locations.
• Cluster: devices connected through a fast LAN. Grid: devices connected through a low-speed network or the internet.
• Cluster: centralized resource manager. Grid: each node has its own resource manager and behaves like an independent entity.


Note: A grid network can be thought of as an interconnection of clusters and usually has thousands of hosts.

Difference between Cloud and Grid computing


• Cloud: follows a client-server computing architecture. Grid: follows a distributed computing architecture.
• Cloud: scalability is high. Grid: scalability is normal.
• Cloud: more flexible. Grid: less flexible.
• Cloud: operates as a centralized management system. Grid: operates as a decentralized management system.
• Cloud: service-oriented. Grid: application-oriented.

Cloud computing: A pool of resources that can be either centralized or distributed.

Clouds can be implemented using parallel or distributed computing, or both.
They can be built with physical or virtual resources over large data centers.
Cloud computing can be referred to as a service that enables parallel computing.

Cloud Computing Models and Business Case


The cloud model is composed of 5 essential characteristics:
• On-demand self-service (it should be easy for the user to choose the resources they need)
• Ubiquitous network access (Broad network access)
• Resource Pooling
• Rapid elasticity
• Measured service (pay per use)

Technologies that enable cloud computing are:


1. Broadband networks and internet architecture: all clouds must be connected to a network.
2. Data center technology
3. Virtualization technology: the ability to mimic several independent machines on a single device/hardware.
4. Web technology: enables communication and building applications on the cloud (URLs for objects stored on the cloud, etc.).
5. Multitenant technology: multiple clients all use the same physical resources but are logically isolated from one another.

Service models of cloud computing


• On-premises: The entire infrastructure is paid for and managed by the user.
This includes maintenance of applications, data, middleware, servers, storage, etc.

• Infrastructure (as a Service): The physical resources are abstracted into virtual servers and storage.
The compute, storage and networking resources are available on demand, on a pay-as-you-go basis.
The virtual resources are allocated on demand to cloud users, and these can be configured into virtual systems on which any desired software can be installed.
It offers the greatest flexibility of the available options but is also the most difficult to manage and configure.
It is suited for users who want complete control over the application/software stack that they run.

• Platform (as a Service): Provides a platform, built on top of the abstracted hardware, that can be used by developers to develop applications.
It has commands available that allow them to allocate middleware servers (a database of a certain kind, an OS of a certain kind) and to configure and load data into the
middleware.
We can then develop an application that runs on top of the middleware.

• Software (as a Service): Provides the complete application as a service, enabling consumers to use the cloud without worrying about the
complexities of hardware, OS or even application installation.
It provides the least flexibility amongst the options.

Technology challenges
• Scalability: the ability to accommodate larger or smaller loads while supporting some of the expectations of QoS (Quality of Service), such as response time.
This will need to support:
Scaling with the spread of a wide range of environments
Sharing of the same resources with many users.
• Elasticity: actually/practically increasing or decreasing the resources to cope with loads dynamically.
Scale up (vertical scaling): creating additional resources using virtualization
Scale out (horizontal scaling): actually adding hardware resources
Scale down
Resource allocation and workload placement algorithms are required.
• Performance Unpredictability: Resources are shared, so performance isolation of shared resources has to be guaranteed.
• Reliability and Availability: Hardware failures and software bugs can be expected to occur relatively frequently.
The problem is complicated by the fact that failures can trigger further failures, leading to an avalanche of outages.
Factors affecting reliability and availability
High number of components
Complexity
• Security: Considerations towards violation of confidentiality, data privacy, and data leakage and loss.
Isolation of users, legal and process issues (physical security).
Cloud service providers provide auditable and safe identity management, access control procedures for authentication and authorization,
firewalls, encryption, privacy protocols, recovery policies, SLAs, etc.
• Compliance: The process of meeting the requirements of the service users, or the laws of the country.
The technology will need to enable business operations to comply with the expectations of customers.
The challenge for the user is to know whether a cloud provider is complying with privacy rules or the laws, and for the cloud service provider to be enabled
by technology for compliance.
• Multi-Tenancy: Mode of operation where a single instance of the component serves multiple tenants or groups of users.
The instances (tenants) are logically isolated but physically integrated, and the architecture provides every tenant with a dedicated
share of the instance, including configurations and data.
The same database is shared by multiple users.
Sharing happens without compromising security.
• Interoperability
• Portability
• Network Capability and Computing Performance
• Application Recovery: In addition to directing new requests to a server that is up, it is necessary to recover old requests.
An application independent method of doing this is checkpoint/restart.
• Availability: Achieved by using redundancy in the infrastructure.
To ensure high availability, 2 techniques are used:
1. Failure detection: heartbeats, probes (a minimal sketch follows this list)
2. Application recovery
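A minimal heartbeat-based failure detection sketch; the 1-second heartbeat interval and 3-second timeout are illustrative assumptions.

# A monitored service records heartbeats, and a monitor declares it failed if
# no heartbeat arrives within a timeout.
import threading
import time

last_heartbeat = time.monotonic()

def service():
    global last_heartbeat
    for _ in range(5):                 # the service "beats" five times, then dies
        last_heartbeat = time.monotonic()
        time.sleep(1)

threading.Thread(target=service, daemon=True).start()

TIMEOUT = 3.0
while True:
    if time.monotonic() - last_heartbeat > TIMEOUT:
        print("No heartbeat received -> mark node failed, trigger recovery")
        break
    time.sleep(0.5)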

Cloud Deployment Models


Public Cloud: The infrastructure for the public cloud is owned by the cloud vendor.
The cloud user pays the cloud vendor for using the infrastructure.
The cloud service runs on remote servers that the vendor manages (backup, security, upgrades, recovery).
Customers of this service access it through the internet.
A pool of virtual resources is automatically provisioned and allocated among multiple clients through a self-service interface.

Resource Allocation: Tenants outside the provider's firewall share cloud services and virtual resources that come from the provider's
set of infrastructure, platforms and software.
Usage Agreements: While resources are distributed on an as-needed basis, a pay-per-use model isn't a necessary component.
Some customers use public clouds at no cost.
Management: At a minimum, the provider maintains the hardware underneath the cloud, supports the network, and manages the
virtualization software.

Why to use it:
• Cheaper than private cloud; also reduces significant initial investments
• Less server management
• Time saving (time can be spent developing applications instead of maintaining servers)
• Location independence
• Virtually unlimited scalability; resources expand to meet user demands and traffic spikes, and users achieve greater redundancy and high availability
Why not to use it:
• Security
• Compliance
• Interoperability and vendor lock-in

Private Cloud: It utilizes in-house infrastructure to host the different cloud services.
It is a computing model that offers a proprietary environment dedicated to a single business.
The strategy might consist of hardware hosted locally at a facility owned by the business, or it may be hosted by a cloud service provider.
Virtual private clouds are typically paid for on a rolling basis, but provisioned hardware and storage configurations maintain the benefits
of a secure, exclusive network.

Internal private cloud: It is hosted on the organization's own premises and is managed by the organization internally.
The organization manages and operates the internal cloud themselves.
This means they purchase the servers, keep them up and running, and administer the software that runs on them.

Hosted private cloud: It is off-premise instead of on-premise, meaning the cloud servers are not physically located at the grounds
of the organization using them. Instead, a third party manages and hosts the cloud remotely.

Why to use it:
• More control
• Security
• Compliance
• Customization; they are fully configurable by the organization
Why not to use it:
• Building your own private cloud requires very large capital
• Under-utilization; the cost of under-utilized capacity falls on the organization, not on a provider
• Platform scaling

Hybrid Clouds: A cloud computing environment that uses a mix of on-premise, private cloud and third-party public cloud services.
It involves a connection from an on-premise data center to a public cloud.
It allows enterprises to deploy workloads in private IT environments or public clouds and to move between them as computing needs and costs change.
It helps in providing greater flexibility and more data deployment options.
A workload includes the network, hosting and web service features of an application.

Cloud Architecture
Refers to the various components engineered to leverage the power of cloud resources to solve business problems.
Cloud architecture defines the components as well as the relationships between them.

These components typically consist of:


1. A front-end platform (fat client, thin client, mobile)
2. A back-end platform (servers, storage)
3. A cloud-based delivery
4. A network (internet, intranet)
A combination of these components makes up the cloud computing architecture.

Frontend:
It is used by the client; it contains the client-side interfaces and applications that are required to access the cloud computing platform.
The front end includes web browsers (Chrome, Firefox, etc.), thin and fat clients, and mobile devices.
It also provides certain business logic.

Backend:
It consists of the application server (application logic and most of the business logic) and data (storage server and part of the business logic).
It is used by the service provider. It manages all resources that are required to provide cloud computing services.
It includes: data storage, security, VMs, deployment models, servers, traffic control mechanisms, fault tolerance, billing, backups, scaling.
The set of components involved in the backend is:
1. Application: the part offered to the client application which will use the cloud.
2. Service: a piece of software that will determine and enable the appropriate service to be accessed.
3. Runtime cloud: provides the execution and runtime environment to the VMs, dependent on the service model.
4. Storage: provides a huge amount of storage capacity in the cloud to store and manage data.
5. Infrastructure
6. Management: components such as application, service and infrastructure need to be managed, and coordination between them needs to be established.
7. Security

Cloud Platform design goals


1. Scalability
2. Efficiency
3. Reliability and Availability
4. Simplifying the user experience

Infrastructure as a Service (IaaS):


The capability provided to the consumer is to provision processing (compute), storage, networks and other fundamental computing resources
on which the consumer is able to deploy and run arbitrary software, which can include OS and applications.
The consumer does not manage or control the underlying cloud infrastructure but has control over the OS, storage and deployed applications,
and possibly limited control of select networking components.
The IaaS user has single ownership of the hardware allotted to them. The provider has control over the actual hardware, and
the cloud user can request allocation of virtual resources.
It enables end users to scale and shrink resources on an as-needed basis, reducing the need for high, up-front capital expenditures.

Platform and architecture of IaaS


Physical data centers: Providers manage large data centers that contain the physical machines required to power the various layers of
abstraction on top of them, and that are made available to end users over the web.
End users do not interact directly with the physical infrastructure; it is provided as a service to them.

Compute: Providers manage the hypervisors, and end users can then programmatically provision virtual "instances" with the desired amounts of compute and memory.
Most providers offer both CPUs and GPUs for different types of workloads.
Cloud compute also typically comes paired with supporting services like auto-scaling and load balancing that provide the scale and performance
characteristics that make cloud desirable in the first place.

Network: A form of Software-Defined Networking in which traditional networking hardware, such as routers and switches, is made available programmatically.

Storage: The three primary types of cloud storage are block, file and object.
Block and file storage are common in traditional data centers but can often struggle with the scale, performance and distributed characteristics of the cloud.
Object storage is highly distributed and leverages commodity hardware; data can be accessed easily over HTTP, scale is essentially limitless, and performance
scales linearly as the cluster grows.

Programming Model
It is an execution model linked to an API or a particular pattern of code; there are 2 execution models in play:
1. The base programming language
2. The programming model

The language execution model does not change on the cloud, but there is an independent execution model for the programming model.
The ecosystem for program execution in terms of compute, memory, storage, networks and IP addresses, which was available locally on a system, is now not
guaranteed and needs to be factored in. It is also necessary to ensure that the requisite environment is set up for each of the different service models of the cloud.
Basically, you not only think about how to structure your code but also about the runtime environment and all the dependencies it will need to run on the
cloud, which could include the compilers, the OS, the network connections to the system, how the program and its data will be stored, etc.

Service Oriented Architecture


Defines a way to make software components reusable via service interfaces. These interfaces utilize common communication standards in such a way that they can rapidly
be incorporated into new applications without having to perform deep integration each time.
The services provide loose coupling, meaning they can be called with little to no knowledge of how the integration is implemented underneath.
Services are exposed using standard network protocols, such as SOAP (Simple Object Access Protocol)/HTTP, to send requests to read or change data.
They are published in such a way that enables users to find them and reuse them to assemble new applications.

REST (Representational State Transfer)


Cloud architecture involves a number of distributed autonomous systems, components or applications communicating or interacting among themselves to
perform an application request.
REST helps in building client-friendly distributed systems that are simple to understand and simple to scale, and that exploit web architecture to the implementer's benefit.
An API adhering to REST helps obtain the benefits of a client-server distributed architecture while reducing the complexities of distributed systems.
REST is typically based on HTTP methods to access resources via URL-encoded parameters and on the use of JSON/XML to transmit data.
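A minimal REST client sketch using the third-party requests library; the public httpbin.org echo service stands in here for a real cloud API.

# GET and POST against a REST-style HTTP API, exchanging JSON.
import requests

# GET: read a resource, asking for a JSON representation via the Accept header.
resp = requests.get("https://httpbin.org/get", headers={"Accept": "application/json"})
print(resp.status_code, resp.json()["url"])

# POST: send a JSON representation to create/submit data.
resp = requests.post("https://httpbin.org/post", json={"name": "Asha"})
print(resp.status_code)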

REST Design Principles


1. Client-Server (separation of concerns):
Sets the constraint that the system be built in such a fashion that the client can change and evolve without having to change anything on the server, and vice
versa. It is about how the client sends the server a message, and how the server rejects or responds to the client.

2. Stateless Constraint:
Communication between the client and the server must remain stateless between requests.
Each request the client makes should contain all the information needed for the server to answer it successfully.
All of the state information is transferred back to the client as part of the response; the server cannot take advantage of any stored context.
Session state is kept entirely on the client.

3. Cache Constraint:
In order to improve network efficiency, cache constraints are added: they require that the data within a server's response to a request be implicitly or explicitly
labeled/marked as cacheable. If a response is cacheable, then the client cache is given the right to reuse that response data for later requests.

4. Uniform Interface Constraint:


It makes the interface generic and improves the overall visibility of interactions, i.e. how the client and server exchange requests and responses.
Implementations are decoupled from the services they provide. This can impact performance for non-web-based data interactions.
a. Resources and resource identification
b. Manipulation of resources through representations
c. Self-descriptive messages
d. Hypermedia as the engine of application state
These are the interface constraints.

5. Layered System Constraint:


Messages between the client and server can go through a hierarchy of intermediate components, where each component cannot see beyond the immediate layer
with which it interacts.
Interaction between the client and server should not be affected by these devices; this ensures independence and bounds the complexity.
All communication remains consistent, even if a layer is added or removed.

REST Architectural Elements


Resources and Resource Identification
The key abstraction of information in REST is a resource. Any information that can be named can be a resource, such as a document, an image or a temporal service.
These services expose a set of resources which identify the targets of interaction with clients.
Each resource has a unique name and can be retrieved using a URI (Uniform Resource Identifier), which provides a global addressing space for resources.
URI: the name of an object on the web; identifies the resource by name, location, or both.
URL: a subset of the URI; specifies where to find the resource and how to retrieve it.

Uniform, Constrained API interface


Interaction with REST web services is done via the HTTP client/server cacheable protocol.
Resources are manipulated using a fixed set of CRUD operations (GET, PUT, POST, DELETE).
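A minimal server-side sketch of such a uniform CRUD interface, written with Flask (a common Python micro-framework); the "notes" resource and in-memory dictionary are illustrative, not a real service.

# A "notes" resource exposed through GET, PUT and DELETE; the dict stands in
# for real storage.
from flask import Flask, jsonify, request

app = Flask(__name__)
notes = {}          # resource state, keyed by id

@app.route("/notes/<int:note_id>", methods=["GET"])
def read(note_id):
    return jsonify(notes.get(note_id, {}))

@app.route("/notes/<int:note_id>", methods=["PUT"])
def create_or_update(note_id):
    notes[note_id] = request.get_json()
    return jsonify(notes[note_id]), 201

@app.route("/notes/<int:note_id>", methods=["DELETE"])
def delete(note_id):
    notes.pop(note_id, None)
    return "", 204

if __name__ == "__main__":
    app.run(port=5000)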

Stateless Interactions:
Systems that follow the REST paradigm are stateless, meaning that the server does not need to know anything about the state of the client and vice versa.
Both the server and client can understand the messages, even without seeing previous messages.
Each client request is treated independently.
Clients are isolated against changes on the server.
Promotes redundancy (unlocks performance): the client doesn't need to know which server it was talking to, and there is no synchronization overhead.

Representation:
It is a snapshot in time of the state of a given resource.
A sequence of bytes made up of data, plus representation metadata to describe those bytes.
It captures the current or intended state of the resource and helps transfer that representation between interacting components.
The message type is hypermedia, which refers to any content that contains links to other forms of media such as images, movies and text.
It allows the client to navigate to the appropriate resources by traversing the hypermedia links.

Self-Descriptive Messages:
Messages include enough information to describe how to process the message.
This enables intermediaries to do more with the message without parsing the message contents.
Resources are decoupled from their representation so that their content can be accessed in a variety of standard formats.
It empowers clients to ask for data in a form they understand.

Web services
A software system designed to support interoperable machine-to-machine interaction over a network.
It is referred to as a self-contained, self-describing, modular application designed to be used by and accessible to other software applications across the web.
Once a web service is deployed, other applications and other web services can discover and invoke the deployed service.
Other systems interact with the web service in a manner prescribed by its description.

Prominent ways of implementing web services.


1. Simple Object Access Protocol (SOAP)
2. REST

SOAP
It is a protocol which was designed before REST came into the picture. The main idea behind creating SOAP was to ensure that programs built on different platforms and
in different programming languages could securely exchange data.
It provides a structure for the transmission of XML documents over various internet protocols, such as SMTP, HTTP and FTP.
Messages consist of an element called the envelope, which is used to encapsulate all of the data in the SOAP message. It contains a Header element that holds
header information, such as authentication credentials that can be used by the calling application, and a Body element that carries the payload of the message.
WSDL (Web Service Description Language) describes the functionality of the SOAP-based web service.
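A sketch of the envelope/header/body structure and of posting it over HTTP from Python; the endpoint URL, SOAPAction value and body contents are placeholders, not a real web service.

# Shape of a SOAP 1.1 message and an HTTP POST of it; everything below the
# import is an illustrative placeholder.
import requests

envelope = """<?xml version="1.0"?>
<soap:Envelope xmlns:soap="http://schemas.xmlsoap.org/soap/envelope/">
  <soap:Header>
    <AuthToken>example-credentials</AuthToken>
  </soap:Header>
  <soap:Body>
    <GetPrice>
      <Item>Apples</Item>
    </GetPrice>
  </soap:Body>
</soap:Envelope>"""

response = requests.post(
    "https://example.com/soap-endpoint",          # placeholder endpoint
    data=envelope,
    headers={"Content-Type": "text/xml; charset=utf-8",
             "SOAPAction": "GetPrice"},
)
print(response.status_code)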

Platform as a Service
It is a complete development and deployment environment in the cloud, with resources that enable you to deliver cloud-based apps.
You purchase the resources you need from a cloud service provider on a pay-as-you-go basis and access them over a secure internet connection.
It provides a platform built on top of the abstracted hardware that can be used by developers to build cloud apps.
This platform is delivered via the web, so developers have the freedom to concentrate on building the app rather than focusing on the OS and infrastructure.
It provides servers, storage and networking, but also middleware, development tools, business intelligence (BI) services and DBMS.
It is designed to support the complete web application lifecycle: building, testing, deploying, managing and updating.

Advantages of PaaS:
1. Faster time to market: There's no need to purchase hardware, install it and maintain it; we can simply use the cloud's infrastructure and start the development process.
2. Faster, easier, less risky adoption of a wider range of resources: PaaS platforms typically include access to a greater variety of choices up and down the
application development stack.
3. Develop for multiple platforms, including mobile devices: Some service providers give you development options for multiple platforms.
4. Easy, cost-effective scalability: An application, once developed, can be scaled on demand by purchasing the right amount of additional capacity as needed.
5. Efficiently manage the software lifecycle: It provides all of the capabilities needed to support the complete web application lifecycle (building, testing,
deploying, managing and updating) within the same integrated environment.
6. Lower costs: Charges users based on usage of resources; there's no initial investment for infrastructure and its setup.
7. API development and management: Develop, run, manage and secure APIs and microservices.
8. It can support IoT.

Limitations of PaaS:
1. Operational limitations: Customized cloud operations with management automation workflows may not apply to PaaS solutions, as the platform tends to limit
operational capabilities for end users.
2. Vendor lock-in: Business and technical requirements that drive the decision for a specific PaaS solution may not apply in the future.
If the vendor has not provisioned migration policies, switching to another provider will not be possible without affecting the business.
3. Runtime issues: Solutions may not be optimized for the language and frameworks of the developer's choice. Specific framework versions may not be available
or may not perform optimally with the service.
4. Data security: Organizations can run their own apps and services using PaaS, but data residing with third-party vendors poses security risks and concerns.
5. Integrations: The complexity of connecting to data stored within an onsite data center or off-premise cloud is increased, which may affect which apps and services
can be adopted with PaaS.

Communication using message queues


Communication mechanism: There are different styles or natures through which these communications may happen, such as synchronous/asynchronous, or based on
the number of receivers of a request.
If the application is a monolith with multiple processes, simple intra-system IPC mechanisms may work for processes that need to interact, but
when it is built as a set of microservices running on different systems, communication mechanisms that support inter-service interaction will be essential.

Interaction styles
1. Synchronous (request-response): the client sends a request and blocks waiting for the response (connection-oriented, TCP-like features).
2. Asynchronous: the client doesn't block, and the response, if any, isn't necessarily sent immediately; this can support high rates of data flow (a minimal queue sketch follows the lists below).
Advantages:
a. Reduced coupling: the message sender doesn't need to know about the consumer.
b. Multiple subscribers: multiple consumers can subscribe to receive events.
c. Failure isolation: if the consumer fails, the sender can still send messages; they will be picked up when the consumer recovers.
d. Load leveling: a queue can act as a buffer to level the workload, so that receivers can process messages at their own rate.

Disadvantages:
a. Coupling with messaging infrastructure: using a particular messaging infrastructure may cause tight coupling with that infrastructure, making it difficult to switch later.
b. Latency: the latency of an operation increases if the queue fills up.
c. Complexity
d. Throughput: each message requires at least one enqueue operation and one dequeue operation.
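A minimal asynchronous messaging sketch using Python's standard-library queue; the message names and processing delay are illustrative, and a real system would use a broker such as RabbitMQ or Kafka rather than an in-process queue.

# Producers put messages on a queue and a consumer drains it at its own rate,
# so the queue levels the load and the sender never waits for a reply.
import queue
import threading
import time

message_queue = queue.Queue()

def producer():
    for i in range(10):
        message_queue.put(f"order-{i}")      # send without waiting for a reply
    message_queue.put(None)                  # sentinel: no more messages

def consumer():
    while True:
        msg = message_queue.get()
        if msg is None:
            break
        time.sleep(0.1)                      # consumer processes at its own rate
        print("processed", msg)

threading.Thread(target=producer).start()
consumer()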

One-to-one interaction: Each client request is processed by exactly one service.


1. Synchronous request/response: the client makes a request to a service and waits for the response; the services are tightly coupled.
2. Asynchronous request/response: the client sends a request to a service, which replies asynchronously.
3. One-way notification: the client sends a request to a service, but no reply is expected or sent.

One-to-many interaction: Each request is processed by multiple services:


1. Publish/subscribe: Client publishes a notification message, which is consumed by 0 or more interested services
2. Publish/async responses: A client publishes a request message and then waits for a certain amount of time for responses from interested services.



UNIT 2: Virtualization basics and Types of Hypervisors
08 February 2022 10:46

Virtualization
It is a framework or methodology for dividing the resources of a computer into multiple execution environments, by applying one or more concepts or technologies such as
hardware/software partitioning, time-sharing, partial or complete machine simulation, emulation, quality of service, and many others.
It is the practice of presenting and partitioning computing resources in a logical way rather than according to their physical reality.

A virtual machine (VM) is an isolated duplicate of the physical machine; it is an execution environment (logically) identical to a physical machine, with the ability to execute a full OS.

In a traditional environment the OS handles the bare hardware and also provides the environment for development and execution of applications.
With virtual machines there exists an additional layer between the hardware and the applications, called the hypervisor or virtual machine monitor (VMM), which allows the user to create
multiple independent environments on the same hardware, where each environment is oblivious to the fact that it is sharing its resources with others.
Each time a program accesses the hardware, the hypervisor intercepts the access; in that sense it acts as a traditional OS.

Why virtual machine


1. Operating system diversity: can run multiple types of OS on the same hardware.
2. Security/Isolation: the hypervisor separates the VMs from one another and isolates the hardware from the VMs.
3. Rapid provisioning/Server consolidation: on-demand provisioning of hardware resources.
4. High availability/Load balancing: ability to live-migrate a VM to another physical server.
5. Encapsulation: the execution environment of an application is encapsulated within the VM.

Three requirements for a Virtual machine monitor\hypervisor:


1. It should provide an environment for programs which is essentially identical to the original machine.
2. Programs run in this environment should work at the same speed, or show only minor decreases in speed.
3. It should be in complete control of the system resources.

Types of VM
Type 1 (Bare metal): The VMM runs on bare metal and directly controls the hardware; more prevalent in industry. Example: VMware ESX Server, Xen
Type 2 (Hosted): The VMM runs as part of/on top of the host OS; used more on commodity devices. Example: Oracle VirtualBox, VMware Workstation
Hybrid: The host VM runs directly on top of the hardware, and there also exists a hypervisor upon which guest VMs can be run.

• Virtualization: Type 1 - hardware virtualization; Type 2 - OS virtualization.
• Operation: Type 1 - guest OS and applications run on the hypervisor; Type 2 - runs as an application on the host OS.
• Scalability: Type 1 - better scalability; Type 2 - not so much, due to reliance on the underlying OS.
• Setup/installation: Type 1 - simple, as long as you have the right hardware support; Type 2 - a lot simpler, as you already have an OS.
• System independence: Type 1 - has direct access to the hardware along with the VMs it hosts; Type 2 - not allowed to directly access the host hardware and its resources.
• Speed: Type 1 - faster; Type 2 - slower because of the dependency on the host system.
• Performance: Type 1 - higher performance because there is no middle layer; Type 2 - comparatively reduced performance.
• Security: Type 1 - highly securable; Type 2 - less secure, as any problem in the base OS affects the entire system, including the protected hypervisor.

Paravirtualization and Transparent virtualization


Transparent virtualization: There is no modification to the guest OS, i.e. the guest OS is completely dependent on the hypervisor to translate the instructions used to access the hardware
resources required by the applications running in it.
Its architecture intercepts and emulates privileged and sensitive instructions at runtime.
Also known as full virtualization. Example: VMware, KVM

Paravirtualization: Modification of the guest OS to run on the hypervisor; the hypervisor provides APIs for the guest OS.
Useful if the source code of the guest OS is modifiable. Example: Linux, Xen, MVS
The guest OS is not completely isolated, but is partially isolated by the VM from the VMM and the hardware. The guest OS is aware that it is running in a virtualized environment.
It leverages the hypervisor API (hypercalls): special I/O APIs instead of emulated hardware I/O, i.e. it has modified drivers to communicate with the hypervisor.
Performance degradation is a major issue of a virtualized system; no one wants to use a VM if it is much slower than a physical machine.
Paravirtualization attempts to reduce the virtualization overhead, and thus improve performance, by modifying only the guest OS kernel.

Applications or user processes are not trusted to execute privileged instructions and must run with the privileges assigned by the OS.
Since the OS manages the hardware, such instructions must be executed safely; to do so the processor uses a ring structure: rings 0, 1, 2, 3.
Ring 0 offers the highest level of privilege, as needed by such instructions; Ring 3 is where applications run with minimal privileges.

Issues with paravirtualization:


Its compatibility and portability may be in doubt, because it must also support the unmodified OS.
The cost of maintaining a para-virtualized OS is high, because it may require deep OS kernel modifications.
The performance advantage of paravirtualization varies greatly with different workloads.

Trap, emulate and binary translation


To ensure isolation and protection of the hypervisor and the VMs (a simplified sketch follows this list):
• Trap to the hypervisor when the VM tries to execute an instruction that could change the state of the system or take control.
• Emulate the execution of these instructions in the hypervisor.
• Directly execute any other innocuous instructions on the hardware, since they cannot impact other VMs or the hypervisor.
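A deliberately simplified, conceptual sketch of this dispatch logic in Python; plain code standing in for hypervisor behaviour, with made-up instruction names.

# Conceptual trap-and-emulate dispatch: privileged instructions trap to the
# hypervisor and are emulated against the VM's own state only; innocuous
# instructions run directly. This is an illustration, not real VMM code.
PRIVILEGED = {"write_control_register", "physical_io"}

def hypervisor_emulate(instr, vm_state):
    # Trap handler: emulate the effect inside this VM's state only, so one VM
    # cannot change the real machine or another VM.
    vm_state[instr] = "emulated by hypervisor"
    return vm_state

def execute(instr, vm_state):
    if instr in PRIVILEGED:
        # Attempting a system instruction in user mode raises a trap,
        # which transfers control to the hypervisor.
        return hypervisor_emulate(instr, vm_state)
    # Innocuous user instructions run directly on the hardware.
    vm_state[instr] = "executed directly"
    return vm_state

state = {}
for instruction in ["add", "physical_io", "load"]:
    state = execute(instruction, state)
print(state)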

Generally 2 categories of instructions:


1. User instruction: Typically compute instructions
2. System instruction: Typically for system management

Two modes of CPU operation:


User mode: Ring 3 instructions
Privileged mode: an attempt to execute a system instruction in user mode generates a trap/general protection fault.


In transparent virtualization:
The hypervisor traps privileged instructions and emulates them (access to physical pages, physical I/O devices, control registers).
Handling privileges: all processors have rings of privilege; run the hypervisor in Ring 0 (highest privilege) and run the guest in a lower ring.

Limitations of trap and emulate:


State of the processor is privileged if:
Access to that state breaks the VM's isolation boundaries
It is needed by the monitor to implement virtualization

The monitor must know the state of the VM, and it must also decide whether emulation, translation or direct execution is allowed.
These add overheads and slow down the execution of the VM.
