CLOUD
COMPUTING
SEMESTER 5
UNIT - 1 SYLLABUS
No. of Hours: 11 Chapter/Book Reference: TB1 [Chapters - 1, 10], TR2 [Chapters - 1, 2]
Cloud Computing Overview - Services of Internet, Origins of Cloud computing - Cloud
components - Essential characteristics - On-demand self-service, The vision of cloud computing
- Characteristics, benefits, and Challenges ahead
CLOUD COMPUTING OVERVIEW
SERVICES OF INTERNET
Cloud computing refers to the delivery of computing services, including servers,
storage, databases, networking, software, analytics, and intelligence over the
internet. Essentially, cloud computing offers convenient, on-demand access to a
shared pool of computing resources that can be provisioned and released with
minimal effort.
The three primary service models of cloud computing are Infrastructure as a Service
(IaaS), Platform as a Service (PaaS), and Software as a Service (SaaS).
* IaaS provides basic computing resources like virtual machines, storage, and
networking to users. Essentially, it is a cloud-based alternative to the
traditional on-premises data center. Examples of IaaS providers include
Amazon Web Services (AWS), Microsoft Azure, and Google Cloud Platform.
* PaaS is a cloud-based service that provides a platform where developers can
build and deploy custom applications. PaaS environments typically provide
tools and libraries for building applications and managing the deployment
process. Examples of PaaS providers include Heroku, Google App Engine,
and Salesforce.com's Force.com.
* SaaS is a cloud-based service that hosts a complete software application
and makes it available to end-users over the internet. Essentially, users
access the full functionality of a software application without the need to
install or manage the software locally. Examples of SaaS applications
include Salesforce.com, Office 365, and Dropbox.
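One way to make the division of responsibility between the three models concrete is a small lookup of which stack layers the provider manages under each model. This is an illustrative sketch only, not any provider's official responsibility matrix:

```python
# Which layers of the stack the provider manages under each service model.
# Illustrative only; real offerings vary by provider.
STACK = ["networking", "storage", "servers", "virtualization",
         "operating system", "runtime", "application"]

PROVIDER_MANAGED = {
    "IaaS": {"networking", "storage", "servers", "virtualization"},
    "PaaS": {"networking", "storage", "servers", "virtualization",
             "operating system", "runtime"},
    "SaaS": set(STACK),
}

def customer_managed(model: str) -> list[str]:
    """Layers left to the customer under the given service model."""
    return [layer for layer in STACK if layer not in PROVIDER_MANAGED[model]]
```

Under IaaS the customer still manages the operating system, runtime, and application; under PaaS only the application; under SaaS the provider manages the whole stack.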
ORIGINS OF CLOUD COMPUTING
The origins of cloud computing date back to the 1960s when J.C.R. Licklider, a
computer scientist at MIT, first proposed the concept of an “Intergalactic
Computer Network." This idea was further developed in the 1970s when
researchers at Xerox PARC created the first client-server computing
architecture.
In the late 1990s, the term “cloud computing" was introduced by Prof. Ramnath
Chellappa in his paper titled "Intermediaries in Cloud Computing: A New
Computing Paradigm". However, it wasn't until the early 2000s that cloud
computing began to take shape as a viable business model.
In 1999, Salesforce.com became one of the first companies to offer a Software as
a Service (SaaS) platform, which allowed businesses to access customer
relationship management (CRM) software over the internet. In 2002, Amazon
Web Services (AWS) launched as one of the first Infrastructure as a Service
(IaaS) providers.
In 2006, Amazon launched its Elastic Compute Cloud (EC2) service, which
enabled developers to launch virtual machines on the Amazon cloud. This was
a significant development as it allowed businesses to access computing
resources on an as-needed basis, without having to invest in expensive on-
premises infrastructure.
Today, cloud computing has become an integral part of the IT landscape, with
many businesses relying on cloud-based services for their computing needs.
Cloud computing has enabled businesses to become more agile and flexible,
while also reducing costs and increasing efficiency.
CLOUD COMPONENTS
Cloud computing typically consists of several components that work together
to deliver computing services to users. These components include:
1. Front-end platform: This is the user interface or access point that users
interact with to access the cloud services and applications. It could be a web
browser, mobile app, or other interface that allows users to send requests and
receive responses from the cloud provider.
2. Backend infrastructure: This is the underlying computing infrastructure that
supports the cloud services and applications. It includes data storage, servers,
and networking hardware that provide the computing power and resources
necessary for delivering cloud services.
3. Cloud-based services: These are the services that cloud providers offer to
customers, such as storage, server processing, databases, and applications.
Cloud-based services can be delivered as Infrastructure as a Service (laaS),
Platform as a Service (PaaS), or Software as a Service (SaaS) models.
4. Cloud storage: Cloud storage allows users to store and access files and data
over the internet. Cloud storage providers offer various storage options,
including object, block, and file storage.
5. Cloud security: Cloud security is the set of measures taken to protect cloud-
based services, applications, and data from unauthorized access, theft, or
damage. Cloud security includes physical and virtual security measures, such as
encryption and access controls, to keep data safe.
6. Cloud scaling: Cloud scaling is the ability of cloud computing resources to
expand or contract as the workload demands vary. Cloud computing providers
allow users to scale their computing resources automatically or manually as
their business needs change.
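The automatic scaling described in point 6 can be sketched as a simple threshold rule. The thresholds below are hypothetical; real autoscalers use richer policies (cooldown periods, multiple metrics, predictive scaling):

```python
def scale_decision(current_instances: int, cpu_utilization: float,
                   high: float = 0.80, low: float = 0.30,
                   min_instances: int = 1) -> int:
    """Return the new instance count for a threshold-based autoscaler.

    Scale out when average CPU is above `high`, scale in when below `low`,
    never dropping under `min_instances`. Thresholds are illustrative.
    """
    if cpu_utilization > high:
        return current_instances + 1  # scale out under heavy load
    if cpu_utilization < low:
        return max(min_instances, current_instances - 1)  # scale in when idle
    return current_instances          # within the comfort band, do nothing
```

The same rule, evaluated periodically against live metrics, is the core loop of manual or automatic cloud scaling.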
Overall, cloud components work together to deliver reliable, scalable, and cost-
effective computing services to businesses and organizations of all sizes.
ESSENTIAL CHARACTERISTICS
The essential characteristics of cloud computing are defined by the National
Institute of Standards and Technology (NIST) and include:
1. On-demand self-service: Users can provision computing resources, such as
servers, storage, and networking, automatically without requiring any
interaction with the cloud provider.
2. Broad network access: Cloud services are accessible over the internet using
standard protocols and devices, such as laptops, tablets, and smartphones.
3. Resource pooling: Cloud providers allocate computing resources dynamically
to multiple users, which can be adjusted according to demand.
4. Rapid elasticity: Cloud providers can rapidly scale computing resources up or
down to meet fluctuating demand.
5. Measured service: Cloud providers monitor resource usage and provide users
with detailed reports on their usage, allowing them to optimize their resource
allocation and usage.
These characteristics enable cloud computing to be highly flexible, scalable,
and cost-effective, offering businesses and organizations a range of benefits,
including increased efficiency, reduced costs, and improved accessibility to
computing resources and services.
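Measured service boils down to metering: usage is recorded per resource and billed at a unit rate. A minimal pay-per-use calculation might look like the following (the rates and resource names are made up for illustration; real provider pricing differs):

```python
# Hypothetical unit rates; real provider pricing differs.
RATES = {"vm_hours": 0.05, "gb_storage_month": 0.02, "gb_egress": 0.09}

def monthly_bill(usage: dict) -> float:
    """Sum metered usage multiplied by the unit rate for each resource."""
    return round(sum(RATES[resource] * amount
                     for resource, amount in usage.items()), 2)

# One small VM running all month, 100 GB stored, 50 GB transferred out:
# 720 * 0.05 + 100 * 0.02 + 50 * 0.09 = 42.5
bill = monthly_bill({"vm_hours": 720, "gb_storage_month": 100, "gb_egress": 50})
```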
ON-DEMAND SELF-SERVICE
On-demand self-service is one of the essential characteristics of cloud
computing. It refers to the ability of users to provision computing resources,
such as servers, storage, and networking, automatically without requiring any
interaction with the cloud provider.
With on-demand self-service, users can access cloud services and resources as
needed, without having to go through a manual request process or waiting for
human interaction or approval. This means that users can quickly and easily
provision the resources they need, and only pay for the resources they use,
rather than having to commit to long-term investments in infrastructure.
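The key point is that provisioning happens through an API call rather than a human approval step. A toy self-service catalog makes the flow concrete (the class and resource names here are hypothetical, not a real provider API):

```python
import itertools

class SelfServiceCloud:
    """Toy provisioning API: resources are granted immediately on request,
    with no manual approval step, and held only until released."""

    def __init__(self):
        self._ids = itertools.count(1)
        self.active = {}

    def provision(self, resource_type: str) -> int:
        """Instantly allocate a resource and return its id."""
        rid = next(self._ids)
        self.active[rid] = resource_type
        return rid

    def release(self, rid: int) -> None:
        """Give the resource back; billing for it stops here."""
        del self.active[rid]

cloud = SelfServiceCloud()
vm = cloud.provision("small-vm")   # no ticket, no waiting
cloud.provision("block-storage")
cloud.release(vm)                  # pay only for what is still held
```

In a real cloud the same pattern runs behind a web console or REST API, backed by the automation and orchestration layers described below.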
On-demand self-service is made possible by the automation and orchestration
technologies that cloud providers employ to manage and deliver cloud services.
These technologies enable cloud providers to offer a fully automated, self-
service platform that users can access through a user-friendly interface or API,
enabling quick and easy resource provisioning and allocation.
THE VISION OF CLOUD COMPUTING
The vision of cloud computing is to provide scalable and on-demand
computing resources and services over the internet. This vision is made possible
by the characteristics of cloud computing, which include:
1. On-demand self-service
2. Broad network access
3. Resource pooling
4. Rapid elasticity
5. Measured service
The benefits of cloud computing include:
1. Increased efficiency and agility
2. Reduced capital and operational costs
3. Improved accessibility and availability of computing resources and services
4. Enhanced scalability and flexibility
However, there are also challenges ahead for cloud computing, such as:
1. Security and privacy concerns
2. Compatibility and interoperability issues
3. Performance and latency problems
4. Vendor lock-in and dependency
To overcome these challenges, cloud providers must continue to develop and
implement new technologies and best practices to ensure the reliability,
security, and privacy of their services. Additionally, cloud customers must
ensure that they have proper controls and policies in place to manage and
protect their data and assets in the cloud. Overall, the vision of cloud
computing holds great promise for businesses and organizations of all sizes, but
it requires careful consideration and management to fully realize the benefits
while minimizing risks and challenges.P°-
CLOUD
COMPUTING
SEMESTER 5
UNIT -2SYLLABUS
UNIT - 2
UNIT=11
1 Chapter/Book Reference: TBI [Chapter - 4], TH2 [Chapters - , 6, 17, 18],
No. of Hou
Cloud Computing Architecture - Introduction - Internet as a Platform, The cloud reference model -
Types of clouds - Economics of the cloud, Computing platforms and technologies, Cloud computing
economics, Cloud infrastructure - Economics of private clouds - Software productivity in the cloud -
Economies of scale: public vs. private clouds
CLOUD COMPUTING ARCHITECTURE - INTRODUCTION
INTERNET AS A PLATFORM
Cloud computing architecture refers to the design and implementation of the
infrastructure and services that are offered by cloud providers. It includes the
hardware, software, networks, and management systems that support cloud
computing services.
One of the key aspects of cloud computing architecture is the use of the
internet as a platform. Cloud services are typically delivered over the internet,
and customers can access these services from anywhere in the world, as long as
they have an internet connection.
The use of the internet as a platform for cloud computing offers many benefits,
such as:
1. Accessibility: Cloud services can be accessed from anywhere in the world,
making them ideal for businesses with distributed teams or customers.
2. Scalability: The internet provides an almost infinite scale for cloud resources,
allowing cloud providers to quickly and easily add or remove computing
resources as demand for their services fluctuates.
3. Cost-effectiveness: Cloud providers can share computing resources across
many customers, reducing the cost of infrastructure and making it more
affordable for businesses of all sizes.
However, there are also challenges associated with using the internet as a
platform for cloud computing. These challenges include:
1. Network latency: The time it takes for data to travel over the internet can
cause delays in accessing cloud services, which can impact performance.
2. Security: The internet is prone to cyber threats, which makes it important for
cloud providers to implement strong security measures to protect customers’
data.
3. Dependence on internet connectivity: Access to cloud services relies on
reliable and high-speed internet connectivity, which can be a challenge in
some regions or for some customers.
Overall, the use of the internet as a platform is a fundamental aspect of cloud
computing and enables the delivery of scalable, cost-effective, and accessible
computing services to businesses and individuals around the world.
THE CLOUD REFERENCE MODEL
The cloud reference model is a conceptual framework that describes the layers
and components of a cloud computing architecture. It provides a standardized
way of describing the various layers and components of a cloud computing
architecture, helping to ensure that different cloud providers are using a
consistent set of terminology when describing their services.
The cloud reference model consists of five distinct layers, each providing a
different set of services or functionalities. These layers are:
1. Cloud service user: This layer represents the end-users or clients who
consume cloud services and applications.
2. Cloud service provider: This layer represents the cloud provider who offers
computing resources and services to the cloud service user.
3. Cloud service integration: This layer represents the middleware services and
management tools that integrate and manage various cloud services.
4. Cloud service platform: This layer represents the infrastructure and platform
services that provide computing resources and tools to the cloud service
integration layer.
5. Cloud service infrastructure: This layer represents the physical infrastructure
(such as servers, storage, and networking resources) used to support the cloud
service platform layer.
Each layer in the cloud reference model interacts with the layers above and
below it, creating a multilayered architecture. This architecture enables the
creation and delivery of cloud services that are scalable, reliable, and cost-
effective.
Overall, the cloud reference model provides a standard way of understanding
and describing the different layers and components of a cloud computing
architecture. This standardization helps to ensure interoperability and
compatibility between different cloud providers and services, making it easier
for organizations to adopt and integrate cloud technologies into their
operations.
TYPES OF CLOUDS
1. Public cloud: A public cloud is a cloud computing environment in which a
third-party cloud provider delivers computing resources and services over the
internet to multiple customers. The services provided by a public cloud are
available to any individual or organization that wants to use them, and the
resources provided are typically shared across multiple customers. Public
clouds are typically used by small to medium-sized businesses or for non-
sensitive applications where security requirements are not as stringent.
2. Private cloud: A private cloud is a cloud computing environment in which
computing resources and services are operated exclusively by a single
organization. Private clouds can be hosted either on-premises or ina third-party
data center and are typically used for highly sensitive, mission-critical
applications that require greater control and security than a public cloud can
provide.
3. Hybrid cloud: A hybrid cloud is a cloud computing environment that
combines public and private cloud resources to create a unified computing
environment. A hybrid cloud allows organizations to leverage the scalability
and cost-effectiveness of the public cloud while maintaining control over
sensitive data and applications in a private cloud environment. Hybrid clouds
are ideal for organizations that have fluctuating demand for computing
resources or need to meet specific regulatory or security requirements.
1. ECONOMICS OF THE CLOUD
The economics of the cloud refer to the cost-saving benefits and efficiencies
that cloud computing can provide compared to traditional on-premises IT
infrastructure. There are several key economic advantages of the cloud,
including:
1. Reduced capital expenses: Moving to the cloud can reduce an organization's
capital expenditures (CapEx) by eliminating the need to purchase and maintain
expensive IT infrastructure and hardware. Instead, cloud providers offer pay-as-
you-go pricing models, allowing organizations to pay only for the resources they
use.
2. Lower operational expenses: The cloud can also help reduce an
organization's operational expenses (OpEx) by allowing them to scale resources
up or down as needed, without incurring additional costs for equipment or
maintenance.
3. Greater flexibility: Cloud computing provides greater flexibility than
traditional on-premises infrastructure. Organizations can quickly and easily spin
up new computing resources as needed to meet changing business demands,
without having to invest in additional hardware or infrastructure.
4. Access to new technologies: Cloud providers typically offer access to a wide
range of cutting-edge technologies, such as artificial intelligence, machine
learning, and blockchain. This enables organizations to innovate and stay
competitive without incurring the high costs associated with building and
maintaining these technologies on their own.
2. COMPUTING PLATFORMS AND TECHNOLOGIES
Computing platforms and technologies refer to the hardware, software, and
tools used to create, manage, and deploy computing applications. Some of the
most common computing platforms and technologies include:
1. Operating systems: An operating system is a software component that
manages hardware resources and provides services for computer programs.
Popular operating systems include Windows, macOS, and Linux.
2. Programming languages: A programming language is a set of instructions
used to create software applications. Popular programming languages include
Python, Java, C++, and JavaScript.
3. Application development frameworks: Application development
frameworks provide a structure for developers to build, test, and deploy
applications. Common application development frameworks include Ruby on
Rails, Django, Node.js, and .NET.
4. Cloud computing platforms: Cloud computing platforms provide computing
resources, such as servers, storage, and networking, over the internet. Popular
cloud computing platforms include Amazon Web Services (AWS), Microsoft
Azure, and Google Cloud Platform.
5. Virtualization: Virtualization technology allows multiple operating systems
and applications to run on a single physical machine, making more efficient use
of hardware resources. Popular virtualization platforms include VMware and
Hyper-V.
6. Containerization: Containerization technology allows developers to package
an application and its dependencies into a lightweight container that can be
deployed across different environments. Popular containerization platforms
include Docker and Kubernetes.
7. Artificial intelligence and machine learning: Artificial intelligence (Al) and
machine learning (ML) are computing technologies that enable systems to
learn, reason, and solve problems without human intervention. Popular Al and
ML platforms and tools include TensorFlow, PyTorch, and IBM Watson.
Overall, computing platforms and technologies continue to evolve and expand,
with new tools and technologies emerging regularly to help developers build
and deploy more efficient and powerful applications.
3. CLOUD COMPUTING ECONOMICS
Cloud computing economics refer to the cost and financial benefits of using
cloud computing services as compared to traditional on-premises IT
infrastructure. The economic benefits of cloud computing are numerous,
including:
1. Reduced capital expenditure: With the cloud, organizations can reduce their
capital expenditure (CapEx) by eliminating the need to purchase and maintain
expensive IT infrastructure and hardware. Instead, they pay for only what they
need on a consumption-based pricing model.
2. Lower operational expenditure: Cloud computing also reduces operational
expenditure (OpEx) by offloading the maintenance and management of IT
infrastructure to cloud providers, reducing the need for on-site IT staff.
3. Increase in operational efficiency: The cloud provides elasticity and
scalability, enabling organizations to quickly and easily scale computing
resources up or down as per their requirements, which helps in faster
deployments, improved end-user experiences, and reducing waste.
4. Access to advanced technologies: Cloud providers often offer access to
advanced technologies, such as artificial intelligence (Al), machine learning
(ML), and the internet of things (IoT), which helps organizations in innovation
and digital transformation wave.
5. Reduce the time to market: Companies can leverage cloud’s global footprint
to rapidly enter new markets worldwide without the need for extensive
investments in on-premises technology, reducing the time to market and IT
complexity.
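The CapEx-versus-OpEx trade-off above can be made concrete with a back-of-the-envelope break-even check: compare a one-time hardware purchase (plus its running costs) against ongoing pay-as-you-go fees. All figures below are hypothetical:

```python
def breakeven_months(capex: float, monthly_onprem_opex: float,
                     monthly_cloud_fee: float) -> float:
    """Months after which owning hardware becomes cheaper than renting.

    On-prem cost after m months:  capex + m * monthly_onprem_opex
    Cloud cost after m months:    m * monthly_cloud_fee
    Break-even where the two are equal. Returns infinity when the cloud
    fee never exceeds the on-prem running costs (cloud is always cheaper).
    """
    saving_per_month = monthly_cloud_fee - monthly_onprem_opex
    if saving_per_month <= 0:
        return float("inf")
    return capex / saving_per_month

# E.g. $60,000 of servers with $500/month running costs vs. a $3,000/month
# cloud bill: break-even at 60000 / (3000 - 500) = 24 months.
```

Analyses like this are why steady, predictable workloads sometimes favor owned infrastructure, while spiky or uncertain workloads favor the pay-as-you-go model.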
However, it's important to note that there are some costs associated with cloud
computing that organizations need to consider, such as data egress fees,
additional security needs, and compliance requirements. Also, a clear
understanding of the existing IT infrastructure and the applications’ suitability
for deployment in the cloud is crucial to ensure maximum returns.
CLOUD INFRASTRUCTURE
Cloud infrastructure refers to the physical and virtual components that make up
cloud computing environments. It includes the hardware, software, networking,
storage, and other components required to provide cloud services. Cloud
infrastructure typically encompasses the following components:
1. Servers: In cloud computing environments, servers are used to run software
applications and to store data. Cloud providers typically use virtual servers that can
be quickly provisioned and decommissioned as per the customers’ need.
2. Storage: Cloud infrastructure often includes large-scale storage systems that can
store data in various forms, including databases, files, and object storage. Cloud
providers offer scalable storage solutions, including block storage, file storage, and
object storage.
3. Networking: Cloud infrastructure includes networking components that connect
various servers and storage systems to form a virtualized environment. Cloud
providers offer various networking services such as virtual private cloud (VPC),
content delivery network (CDN), and load balancing services to provide a highly
available and scalable network.
4. Data Centers: Cloud providers operate large data centers around the world to
ensure high availability and low latency. They are designed with redundant power,
cooling, and networking infrastructure to provide a resilient environment for their
customers.
5. Virtualization platform: Cloud infrastructure relies heavily on virtualization
technologies, such as hypervisors and containers, to manage and allocate
computing resources to multiple virtual machines. These platforms offer improved
resource utilization and flexibility, making it easier to manage cloud infrastructure.
6. Management and monitoring tools: Cloud providers offer management and
monitoring tools that help users monitor the performance of their applications and
infrastructure, manage security, and control spending.
1. ECONOMICS OF PRIVATE CLOUDS
Private clouds are cloud computing environments dedicated to a single
organization. They provide many of the same benefits as public clouds,
including scalability, flexibility, and self-service capabilities, while providing
additional control over data and security. When it comes to the economics of
private clouds, there are several factors to consider:
1. Capital Expenditure: Building a private cloud requires significant capital
expenditure (CapEx) in terms of hardware, software, and infrastructure. This
includes purchasing servers, storage systems, network devices, and
virtualization software licenses. The organization will also need to invest in
building or leasing a data center facility or co-location facility specifically for the
private cloud.
2. Operating Expenditure: Once the private cloud is set up, there are ongoing
operating expenditures (OpEx) to manage the cloud infrastructure, including
ongoing hardware maintenance and upgrades, energy bills, and personnel
expenses.
3. Scalability: Private clouds offer scalability and the ability to add or remove
resources on-demand, which can help organizations reduce overall
infrastructure costs in the long run. However, due to the limited shared
resources, private clouds may not be as cost-effective as public clouds in terms
of scaling rapidly.
4. Customization and Control: Private clouds allow organizations to customize
and control the infrastructure and platform to meet the needs of specific
workloads, making it easy to ensure compliance with policies and regulations
while keeping tight security controls.
5. Return on Investment: Private clouds can provide a high return on
investment, especially for organizations that have high levels of in-house IT
infrastructure, complex workloads, and security-sensitive data. In such
scenarios, a private cloud can provide better control and visibility while
providing the same benefits and flexibility offered by a public cloud.
2. SOFTWARE PRODUCTIVITY IN THE CLOUD
Software productivity in the cloud refers to the ability of software development
teams to develop, deploy, and iterate their software applications rapidly and
efficiently within a cloud computing environment. Cloud computing provides
software development teams with many tools and services that can enhance
their productivity, including:
1. Collaboration and Communication: Cloud-based tools make it easier for
teams to collaborate and communicate with each other in real-time.
Collaborative tools such as project management software and chat applications
facilitate better teamwork, task allocation, and project tracking.
2. Access to Resources and Tools: Cloud computing enables easy access to
development tools and resources across multiple devices, operating systems,
and locations. It eliminates the need for software installations, upgrades, and
maintenance, allowing teams to focus on application development.
3. Scalability: Cloud computing makes it easy for teams to scale their software
applications by adding more computational resources and testing them
without affecting application performance. Scaling can be achieved quickly
and cost-effectively by using cloud-based infrastructure, services, and platform-
as-a-service (PaaS) offerings.
4. Automation: Cloud-based services for testing, integration, and deployment
can automate many of the processes that are traditionally performed manually.
This automation increases efficiency, reduces the risk of errors, and allows
software developers to focus on creating code.
5. Flexibility: Cloud computing services allow development teams to
experiment with and iterate on software applications rapidly and efficiently,
without being constrained by the limitations of an on-premises infrastructure.
Cloud-based development tools and services enable teams to innovate and
respond quickly to changing business needs.
6. Security: Cloud service providers typically offer comprehensive security
controls, monitoring, and compliance features that protect against cyber
threats and facilitate compliance with industry regulations.
ECONOMIES OF SCALE: PUBLIC VS. PRIVATE CLOUDS
Economies of scale is a concept that refers to the savings in production costs
that result from producing larger quantities of a product or service. When it
comes to cloud computing, economies of scale can be achieved at various
levels, including hardware, software, and services.
Public clouds typically offer economies of scale due to the large scale,
centralized infrastructure, and shared hardware resources. Public cloud
providers can spread their costs across multiple customers, reducing the cost
per customer. In contrast, private clouds typically require greater upfront
investment in infrastructure, making it challenging to achieve economies of
scale unless the organization has a large IT environment with extensive
infrastructure.
Here are some ways in which public and private clouds offer economies of
scale:
1. Hardware: Public clouds typically have massive data centers, allowing them
to buy hardware in bulk. For example, public cloud providers can procure
servers, storage, and network equipment at a lower cost than a single
organization can. In contrast, private clouds require organizations to purchase
their own hardware infrastructure, making it challenging to achieve economies
of scale.
2. Software: Public clouds typically have standard software and platforms that
can be used across multiple customers, allowing providers to spread the cost of
software licenses across many customers. This reduces the cost per customer,
making it more affordable for organizations to access the latest software
features and capabilities. In contrast, private clouds require organizations to
purchase their own software licenses, making it challenging to achieve
economies of scale.
(Figure: a private cloud is dedicated to a single company, while a public cloud is shared by multiple companies.)
3. Services: Public clouds offer a wide range of services, including compute,
storage, and analytics, which they can provide at a lower cost than individual
organizations due to economies of scale. Public clouds can also offer services
that are not financially viable for individual organizations, such as artificial
intelligence and machine learning. In contrast, private clouds typically require
organizations to manage their own services, which can result in higher costs
and lower productivity due to the need for redundant and unused resources.
Overall, public clouds offer greater economies of scale than private clouds due
to their ability to spread the cost of infrastructure, software, and services across
multiple customers. However, private clouds offer greater control and
customization, making them more viable for organizations with specialized
needs or regulatory requirements.
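The cost-spreading argument above is just fixed-cost amortization: the same infrastructure bill divided across more tenants. A small sketch with hypothetical numbers:

```python
def cost_per_customer(fixed_infrastructure_cost: float,
                      variable_cost_per_customer: float,
                      customers: int) -> float:
    """Average cost per tenant when a fixed cost is shared among tenants."""
    return fixed_infrastructure_cost / customers + variable_cost_per_customer

# A private cloud has one "customer" carrying the entire fixed cost;
# a public provider spreads the same cost across thousands of tenants.
private = cost_per_customer(1_000_000, 50, 1)       # 1,000,050.0 per tenant
public = cost_per_customer(1_000_000, 50, 10_000)   # 150.0 per tenant
```

The figures are invented, but the shape of the curve is the point: per-customer cost falls roughly as 1/N with the number of tenants, which is exactly the economy of scale public providers enjoy.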
UNIT - 3 SYLLABUS
No. of Hours: 11 Chapter/Book Reference: TB1 [Chapter - 2], TB2 [Chapter - 11]
Principles of Parallel and Distributed Computing: Parallel vs. distributed computing - Elements of
parallel computing - Hardware architectures for parallel processing, Approaches to parallel
programming - Laws of caution.
PARALLEL VS. DISTRIBUTED COMPUTING
Parallel computing refers to the use of multiple
processors or computers to perform a task
simultaneously. It involves breaking down a complex
task into smaller subtasks that can be executed
simultaneously, thus reducing the time required to
complete the task. Think of it as having multiple
people working together on different parts of a
project at the same time, making the overall process
faster.
Distributed computing, on the other hand, involves
processing a task distributed across multiple
computers or processors in a network. Each computer
or processor works independently on its designated
portion of the task, exchanging information as
needed. It's like a team that is spread across different
locations, with each team member working on their
assigned task and collaborating with others through
communication.
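A minimal illustration of the "break a task into subtasks" idea: summing a list by splitting it into chunks and summing the chunks concurrently, then combining the partial results. Threads are used here for simplicity; CPU-bound work in Python would typically use processes instead because of the global interpreter lock:

```python
from concurrent.futures import ThreadPoolExecutor

def parallel_sum(numbers, workers: int = 4) -> int:
    """Sum `numbers` by dividing the data into chunks (data decomposition)
    and summing each chunk concurrently (task decomposition), then
    combining the partial results."""
    data = list(numbers)
    if not data:
        return 0
    size = -(-len(data) // workers)  # ceiling division -> chunk size
    chunks = [data[i:i + size] for i in range(0, len(data), size)]
    with ThreadPoolExecutor(max_workers=workers) as pool:
        partial_sums = pool.map(sum, chunks)  # each worker sums one chunk
    return sum(partial_sums)                  # combine subtask results

# parallel_sum(range(1, 101)) -> 5050, same as sum(range(1, 101))
```

The same split/compute/combine pattern reappears in the elements of parallel computing discussed next: decomposition produces the chunks, and communication and synchronization combine the results.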
ELEMENTS OF PARALLEL COMPUTING
1. Task decomposition: This involves breaking down a complex task into
smaller, independent units of work called tasks. Each task represents a portion
of the overall computation that can be executed concurrently.
2. Data decomposition: Parallel computing often requires dividing the data
associated with a task into smaller subsets. These subsets can be processed
simultaneously by different processors or computers. This is especially useful
when dealing with large datasets.
3. Synchronization: In parallel computing, different tasks or processes may
need to coordinate and synchronize their actions. Synchronization ensures that
tasks are properly coordinated, data is shared as needed, and conflicts are
resolved.
4. Communication: Parallel computing involves sharing data and information
between different processors or computers. Efficient communication
mechanisms are essential for tasks to exchange necessary data and coordinate their work.
5. Load balancing: Load balancing is the distribution of tasks across multiple
processors or computers in a parallel computing system. It aims to evenly
distribute the workload to ensure that all resources are utilized effectively,
minimizing idle time and maximizing overall performance.
6. Fault tolerance: Parallel computing systems can encounter failures or errors.
Fault tolerance involves designing systems that can detect and recover from
failures, ensuring that the computation continues smoothly without significant
interruptions.
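The load balancing described in point 5 can be sketched with the simplest possible policy, round-robin assignment of tasks to workers. Real schedulers also weigh current load, data locality, and worker failures:

```python
from itertools import cycle

def round_robin(tasks, workers):
    """Assign tasks to workers in strict rotation so that no worker
    is favored and the workload stays evenly spread."""
    assignment = {w: [] for w in workers}
    for task, worker in zip(tasks, cycle(workers)):
        assignment[worker].append(task)
    return assignment

# Seven tasks over three workers: counts differ by at most one.
plan = round_robin(range(7), ["w1", "w2", "w3"])
```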
HARDWARE ARCHITECTURES FOR PARALLEL PROCESSING
1. Shared Memory Architecture: In this architecture, multiple processors or
cores share a common physical memory. All processors can access and modify
data stored in this shared memory, allowing them to work on different tasks
concurrently. This architecture requires synchronization mechanisms to ensure
data integrity and prevent conflicts.
2. Distributed Memory Architecture: In this architecture, each processor or core
has its own dedicated memory. Processors communicate and share data
through message passing, where messages are exchanged between different
processors. This architecture is commonly used in cluster computing or
massively parallel systems.
3. SIMD (Single Instruction, Multiple Data): SIMD architecture employs a single
control unit that can execute the same instruction on multiple data elements
simultaneously. It is suitable for tasks that involve repetitive or data-parallel
operations, such as image processing or simulations.
4. MIMD (Multiple Instruction, Multiple Data): MIMD architecture allows
multiple processors or cores to execute different instructions on different sets of
data simultaneously. Each processor operates independently, working on its
assigned task. MIMD architecture can be further classified into shared memory
MIMD and distributed memory MIMD based on the shared or distributed
memory design.
5. GPU (Graphics Processing Unit): GPUs are specialized hardware architectures
designed for efficient parallel processing of graphical and computational tasks.
They consist of a large number of cores that work in parallel, making them
especially suitable for parallel processing in fields like data science, machine
learning, and scientific computing.
6. FPGA (Field Programmable Gate Array): FPGAs are flexible hardware
architectures that can be configured and programmed to perform specific
parallel processing tasks efficiently. They offer high parallelism and can be
reconfigured, allowing for customization based on the specific application
requirements.
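The shared-memory versus distributed-memory distinction above can be mimicked in software. In the sketch below (a hedged illustration, not part of the original text), each Python worker process owns its slice of the data, in the spirit of the distributed memory model, and exchanges information with the parent only through explicit messages on a queue.

```python
# Sketch of the distributed-memory model: workers own their data and
# communicate results only via explicit messages (a multiprocessing queue).
from multiprocessing import Process, Queue

def worker(rank, numbers, out_queue):
    # Each process works on its own copy of the data (no shared memory)...
    local_total = sum(numbers)
    # ...and sends the result back to the parent as a message.
    out_queue.put((rank, local_total))

def distributed_sum(data, n_workers=3):
    out_queue = Queue()
    step = (len(data) + n_workers - 1) // n_workers
    procs = [Process(target=worker, args=(r, data[r * step:(r + 1) * step], out_queue))
             for r in range(n_workers)]
    for p in procs:
        p.start()
    # Collect one message per worker, in whatever order they arrive.
    total = sum(out_queue.get()[1] for _ in procs)
    for p in procs:
        p.join()
    return total

if __name__ == "__main__":
    print(distributed_sum(list(range(10))))  # 45
```

In a real distributed-memory system the message passing would cross machine boundaries (for example via MPI) rather than a local queue, but the programming discipline is the same: no data is shared implicitly.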
APPROACHES TO PARALLEL PROGRAMMING
1. Shared Memory Programming: This approach involves writing code that
accesses and modifies shared data structures in the shared memory of multiple
processors or cores. It typically uses constructs like threads, locks, and barriers
to synchronize concurrent access to shared data.
2. Message Passing Programming: In this approach, different tasks or processes
communicate by exchanging messages explicitly. Each task operates
independently with its own data, and data communication occurs through
explicit send and receive operations. Popular message passing
libraries/frameworks include MPI (Message Passing Interface) and PVM (Parallel
Virtual Machine).
3. Data Parallel Programming: Data parallel programming involves dividing a
large data set into smaller chunks and applying the same operation
concurrently to each chunk. This approach is suitable for tasks that can be
divided into parallelizable subtasks, such as image processing or matrix
computations. Languages like CUDA and OpenCL provide frameworks for data
parallel programming on GPUs.
4. Task Parallel Programming: Task parallel programming is based on dividing
a complex task into smaller, independent tasks that can be executed
concurrently. Each task represents a portion of the overall computation, and
tasks can be dynamically assigned to available processors or cores. Task-based
frameworks like OpenMP and Intel TBB (Threading Building Blocks) simplify
task parallel programming.
5. Functional Programming: Functional programming emphasizes
immutability and the absence of side effects, which facilitates parallel
execution. Languages like Haskell and Scala provide constructs for functional
programming and support parallelism through higher-order functions and
immutable data structures.
6. Hybrid Approaches: Many parallel programming approaches can be
combined to take advantage of different levels of parallelism. For example, a
program may use shared memory programming for intra-node parallelism and
message passing programming for inter-node communication in a distributed
computing system.
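The shared-memory approach (item 1) can be sketched with Python's standard threading module. The counter and lock below are illustrative assumptions; note too that CPython's global interpreter lock limits true CPU parallelism for threads, so this shows the synchronization constructs rather than a speedup.

```python
# Sketch of shared-memory programming: threads mutate a shared structure,
# with a lock synchronizing concurrent access (illustrative example).
import threading

class SharedCounter:
    def __init__(self):
        self.value = 0
        self._lock = threading.Lock()

    def add(self, amount):
        # The lock makes the read-modify-write sequence atomic across threads,
        # so no updates are lost to interleaving.
        with self._lock:
            self.value += amount

def run_workers(n_threads=4, increments=1000):
    counter = SharedCounter()

    def work():
        for _ in range(increments):
            counter.add(1)

    threads = [threading.Thread(target=work) for _ in range(n_threads)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()  # barrier-like wait: all workers finish before reading the result
    return counter.value

if __name__ == "__main__":
    print(run_workers())  # 4000: every increment survives thanks to the lock
```

The lock and join calls here correspond directly to the "locks and barriers" constructs mentioned above; OpenMP or Intel TBB provide the same ideas with less boilerplate in C and C++.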
SYLLABUS
UNIT - IV
No. of Hours: 11 Chapter/Book Reference: TB1 [Chapter - 3], TB2 [Chapter - 8]
Virtualization: Introduction - Characteristics of virtualized environments - Taxonomy of
virtualization techniques - Virtualization and cloud computing - Pros and cons of virtualization -
Technology example: VMware: full virtualization, Types of hardware virtualization: Full
virtualization - partial virtualization - para virtualization

VIRTUALIZATION
INTRODUCTION
Virtualization is the process of creating a virtual (rather than actual) version of
something, including virtual computer systems, storage devices, and networks.
Virtualization allows multiple virtual environments to run on a single physical
machine, known as a host, as if they were running on separate physical
machines.
The concept of virtualization has been around for several decades, but it has
gained significant popularity in recent years due to the increasing demand for
more efficient use of computing resources.
CHARACTERISTICS OF VIRTUALIZED ENVIRONMENTS
Virtualized environments, also known as virtual environments, have several
unique characteristics that differentiate them from traditional physical
environments. Here are some of the key characteristics:
1. Virtualization layer: Virtualized environments are built on a virtualization layer
that sits between the physical hardware and the operating system. This layer
enables multiple virtual machines to run on a single physical machine, and
provides a level of abstraction between the hardware and the software.
2. Resource pooling: Virtualization allows resources such as CPU, memory,
storage, and networking to be pooled and shared among multiple virtual
machines. This helps to optimize resource utilization and reduces waste.
3. Isolation: Each virtual machine is isolated from other virtual machines and
the physical hardware, which provides a level of security and prevents resource
contention.
4. Mobility: Virtual machines can be easily moved between physical machines,
which allows for greater flexibility in managing resources and enables
organizations to quickly adapt to changing business needs.
5. Scalability: Virtualized environments can be easily scaled up or down to meet
changing resource requirements, which helps to optimize resource utilization
and reduce costs.
6. Simplified management: Virtualization provides a centralized management
console that enables administrators to manage multiple virtual machines from
a single location, which simplifies management tasks and reduces the need for
manual intervention.
7. Improved disaster recovery: Virtualized environments make it easy to create
backup copies of virtual machines, which can be quickly restored in the event
of a disaster or outage. This helps to minimize downtime and reduce the risk of
data loss.
8. Enhanced performance: Virtualization technologies provide advanced
features such as live migration, storage thin provisioning, and network
optimization that help to improve performance and reduce resource waste.
TAXONOMY OF VIRTUALIZATION TECHNIQUES
Virtualization is a broad concept that encompasses various techniques and
technologies for creating virtual environments. Here is a taxonomy of some of
the most common virtualization techniques:
1. Server Virtualization: This technique involves creating multiple virtual servers
on a single physical server, which allows for more efficient use of computing
resources and reduces the need for expensive hardware.
2. Desktop Virtualization: This technique involves creating multiple virtual
desktops on a single physical machine, which allows users to access their
desktop environments from anywhere with an internet connection.
3. Storage Virtualization: This technique involves combining physical storage
devices into a single logical storage device, which provides greater flexibility in
managing storage resources and enables organizations to optimize storage
utilization.
4. Network Virtualization: This technique involves creating multiple virtual
networks on a single physical network, which allows for greater flexibility in
managing network resources and enables organizations to optimize network
utilization.
5. Application Virtualization: This technique involves creating a virtual
environment for an application, which enables it to run on any operating
system without the need for installation or configuration changes.
6. Database Virtualization: This technique involves creating a virtual
environment for a database, which enables it to be easily moved between
physical machines and provides greater flexibility in managing database
resources.
7. Hypervisor-based Virtualization: This technique involves using a hypervisor,
which is a layer of software that sits between the operating system and the
hardware, to create multiple virtual environments on a single physical machine.
8. Container-based Virtualization: This technique involves using containers,
which are lightweight virtual environments that share the underlying operating
system kernel, to create multiple isolated environments on a single physical
machine.
9. Cloud Virtualization: This technique involves using cloud computing
technologies to create virtual environments that can be easily scaled up or
down to meet changing resource requirements, and provides greater flexibility
in managing computing resources.
VIRTUALIZATION AND CLOUD COMPUTING
Virtualization and cloud computing work together in a number of ways. For
example, a cloud provider might use virtualization to create virtual machines
on its physical servers. These virtual machines can then be rented to customers
as part of a cloud service.
Here are some of the ways that virtualization and cloud computing are used
together:
* IaaS: IaaS provides customers with virtual machines, storage, and
networking resources that they can use to run their own applications.
* PaaS: PaaS provides customers with a platform on which to develop, deploy,
and run their applications. The platform includes the operating system,
middleware, and development tools.
* SaaS: SaaS provides customers with access to software applications over the
internet. The applications are hosted by the cloud provider and are accessed
by customers through a web browser.
PROS AND CONS OF VIRTUALIZATION
PROS
* Reduced IT costs: Virtualization can help you save money on hardware,
software, and power costs. By running multiple virtual machines on a single
physical machine, you can reduce the number of physical servers you need.
This can save you money on hardware costs, as well as software licensing
costs. Additionally, virtualization can help you reduce your power
consumption, as you'll be using fewer physical servers.
* Improved agility: Virtual machines can be easily provisioned and destroyed,
which makes it easier to deploy and scale applications. This can be a major
advantage for businesses that need to be able to quickly respond to
changes in demand.
* Enhanced disaster recovery: Virtual machines can be backed up and
restored quickly, which can help you recover from disasters more quickly.
This is because you can simply restore the virtual machine to a previous
state, rather than having to rebuild the entire physical server.
* Increased server utilization: Virtualization can help you increase the
utilization of your physical servers. This is because you can run multiple
virtual machines on a single physical server, which means that the server is
not sitting idle.
CONS
* Complexity: Virtualization can add complexity to your IT environment. This
is because you need to manage the hypervisor software, as well as the
virtual machines themselves. This can be a challenge for small businesses
that do not have a lot of IT staff.
* Performance overhead: There can be a performance overhead associated
with virtualization. This is because the hypervisor software adds an
additional layer of abstraction between the physical hardware and the
virtual machines. This can lead to a slight decrease in performance.
* Security risks: Virtualization can introduce new security risks. This is
because you are essentially creating multiple independent systems on a
single physical server. This can make it more difficult to secure your
environment.
* Vendor lock-in: Some virtualization software is proprietary, which can lock
you into a particular vendor. This can make it difficult to switch to a different
virtualization platform in the future.