
CLOUD COMPUTING
SEMESTER 5
UNIT - 3

HI COLLEGE

PARALLEL VS. DISTRIBUTED COMPUTING
Parallel computing refers to the use of multiple
processors or computers to perform a task
simultaneously. It involves breaking down a complex
task into smaller subtasks that can be executed
simultaneously, thus reducing the time required to
complete the task. Think of it as having multiple
people working together on different parts of a
project at the same time, making the overall process
faster.

Distributed computing, on the other hand, involves
spreading the processing of a task across multiple
computers or processors in a network. Each computer
or processor works independently on its designated
portion of the task, exchanging information as
needed. It's like a team that is spread across different
locations, with each team member working on their
assigned task and collaborating with others through
communication.
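
As a rough illustration of the parallel idea (a minimal Python sketch; the chunk
count and helper name are arbitrary choices, not part of any standard), a sum over
a list can be broken into independent subtasks that run on separate processes. In a
distributed system the same pattern would run on separate machines that exchange
their partial results over a network rather than inside one computer.

from concurrent.futures import ProcessPoolExecutor

def sum_chunk(chunk):
    """One subtask: sum one slice of the data."""
    return sum(chunk)

if __name__ == "__main__":
    data = list(range(1, 1001))
    # Break the overall task into four independent subtasks.
    chunks = [data[i::4] for i in range(4)]
    with ProcessPoolExecutor(max_workers=4) as pool:
        partials = pool.map(sum_chunk, chunks)   # subtasks run simultaneously
    print(sum(partials))                         # same result as sum(data): 500500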

ELEMENTS OF PARALLEL COMPUTING


1. Task decomposition: This involves breaking down a complex task into
smaller, independent units of work called tasks. Each task represents a portion
of the overall computation that can be executed concurrently.

2. Data decomposition: Parallel computing often requires dividing the data
associated with a task into smaller subsets. These subsets can be processed
simultaneously by different processors or computers. This is especially useful
when dealing with large datasets.

3. Synchronization: In parallel computing, different tasks or processes may
need to coordinate and synchronize their actions. Synchronization ensures that
tasks are properly coordinated, data is shared as needed, and conflicts are
resolved.

4. Communication: Parallel computing involves sharing data and information
between different processors or computers. Efficient communication
mechanisms are essential for tasks to exchange necessary data and coordinate
their actions. (A short sketch after this list illustrates several of these elements.)

5. Load balancing: Load balancing is the distribution of tasks across multiple
processors or computers in a parallel computing system. It aims to evenly
distribute the workload to ensure that all resources are utilized effectively,
minimizing idle time and maximizing overall performance.

6. Fault tolerance: Parallel computing systems can encounter failures or errors.
Fault tolerance involves designing systems that can detect and recover from
failures, ensuring that the computation continues smoothly without significant
interruptions.
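
A minimal Python sketch of several of these elements together (the names, chunk
sizes, and worker count are illustrative assumptions): the input list is split into
chunks (task and data decomposition), worker threads process their chunks
concurrently, a lock synchronizes updates to a shared total, and a queue carries
results back to the coordinator (communication). Python threads show only the
structure; for CPU-bound work, processes would be used for true parallelism.

import threading
import queue

def worker(chunk, results, lock, totals):
    """Process one chunk: compute a partial sum and report it."""
    partial = sum(chunk)
    with lock:                           # synchronization: one writer at a time
        totals["grand_total"] += partial
    results.put((len(chunk), partial))   # communication back to the coordinator

def parallel_sum(data, n_workers=4):
    # Task/data decomposition: split the data into roughly equal chunks
    # (a simple form of load balancing).
    size = max(1, len(data) // n_workers)
    chunks = [data[i:i + size] for i in range(0, len(data), size)]

    results = queue.Queue()
    lock = threading.Lock()
    totals = {"grand_total": 0}

    threads = [threading.Thread(target=worker, args=(c, results, lock, totals))
               for c in chunks]
    for t in threads:
        t.start()
    for t in threads:
        t.join()                         # wait for every worker to finish

    # Drain the per-chunk results sent through the queue.
    while not results.empty():
        n_items, partial = results.get()
        print(f"chunk of {n_items} items -> partial sum {partial}")

    return totals["grand_total"]

print(parallel_sum(list(range(1, 101))))  # 5050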

HARDWARE ARCHITECTURES FOR PARALLEL PROCESSING


1. Shared Memory Architecture: In this architecture, multiple processors or
cores share a common physical memory. All processors can access and modify
data stored in this shared memory, allowing them to work on different tasks
concurrently. This architecture requires synchronization mechanisms to ensure
data integrity and prevent conflicts.

2. Distributed Memory Architecture: In this architecture, each processor or core
has its own dedicated memory. Processors communicate and share data
through message passing, where messages are exchanged between different
processors. This architecture is commonly used in cluster computing or
massively parallel systems.

3. SIMD (Single Instruction, Multiple Data): SIMD architecture employs a single
control unit that can execute the same instruction on multiple data elements
simultaneously. It is suitable for tasks that involve repetitive or data-parallel
operations, such as image processing or simulations (see the sketch after this list).

4. MIMD (Multiple Instruction, Multiple Data): MIMD architecture allows
multiple processors or cores to execute different instructions on different sets of
data simultaneously. Each processor operates independently, working on its
assigned task. MIMD architecture can be further classified into shared memory
MIMD and distributed memory MIMD based on the shared or distributed
memory design.

5. GPU (Graphics Processing Unit): GPUs are specialized hardware architectures
designed for efficient parallel processing of graphical and computational tasks.
They consist of a large number of cores that work in parallel, making them
especially suitable for parallel processing in fields like data science, machine
learning, and scientific computing.

6. FPGA (Field Programmable Gate Array): FPGAs are flexible hardware
architectures that can be configured and programmed to perform specific
parallel processing tasks efficiently. They offer high parallelism and can be
reconfigured, allowing for customization based on the specific application
requirements.
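
As a loose, CPU-side illustration of the SIMD idea referenced above (a sketch only,
assuming the third-party NumPy library is installed; it does not describe any
particular hardware): an array library applies one operation across many data
elements at once, and its underlying kernels typically use the processor's SIMD
instructions where available, whereas the explicit loop handles one element per
iteration.

import numpy as np

pixels = np.random.rand(1_000_000)            # e.g. brightness values of an image

# SIMD-style / data-parallel: one operation applied to every element at once.
brightened_vectorized = pixels * 1.2 + 0.05

# Scalar style: the same work expressed one element per loop iteration.
brightened_loop = [p * 1.2 + 0.05 for p in pixels]

print(np.allclose(brightened_vectorized, brightened_loop))  # True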

APPROACHES TO PARALLEL PROGRAMMING


1. Shared Memory Programming: This approach involves writing code that
accesses and modifies shared data structures in the shared memory of multiple
processors or cores. It typically uses constructs like threads, locks, and barriers
to synchronize concurrent access to shared data.

2. Message Passing Programming: In this approach, different tasks or processes
communicate by exchanging messages explicitly. Each task operates
independently with its own data, and data communication occurs through
explicit send and receive operations. Popular message passing
libraries/frameworks include MPI (Message Passing Interface) and PVM (Parallel
Virtual Machine). A sketch of this style appears after this list.

3. Data Parallel Programming: Data parallel programming involves dividing a
large data set into smaller chunks and applying the same operation
concurrently to each chunk. This approach is suitable for tasks that can be
divided into parallelizable subtasks, such as image processing or matrix
computations. Frameworks such as CUDA and OpenCL support data parallel
programming on GPUs (a CPU-side sketch of this style also follows the list).

4. Task Parallel Programming: Task parallel programming is based on dividing
a complex task into smaller, independent tasks that can be executed
concurrently. Each task represents a portion of the overall computation, and
tasks can be dynamically assigned to available processors or cores. Task-based
frameworks like OpenMP and Intel TBB (Threading Building Blocks) simplify
task parallel programming.

5. Functional Programming: Functional programming emphasizes
immutability and the absence of side effects, which facilitates parallel
execution. Languages like Haskell and Scala provide constructs for functional
programming and support parallelism through higher-order functions and
immutable data structures.

6. Hybrid Approaches: Many parallel programming approaches can be
combined to take advantage of different levels of parallelism. For example, a
program may use shared memory programming for intra-node parallelism and
message passing programming for inter-node communication in a distributed
computing system.
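
A minimal sketch of the message passing style, using Python's standard
multiprocessing module as a stand-in for MPI (the function names and the chunk
split are illustrative assumptions): each process keeps its own private data, and
sharing happens only through explicit send and receive calls over a pipe.

from multiprocessing import Process, Pipe

def worker(conn, chunk):
    """Runs in a separate process with its own private copy of the data."""
    partial = sum(chunk)
    conn.send(partial)                    # explicit send, in the spirit of MPI_Send
    conn.close()

if __name__ == "__main__":
    data = list(range(1, 101))
    chunks = [data[:50], data[50:]]

    pipes, procs = [], []
    for chunk in chunks:
        parent_end, child_end = Pipe()
        p = Process(target=worker, args=(child_end, chunk))
        p.start()
        pipes.append(parent_end)
        procs.append(p)

    total = sum(conn.recv() for conn in pipes)   # explicit receives
    for p in procs:
        p.join()

    print(total)                                 # 5050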
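
The data parallel and task parallel styles can also be sketched with Python's
standard library (a rough CPU-side analogy only; CUDA/OpenCL and OpenMP/TBB
operate at a much lower level, and the function names here are illustrative
assumptions). The first part applies the same function concurrently to every chunk
of a data set; the second submits different, independent tasks to a pool that runs
them as workers become free.

from multiprocessing import Pool
from concurrent.futures import ThreadPoolExecutor

def normalize(chunk):
    """Same operation applied to every chunk (data parallelism)."""
    biggest = max(chunk)
    return [x / biggest for x in chunk]

def load_config():
    """One independent task (task parallelism)."""
    return {"workers": 4}

def scan_input():
    """Another independent task that can run alongside load_config."""
    return list(range(10))

if __name__ == "__main__":
    data = list(range(1, 101))
    chunks = [data[i:i + 25] for i in range(0, len(data), 25)]

    # Data parallel: one function, many chunks, processed concurrently.
    with Pool(processes=4) as pool:
        normalized = pool.map(normalize, chunks)
    print(len(normalized), "chunks normalized")

    # Task parallel: different functions run as independent concurrent tasks.
    with ThreadPoolExecutor(max_workers=2) as executor:
        config_future = executor.submit(load_config)
        scan_future = executor.submit(scan_input)
        print(config_future.result(), len(scan_future.result()))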
