
BCSE412L - Parallel Computing

Faculty Name: Dr. A. ILAVENDHAN


School of Computer Science and Engineering (SCOPE)
COURSE OBJECTIVES

• To introduce the fundamentals of parallel computing architectures and paradigms.

• To understand the technologies, system architecture, and communication architecture that have driven the growth of parallel computing systems.

• To develop and execute basic parallel applications using programming models and tools.
COURSE OUTCOME
Students who complete this course successfully are expected to:

1. Comprehend the hardware and software organization of parallel computing systems.

2. Design and implement parallel algorithms.

3. Experiment with mechanisms such as client/server and P2P algorithms, and remote procedure calls (RPC/RMI).

4. Analyse the requirements for programming parallel systems and critically evaluate the strengths and weaknesses of parallel programming models.

5. Analyse the efficiency of a parallel processing system and evaluate the types of applications for which parallel programming is useful.
Motivation for Parallelism
• To address various challenges associated with increasing computational
demands and the limitations of traditional sequential processing.

The primary motivations for incorporating parallelism include:

Increased Computational Power:

As the complexity of computational problems grows, the demand for increased processing power also rises. Parallelism allows multiple processors to work together, providing the potential for significant speedup in computations.
Motivation for Parallelism
• Performance Improvement:

Parallel processing enables the simultaneous execution of multiple tasks, leading to improved overall system performance. This is particularly important for applications that require rapid execution, such as simulations, scientific calculations, and data analytics.
Motivation for Parallelism
• Handling Large Datasets:

Many real-world problems involve large datasets that cannot be efficiently processed by a single processor. Parallel computing allows for the concurrent processing of data, reducing the time required to analyze or manipulate extensive datasets.
Motivation for Parallelism
• Scientific and Engineering Simulations:

In fields like physics, chemistry, and engineering, simulations of complex systems often involve solving demanding mathematical equations. Parallel computing accelerates these simulations by distributing the workload among multiple processors.
Motivation for Parallelism
• Time-Critical Applications:

Some applications have strict time constraints, such as real-time systems, financial modeling, and multimedia processing. Parallelism enables the execution of multiple tasks simultaneously, ensuring timely completion of critical operations.
Motivation for Parallelism
• Energy Efficiency:

Parallel processing can contribute to energy efficiency by allowing for the distribution of workloads among multiple processors. This can be more power-efficient than running a single processor at maximum capacity.
Motivation for Parallelism
• Cost-Effective Solutions:

Parallel computing can provide cost-effective solutions by utilizing commodity hardware with multiple processors. This approach can be more economical than investing in a single high-performance processor.
Motivation for Parallelism
• Emerging Technologies:

The rise of parallel architectures, such as multi-core processors and graphics processing units (GPUs), has made parallelism more accessible. Software development for parallel computing has become increasingly important to harness the capabilities of these architectures.
Key Concepts of Parallelism:
• Parallelism is a fundamental concept in computer science and computer architecture that involves the simultaneous execution of multiple tasks or processes to improve overall system performance.

Here are some key concepts of parallelism:

Task Decomposition:

Breaking down a large problem into smaller, independent tasks that can be executed concurrently.

Concurrency:

Simultaneous execution of multiple tasks, either at the same time or in overlapping time intervals.
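
To make task decomposition and concurrency concrete, here is a minimal sketch (not part of the original material) using standard C++ threads: a large summation is broken into independent chunks, and each chunk is processed by its own concurrently running thread. The chunk count and problem size are arbitrary illustrative choices.

#include <iostream>
#include <numeric>
#include <thread>
#include <vector>

// Task decomposition: the sum of a large vector is broken into
// independent chunks, and each chunk is summed by its own thread.
int main() {
    std::vector<int> data(1'000'000, 1);        // problem data
    const int num_tasks = 4;                    // arbitrary decomposition
    std::vector<long long> partial(num_tasks, 0);
    std::vector<std::thread> workers;

    const std::size_t chunk = data.size() / num_tasks;
    for (int t = 0; t < num_tasks; ++t) {
        workers.emplace_back([&, t] {
            auto begin = data.begin() + t * chunk;
            auto end   = (t == num_tasks - 1) ? data.end() : begin + chunk;
            // Each task works only on its own chunk and writes only its own
            // slot of 'partial', so no locking is needed during the sums.
            partial[t] = std::accumulate(begin, end, 0LL);
        });
    }
    for (auto& w : workers) w.join();           // wait for all tasks

    long long total = std::accumulate(partial.begin(), partial.end(), 0LL);
    std::cout << "total = " << total << "\n";
}

Joining the threads before combining the partial sums guarantees that every chunk has finished before the final result is assembled.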
Key Concepts of Parallelism:
Parallelism:
The actual simultaneous execution of multiple tasks, which may involve
multiple processors, cores, or threads.

Data Parallelism:
Involves applying the same operation to multiple data elements simultaneously.

Task Parallelism:
Concurrent execution of independent tasks or processes, where each task
can be executed in parallel.
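
A short, hedged C++ sketch of the difference: the first part applies the same operation to different slices of one array (data parallelism), while the second runs two unrelated computations at the same time (task parallelism). The operations (doubling, norm, maximum) are placeholders chosen only for illustration.

#include <algorithm>
#include <cmath>
#include <iostream>
#include <thread>
#include <vector>

int main() {
    // Data parallelism: both threads perform the SAME operation (doubling),
    // each on a different half of the data.
    std::vector<double> v(8, 1.5);
    auto double_range = [&](std::size_t lo, std::size_t hi) {
        for (std::size_t i = lo; i < hi; ++i) v[i] *= 2.0;
    };
    std::thread d1(double_range, 0, v.size() / 2);
    std::thread d2(double_range, v.size() / 2, v.size());
    d1.join();
    d2.join();

    // Task parallelism: two DIFFERENT, independent tasks run concurrently.
    double norm = 0.0;
    double maximum = 0.0;
    std::thread t1([&] { for (double x : v) norm += x * x; norm = std::sqrt(norm); });
    std::thread t2([&] { maximum = *std::max_element(v.begin(), v.end()); });
    t1.join();
    t2.join();

    std::cout << "norm = " << norm << ", max = " << maximum << "\n";
}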
Key Concepts of Parallelism:
Parallel Architectures:
Different configurations of hardware that support parallel processing,
including shared-memory and distributed-memory architectures.

Parallel Programming Models:

Frameworks and models that allow developers to express parallelism in software. Examples include multithreading, message passing, and task parallelism.


Key Concepts of Parallelism:
Synchronization:
Coordination of parallel tasks to ensure proper order of execution and
consistency of shared data.

Load Balancing:
Distributing the workload evenly among processors to maximize system
efficiency and prevent some processors from idling while others are
overloaded.

Scalability:
The ability of a parallel system to efficiently handle an increasing number
of processors or workload.
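
Scalability is commonly measured with speedup and efficiency. The formulas below do not appear on the original slide and are added as a standard reference, where T_1 is the single-processor time, T_p the time on p processors, and f the serial fraction of the program:

Speedup:    S(p) = T_1 / T_p
Efficiency: E(p) = S(p) / p
Amdahl's law (an upper bound on speedup): S(p) <= 1 / (f + (1 - f) / p), which approaches 1/f as p grows.

For example, if 10% of a program must run serially (f = 0.1), the speedup can never exceed 10, no matter how many processors are added.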
Challenges of Parallelism:
Synchronization Overhead:
Ensuring proper coordination among parallel tasks may introduce overhead,
as synchronization mechanisms like locks and barriers can impact performance.

Load Imbalance:
Uneven distribution of workload among processors can lead to inefficiencies,
where some processors are underutilized while others are overloaded.

Communication Overhead:
In distributed-memory systems, the communication between processors
introduces overhead. Minimizing communication overhead is crucial for optimal
performance.
Challenges of Parallelism:
Dependency Management:
Managing dependencies between tasks and ensuring that they execute in
the correct order without data inconsistencies.

Granularity Issues:
Determining the appropriate size of tasks (granularity) is challenging.
Fine-grained tasks may lead to overhead, while coarse-grained tasks may limit
parallelism.
Challenges of Parallelism:
Scalability Limits:

Diminishing returns on performance as the number of processors increases, influenced by factors such as communication overhead and load balancing.

Programming Complexity:

Developing parallel software can be complex and error-prone. Dealing with issues like race conditions and deadlocks requires careful programming.
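
As a hedged illustration of these pitfalls, the following C++ sketch shows a shared counter whose updates would race if it were a plain variable (std::atomic makes each increment indivisible), and a transfer function that locks two mutexes with std::scoped_lock so that concurrent transfers in opposite directions cannot deadlock. The Account structure and the amounts are invented for the example.

#include <atomic>
#include <iostream>
#include <mutex>
#include <thread>

std::atomic<long> counter{0};   // atomic avoids the classic lost-update race

struct Account {
    std::mutex m;
    long balance = 100;
};

// std::scoped_lock acquires both mutexes without deadlocking, regardless of
// the order in which the accounts are passed.
void transfer(Account& from, Account& to, long amount) {
    std::scoped_lock lock(from.m, to.m);
    from.balance -= amount;
    to.balance   += amount;
}

int main() {
    // Race-condition demo: with a plain long, some increments could be lost;
    // with std::atomic the result is always 200000.
    std::thread a([] { for (int i = 0; i < 100000; ++i) ++counter; });
    std::thread b([] { for (int i = 0; i < 100000; ++i) ++counter; });
    a.join(); b.join();
    std::cout << "counter = " << counter << "\n";

    // Deadlock-avoidance demo: transfers in opposite directions run concurrently.
    Account x, y;
    std::thread t1([&] { transfer(x, y, 10); });
    std::thread t2([&] { transfer(y, x, 20); });
    t1.join(); t2.join();
    std::cout << "x = " << x.balance << ", y = " << y.balance << "\n";
}

Without std::scoped_lock (or a consistent lock ordering), the two transfers could each grab one mutex and wait forever for the other.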
Challenges of Parallelism:
Debugging and Profiling:
Debugging parallel programs can be challenging, and profiling tools are
needed to identify performance bottlenecks and optimize parallel code.

Heterogeneous Architectures:
The presence of diverse hardware architectures, such as CPUs, GPUs, and
accelerators, requires specialized programming approaches to harness their
parallel processing capabilities.
Overview of Parallel computing
Parallel computing is a type of computation in which many
calculations or processes are carried out simultaneously, with
the goal of solving a problem more quickly.
Types of parallel computing
There are several types of parallel computing architectures and models, each with its own
characteristics. Here are some common types:

Bit-level Parallelism:
• Involves processing multiple bits of data simultaneously.
• Primarily used in specialized processors and hardware.
Instruction-level Parallelism (ILP):
• Exploits parallelism at the instruction level.
• Pipelining and superscalar architectures are examples of ILP.
Data-level Parallelism:
• Focuses on dividing data into independent chunks for parallel processing.
• SIMD (Single Instruction, Multiple Data) and vector processing are examples.
Types of parallel computing
Task-level Parallelism:
• Divides a program into independent tasks that can be executed in parallel.
• Commonly used in parallel programming.

Thread-level Parallelism (TLP):
• Involves executing multiple threads simultaneously.
• Common in multi-threaded programming and multi-core processors.

Process-level Parallelism:
• Involves the simultaneous execution of multiple processes.
• Often used in distributed computing and clusters.
Types of parallel computing
Memory-level Parallelism (MLP):
• Exploits parallelism in accessing memory.
• Exploited by techniques such as out-of-order execution.

Pipeline Parallelism:
• Divides a task into stages, with each stage processed concurrently (a minimal two-stage sketch appears after this list).
• Common in modern CPU architectures.

Cluster Computing:
• Connects multiple computers (nodes) to work together on a task.
• Often used in scientific research and large-scale data processing.
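
Referring back to pipeline parallelism above, here is a minimal two-stage sketch using standard C++ threads, with a small queue as the hand-off between the stages. The stage work (squaring and printing) is a placeholder, and the queue/condition-variable plumbing is only one of several ways to connect stages.

#include <condition_variable>
#include <iostream>
#include <mutex>
#include <queue>
#include <thread>

int main() {
    std::queue<int> buffer;          // hand-off between stage 1 and stage 2
    std::mutex m;
    std::condition_variable cv;
    bool done = false;

    // Stage 1: produces values and pushes them into the shared buffer.
    std::thread stage1([&] {
        for (int i = 1; i <= 5; ++i) {
            int value = i * i;                     // stage-1 work (placeholder)
            {
                std::lock_guard<std::mutex> lock(m);
                buffer.push(value);
            }
            cv.notify_one();
        }
        { std::lock_guard<std::mutex> lock(m); done = true; }
        cv.notify_one();
    });

    // Stage 2: consumes values as soon as they arrive, overlapping with stage 1.
    std::thread stage2([&] {
        while (true) {
            std::unique_lock<std::mutex> lock(m);
            cv.wait(lock, [&] { return !buffer.empty() || done; });
            if (buffer.empty() && done) break;     // pipeline drained
            int value = buffer.front();
            buffer.pop();
            lock.unlock();
            std::cout << "stage 2 got " << value << "\n";  // stage-2 work
        }
    });

    stage1.join();
    stage2.join();
}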
Types of parallel computing
SIMD (Single Instruction, Multiple Data):

• Executes the same instruction on multiple data elements simultaneously.

• Commonly used in graphics processing units (GPUs).

MIMD (Multiple Instruction, Multiple Data):

• Allows for multiple processors to execute different instructions on different data.

• Found in multi-core processors and distributed systems.
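
As a rough, hedged contrast between the two models: the loop below applies one operation across many data elements, the kind of code a vectorizing compiler can map to SIMD instructions (a GPU kernel would express the same pattern directly), while the threads afterwards execute different instructions on different data, which is the MIMD pattern of multi-core CPUs. The computations themselves are placeholders.

#include <cstdio>
#include <thread>
#include <vector>

int main() {
    // SIMD-style (data-level) parallelism: one operation, many data elements.
    // A vectorizing compiler can turn this loop into SIMD instructions.
    std::vector<float> a(1024, 1.0f), b(1024, 2.0f), c(1024);
    for (std::size_t i = 0; i < c.size(); ++i)
        c[i] = a[i] + b[i];

    // MIMD-style parallelism: different instructions on different data,
    // here expressed as two threads running unrelated computations.
    long sum = 0;
    double product = 1.0;
    std::thread t1([&] { for (int i = 1; i <= 100; ++i) sum += i; });
    std::thread t2([&] { for (int i = 1; i <= 10; ++i) product *= 1.5; });
    t1.join();
    t2.join();

    std::printf("c[0] = %.1f, sum = %ld, product = %.3f\n", c[0], sum, product);
}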


Applications of Parallel Computing
• One of the primary applications of parallel computing is in databases and data mining.

• The real-time simulation of systems is another use of parallel computing.

• Technologies such as networked video and multimedia.

• Science and Engineering.

• Collaborative work environments.

• The concept of parallel computing is used by augmented reality, advanced graphics, and virtual reality.
Advantages of Parallel computing
• In parallel computing, more resources are applied to a task, which decreases completion time and can cut costs. Parallel clusters can also be built from inexpensive commodity components.

• Compared with serial computing, parallel computing can solve larger problems in less time.

• For simulating, modeling, and understanding complex, real-world phenomena, parallel computing is far better suited than serial computing.

• Many problems are so large that it is impractical or impossible to solve them on a single computer; parallel computing makes such problems tractable.

• One of the main advantages of parallel computing is that it allows several tasks to be carried out at the same time by using multiple computing resources.
Limitation of Parallel computing
• Communication and synchronization between the multiple sub-tasks and processes are difficult to achieve.

• Algorithms must be structured so that they can be executed in a parallel manner.

• Writing a parallelism-based program well requires technically skilled and experienced programmers.

• Multi-core architectures have high power consumption.
