
CHAPTER SIX

PARALLEL ALGORITHM
• An algorithm is a sequence of instructions followed to solve a problem.

• While designing an algorithm, we should consider the architecture of the computer on which the algorithm will be executed.

• As per the architecture, there are two types of computers:

 Sequential Computer

 Parallel Computer

 Sequential Algorithm: An algorithm in which consecutive steps of instructions are executed in chronological order to solve a problem.
Parallel Algorithm: The problem is divided into sub-problems, which are executed in parallel to produce individual outputs. These individual outputs are then combined to obtain the final desired output.

• It is not easy to divide a large problem into sub-problems.

• Sub-problems may have data dependency among them.

• It has been found that the time the processors spend communicating with each other can exceed the actual processing time.

• So, while designing a parallel algorithm, proper CPU utilization should be considered to obtain an efficient algorithm.
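As a sketch of the divide/solve/combine pattern described above, the hypothetical function below splits a list into chunks, sums each chunk concurrently, and combines the partial sums. It uses Python's `ThreadPoolExecutor` purely to illustrate the structure; CPython threads do not give true CPU parallelism for this kind of workload.

```python
from concurrent.futures import ThreadPoolExecutor

def parallel_sum(data, n_workers=4):
    """Divide the problem into sub-problems (chunks), solve them
    concurrently, then combine the individual outputs."""
    chunk = max(1, len(data) // n_workers)
    chunks = [data[i:i + chunk] for i in range(0, len(data), chunk)]
    with ThreadPoolExecutor(max_workers=n_workers) as pool:
        partials = list(pool.map(sum, chunks))  # sub-problems solved in parallel
    return sum(partials)  # combine partial results into the final output
```

Note that the chunks have no data dependency among them, which is what makes this problem easy to divide; problems with dependencies between sub-problems are much harder to parallelize.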
Parallelism
• Parallelism is the process of executing several sets of instructions simultaneously. It reduces the total computational time.

• Parallelism can be implemented by using parallel computers, i.e. computers with many processors.

• Parallel computers require parallel algorithms, programming languages, compilers, and operating systems that support multitasking.
Model of Computation
• To design an algorithm properly, we must have a clear idea of the basic model of
computation in a parallel computer.

• Both sequential and parallel computers operate on a set (stream) of instructions called an algorithm.

• This set of instructions (the algorithm) tells the computer what to do in each step.

• Depending on the instruction and data stream, computers can be classified into four
categories:
 Single Instruction, Single Data stream (SISD) computers

 Single Instruction, Multiple Data stream (SIMD) computers

 Multiple Instruction, Single Data stream (MISD) computers

 Multiple Instruction, Multiple Data stream (MIMD) computers
 SISD Computers

• SISD computers contain one control unit, one processing unit, and one memory unit.

• In this type of computer, the processor receives a single stream of instructions from the control unit and operates on a single stream of data from the memory unit.

 SIMD Computers

• SIMD computers contain one control unit, multiple processing units, and shared
memory or interconnection network.

• Here, a single control unit sends instructions to all processing units. During computation, at each step, all the processors receive a single set of instructions from the control unit and operate on different sets of data from the memory unit.
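The SIMD style can be sketched with a toy sequential simulation (hypothetical names): one instruction, broadcast by the control unit, is applied by every processing unit to its own data element. Real SIMD hardware, or vectorized libraries such as NumPy, executes this in lockstep rather than in a loop.

```python
def simd_step(instruction, data_streams):
    """Simulate one SIMD step: the single control unit broadcasts one
    instruction; each processing unit applies it to its own datum."""
    return [instruction(x) for x in data_streams]

# Each "processor" holds a different datum; all execute the same add-one.
result = simd_step(lambda x: x + 1, [10, 20, 30, 40])
```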
 MISD Computers

• As the name suggests, MISD computers contain multiple control units, multiple
processing units, and one common memory unit.

• Here, each processor has its own control unit and they share a common memory unit.

• All the processors get instructions individually from their own control units and operate on a single stream of data as per the instructions they have received. These processors operate simultaneously.
 MIMD Computers

• MIMD computers have multiple control units, multiple processing units, and a
shared memory or interconnection network.

• Here, each processor has its own control unit, local memory unit, and arithmetic and
logic unit.

• They receive different sets of instructions from their respective control units and
operate on different sets of data.
ANALYSIS OF PARALLEL ALGORITHM
• Analysis of an algorithm helps us determine whether the algorithm is useful or not.

• Generally, an algorithm is analyzed based on its execution time (time complexity) and the amount of space it requires (space complexity).

• Parallel algorithms are designed to improve the computation speed of a computer.

• For analyzing a parallel algorithm, we normally consider the following parameters:

 Time complexity (Execution Time),

 Total number of processors used, and

 Total cost.
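The total cost of a parallel algorithm is commonly taken as its execution time multiplied by the number of processors used. The hypothetical helpers below compute cost and speedup for an assumed example: summing n = 1024 numbers with p = 8 processors takes roughly n/p + log2(p) = 131 steps, versus n − 1 = 1023 sequential steps.

```python
def parallel_cost(parallel_time, n_processors):
    """Total cost = parallel execution time x number of processors."""
    return parallel_time * n_processors

def speedup(sequential_time, parallel_time):
    """Speedup = best sequential time / parallel time."""
    return sequential_time / parallel_time
```

A parallel algorithm is cost-optimal when its total cost matches the running time of the best known sequential algorithm.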
• Time complexity of an algorithm can be classified into three
categories:
 Worst-case complexity

 Average-case complexity

 Best-case complexity

• Asymptotic notation is the easiest way to describe the fastest and slowest possible execution times of an algorithm, using upper and lower bounds on its growth.

• For this, we use the following notations:


 Big O notation

 Omega notation

 Theta notation
Parallel Algorithmic Techniques
• As in sequential algorithm design, in parallel algorithm design there are many
general techniques that can be used across a variety of problem areas.

• Some of these are variants of standard sequential techniques, while others are new to
parallel algorithms.

• In this section we introduce some of these techniques, including

 Parallel divide-and-conquer

 Randomization and

 Parallel pointer manipulation.


Parallel Divide-and-conquer
• A divide-and-conquer algorithm splits the problem to be solved into sub-problems that are easier to solve than the original problem, solves the sub-problems, and merges their solutions to construct a solution to the original problem.

• The divide-and-conquer paradigm improves program modularity and often leads to simple and efficient algorithms.

• It has therefore proven to be a powerful tool for sequential algorithm designers.

• Divide-and-conquer plays an even more prominent role in parallel algorithm design.

• Because the sub-problems created in the first step are typically independent, they can be solved in parallel.

• Often the sub-problems are solved recursively, and thus the next divide step yields even more sub-problems to be solved in parallel.
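Merge sort is the classic illustration: the two halves are independent sub-problems, so they can be sorted at the same time and then merged. The sketch below (hypothetical names) parallelizes only the top-level split using Python's `ThreadPoolExecutor`, delegating the halves to `sorted` for brevity; a full implementation would recurse in parallel at every level.

```python
from concurrent.futures import ThreadPoolExecutor

def merge(left, right):
    """Combine step: merge two sorted lists into one sorted list."""
    out, i, j = [], 0, 0
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:
            out.append(left[i]); i += 1
        else:
            out.append(right[j]); j += 1
    return out + left[i:] + right[j:]

def parallel_mergesort(data):
    """Divide: split in half. The halves are independent,
    so solve them concurrently, then merge the results."""
    if len(data) <= 1:
        return list(data)
    mid = len(data) // 2
    with ThreadPoolExecutor(max_workers=2) as pool:
        left_future = pool.submit(sorted, data[:mid])
        right_future = pool.submit(sorted, data[mid:])
        left, right = left_future.result(), right_future.result()
    return merge(left, right)
```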


Randomization
• Random numbers are used in parallel algorithms to ensure that processors can make local decisions which, with high probability, add up to good global decisions.

• Here we consider three uses of randomness.

 Sampling: The idea is to select a representative sample from a set of elements. Often, a problem can be solved by selecting a sample, solving the problem on that sample, and then using the solution for the sample to guide the solution for the original set.

• For example, suppose we want to sort a collection of integer keys. This can be accomplished by partitioning the keys into buckets and then sorting within each bucket.

• Random sampling is used to determine the boundaries of the intervals. First, each processor selects a random sample of its keys. Next, all of the selected keys are sorted together.

• Finally, these keys are used as the boundaries. Such random sampling is also used in many other parallel algorithms.
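The sampling step just described can be sketched as follows (a sequential simulation with hypothetical names): each "processor" contributes a random sample of its local keys, the samples are sorted together, and evenly spaced sample keys become the interval boundaries.

```python
import random

def sample_boundaries(local_keys_per_processor, n_buckets):
    """Sample-sort sketch: each 'processor' draws a random sample of
    its keys; the sorted samples yield the bucket boundaries."""
    sample = []
    for keys in local_keys_per_processor:
        # Each processor samples up to n_buckets of its own keys.
        sample.extend(random.sample(keys, min(len(keys), n_buckets)))
    sample.sort()
    # Pick n_buckets - 1 evenly spaced sample keys as boundaries.
    step = max(1, len(sample) // n_buckets)
    return sample[step::step][:n_buckets - 1]
```

With high probability the resulting buckets are of roughly equal size, so no processor is left with far more work than the others.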
 Symmetry breaking: Another use of randomness is in symmetry breaking.

 For example, consider the problem of selecting a large independent set of vertices in
a graph in parallel.

 A set of vertices is independent if no two are neighbors.

 Randomness can break such symmetry: each vertex makes an independent random choice, so that with high probability no two neighboring vertices select themselves at the same time.
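One round of this idea, in the style of Luby's randomized independent-set algorithm, can be sketched as follows (a sequential simulation with hypothetical names): each vertex draws a random priority and joins the independent set only if its priority beats those of all its neighbors.

```python
import random

def one_round_independent_set(adjacency):
    """One randomized round: every vertex draws a random priority; a
    vertex joins the set if its priority beats all its neighbours'.
    Since neighbours compare the same priorities, no two adjacent
    vertices can both win, so the result is independent."""
    priority = {v: random.random() for v in adjacency}
    return {v for v in adjacency
            if all(priority[v] > priority[u] for u in adjacency[v])}
```

In a parallel setting every vertex evaluates its own condition simultaneously; repeated rounds on the remaining graph grow a large independent set with high probability.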

 Load balancing: One way to quickly partition a large number of data items into a collection of approximately evenly sized subsets is to randomly assign each element to a subset.

 This technique works best when the average size of a subset is at least logarithmic in
the size of the original set.
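This random-assignment strategy can be sketched directly (hypothetical names; a sequential simulation of what each processor would do for its own items):

```python
import random

def random_partition(items, n_subsets):
    """Randomly assign each item to one of n_subsets. Sizes come out
    approximately even when the average subset size is at least
    logarithmic in the size of the original set."""
    subsets = [[] for _ in range(n_subsets)]
    for item in items:
        subsets[random.randrange(n_subsets)].append(item)
    return subsets
```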
Parallel Pointer Techniques
• Sequential techniques for manipulating pointer-based structures such as lists, trees, and graphs can often be replaced by parallel techniques with roughly the same power.

 Pointer jumping: One of the oldest parallel pointer techniques is pointer jumping.

• This technique can be applied to either lists or trees.

• In each pointer jumping step, each node in parallel replaces its pointer with that of its
successor (or parent).

• For example, one way to label each node of an n-node list (or tree) with the label of
the last node (or root) is to use pointer jumping.

• After at most ⌈log n⌉ steps, every node points to the same node: the end of the list (or the root of the tree).
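Pointer jumping on a list can be sketched as follows (a sequential simulation with hypothetical names; on a real parallel machine all nodes jump simultaneously in each step):

```python
import math

def pointer_jumping(successor):
    """Each node repeatedly replaces its pointer with its successor's
    pointer. The last node points to itself, so after at most
    ceil(log2 n) steps every node points at the last node."""
    ptr = dict(successor)
    n = len(ptr)
    for _ in range(max(1, math.ceil(math.log2(n)))):
        # One parallel step: all nodes jump at once, so the new
        # pointers are computed from a snapshot of the old ones.
        ptr = {v: ptr[ptr[v]] for v in ptr}
    return ptr

# A 5-node list 0 -> 1 -> 2 -> 3 -> 4, where 4 points to itself.
succ = {0: 1, 1: 2, 2: 3, 3: 4, 4: 4}
```

Each step doubles the distance a pointer covers, which is why logarithmically many steps suffice.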
 Euler tour: An Euler tour of a directed graph is a path through the graph in which
every edge is traversed exactly once.

• In an undirected graph each edge is typically replaced with two oppositely directed
edges.

• The Euler tour of an undirected tree follows the perimeter of the tree visiting each
edge twice, once on the way down and once on the way up.

• By keeping a linked structure that represents the Euler tour of a tree it is possible to
compute many functions on the tree, such as the size of each subtree.

• This technique uses linear work, and parallel depth that is independent of the depth of
the tree.

• The Euler tour can often be used to replace a standard traversal of a tree, such as a depth-first traversal.
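The subtree-size computation mentioned above can be sketched as follows (a sequential simulation with hypothetical names; a parallel implementation would keep the tour as a linked structure and use prefix sums over it). The size of the subtree rooted at v is recovered from the positions of the down-edge into v and the up-edge out of v.

```python
def euler_tour(tree, root):
    """Build the Euler tour of an undirected tree: each edge becomes
    a down-edge and an up-edge, each traversed exactly once."""
    tour = []
    def visit(v, parent):
        for u in tree[v]:
            if u != parent:
                tour.append((v, u))   # down-edge: entering subtree of u
                visit(u, v)
                tour.append((u, v))   # up-edge: leaving subtree of u
    visit(root, None)
    return tour

def subtree_sizes(tree, root):
    """Subtree size of v = (edges between the down-edge (p, v) and
    the up-edge (v, p), inclusive) / 2, counting v's whole subtree."""
    tour = euler_tour(tree, root)
    pos = {e: i for i, e in enumerate(tour)}
    sizes = {root: len(tree)}
    for (p, v) in tour:
        if pos.get((v, p), -1) > pos[(p, v)]:
            sizes[v] = (pos[(v, p)] - pos[(p, v)] + 1) // 2
    return sizes
```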
 Graph contraction: It is an operation in which a graph is reduced in size while
maintaining some of its original structure.

• Typically, after performing a graph contraction operation, the problem is solved recursively on the contracted graph.

• The solution to the problem on the contracted graph is then used to form the final
solution.

• For example, one way to partition a graph into its connected components is to first
contract the graph by merging some of the vertices with neighboring vertices, then
find the connected components of the contracted graph, and finally undo the
contraction operation.

• Many problems can be solved by contracting trees, in which case the technique is
called tree contraction.
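The connected-components example can be sketched as follows. This is a sequential simulation of the merge step only, using hypothetical names: vertices are repeatedly merged with neighbors by unifying their component labels, standing in for an explicit contracted graph and the recursive solve/undo phases.

```python
def connected_components(vertices, edges):
    """Label vertices by component via repeated merging: contracting
    an edge (u, v) corresponds to giving both endpoints' components
    the same label. Repeat until no edge joins two labels."""
    label = {v: v for v in vertices}
    changed = True
    while changed:
        changed = False
        for (u, v) in edges:
            ru, rv = label[u], label[v]
            if ru != rv:
                # "Contract" edge (u, v): merge the two components,
                # keeping the smaller label as the representative.
                lo, hi = min(ru, rv), max(ru, rv)
                for w in vertices:
                    if label[w] == hi:
                        label[w] = lo
                changed = True
    return label
```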
 Ear decomposition: An ear decomposition of a graph is a partition of its edges into
an ordered collection of paths.

• The first path is a cycle, and the others are called ears.

• The end-points of each ear are anchored on previous paths.

• Once an ear decomposition of a graph is found, it is not difficult to determine whether two edges lie on a common cycle.

• This information can be used in algorithms for determining biconnectivity, triconnectivity, 4-connectivity, and planarity.

• An ear decomposition can be found in parallel using linear work and logarithmic
depth, independent of the structure of the graph.
Thank You!!!
