
High Performance Computing Using Parallel Processing
Rahul R., Abhishek Singh, Rahul Jain, Hardik Shah
Sir M Visvesvaraya Institute of Technology

Abstract – Parallel processing promises a new era of high-performance computing, offering very high-speed processing of data in the least amount of time. By processing data in parallel and assigning tasks individually to n processors, the speed of execution increases with the value of n. We investigate optimum ways of implementing parallel processing by introducing the usage of distributed network systems.

I. INTRODUCTION

Parallel processing, the method of having many small tasks solve one large problem, has emerged as a key enabling technology in modern computing. In recent years the number of transistors in microprocessors and other hardware components has been increasing drastically, but on the other hand the cost of building such "high-end" machines has also increased. Thus parallel processing is widely adopted both for high-performance scientific computing and for more "general-purpose" applications, which demand higher performance, lower cost, and sustained productivity. There are many ways of achieving parallel processing; the most economical and feasible is implemented through "Distributed Computing". Traditional computing systems follow a single instruction, single data format. In parallel computing systems, a multiple instruction, multiple data model is followed, where "p" processors execute individual instructions. The given problem is broken into discrete parts that can be solved concurrently. The concurrent parts of the program are broken into instructions, and the instructions from each part are executed simultaneously on individual computers.

II. MULTIPLE INSTRUCTION MULTIPLE DATA

Every processor executes a different instruction stream. The execution can be made synchronous or asynchronous depending upon the message-passing model being used. Since synchronizing the execution has its own defects and challenges, it is prudent to design for asynchronous instruction streams.
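To make the MIMD idea concrete, the sketch below (added here for illustration, not taken from the original paper) uses Python's multiprocessing module: each worker process executes a different instruction stream on its own view of the data, and the processes run asynchronously, meeting only when the parent collects their results. The task functions and the data are hypothetical.

from multiprocessing import Process, Queue

# Hypothetical, independent tasks: each worker executes a different
# instruction stream on its own view of the data (the MIMD idea).
def sum_part(data, out):
    out.put(("sum", sum(data)))

def max_part(data, out):
    out.put(("max", max(data)))

def count_evens(data, out):
    out.put(("evens", sum(1 for x in data if x % 2 == 0)))

if __name__ == "__main__":
    data = list(range(1_000_000))
    results = Queue()

    # Each process runs asynchronously; no coordination is needed while
    # they execute their different instruction streams.
    workers = [
        Process(target=sum_part, args=(data, results)),
        Process(target=max_part, args=(data, results)),
        Process(target=count_evens, args=(data, results)),
    ]
    for w in workers:
        w.start()

    # Collect one result per worker, then wait for all of them to finish.
    for _ in workers:
        print(results.get())
    for w in workers:
        w.join()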
III. PARALLELIZED EXECUTION OF PROGRAM

In serial programming, the operation to be performed on a data structure must be mentioned explicitly, along with the type of execution, such as iterative loops, recursion, etc. By breaking the program into concurrent parts, such repetition can be handled implicitly by assigning the tasks to "p" processors. Unlike serial execution, all iterations run in parallel, increasing the speed of execution drastically.
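As an illustration (this example is not from the original paper), the following Python sketch contrasts an explicit serial loop with the same iterations distributed implicitly over p worker processes via multiprocessing.Pool; the per-item function is hypothetical.

from multiprocessing import Pool, cpu_count

# Hypothetical per-item work; stands in for one iteration of a serial loop.
def process_item(x):
    return x * x

if __name__ == "__main__":
    items = list(range(16))

    # Serial execution: the repetition is written out explicitly as a loop.
    serial = [process_item(x) for x in items]

    # Parallel execution: the same iterations are distributed implicitly
    # over p worker processes by the pool.
    p = cpu_count()
    with Pool(processes=p) as pool:
        parallel = pool.map(process_item, items)

    assert serial == parallel
    print(parallel)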
According to Amdahl's Law, the potential program speedup is defined by the fraction of code (P) that can be parallelized:

speedup = 1 / (1 - P)

If all of the code is parallelized, P = 1 and the speedup is infinite (in theory). If 50% of the code can be parallelized, the maximum speedup = 2, meaning the code will run twice as fast.
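A small worked example (added for illustration) evaluates this bound for a few values of P; for P = 0.5 it reproduces the maximum speedup of 2 mentioned above.

def amdahl_max_speedup(P):
    # Upper bound on speedup when a fraction P of the code is parallelized
    # (Amdahl's Law with an unlimited number of processors).
    if P >= 1.0:
        return float("inf")
    return 1.0 / (1.0 - P)

for P in (0.5, 0.9, 0.95, 1.0):
    print(f"P = {P:.2f} -> maximum speedup = {amdahl_max_speedup(P):.1f}")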
