
Introduction to Parallel Computing

What is parallel computing?


CS 480 – II
Parallel and Scientific Computing
Parallel Computing Web Site
• http://www.llnl.gov/computing/tutorials/parallel_comp/

Message Passing Interface Web Sites


• http://www-unix.mcs.anl.gov/mpi/tutorial/
• http://www.lam-mpi.org/tutorials/
• http://www.nas.nasa.gov/Groups/SciCon/Tutorials/MPIintro/
The Computer – Von Neumann
• The CPU
• Memory
• Communication
• Architecture
• Hardware
Programming the Computer
• Serial Instructions
• Threads
• Parallel Computing
Parallel Computer Architecture
• SISD - Single Instruction Single Data
• SIMD - Single Instruction Multiple Data
• MISD - Multiple Instruction Single Data
• MIMD - Multiple Instruction Multiple Data
Parallel Computing Terminology
• Task: Serial, Parallel
• Execution: Serial, Parallel
• Memory: Shared, Distributed
• Communication, Synchronization
• Granularity: Ratio of Computation to Communication
• Speedup: ratio of serial execution time to parallel execution time
• Scalability
• Latency
• Bandwidth
• Beowulf
Algorithms
• Fibonacci Sequence: F(n+1) = F(n) + F(n−1)
• Average: S = Σ f(i), i = 0 … I;  average = S / I

• Pixel Images
• Scientific Computing
X2[j] = A · (X1[j+1] − 2·X1[j] + X1[j−1]) + 2·X1[j] − X0[j]
Amdahl’s Law
• Maximum speedup = 1/(1 − P), where P is the parallel fraction of the code
• Speedup = 1/(P/N + S), where N is the number of processors and S = 1 − P is the serial fraction
• Example
N      P = 0.50   P = 0.99
10     1.82       9.17
100    1.98       50.25
1000   1.99       90.99
MPI
• Software: Fortran 90, HPF, C
• Standard
• Simple to use
• Six Commands
1) MPI_Init (Initialize MPI)
2) MPI_Comm_size (Determine the number of processors)
3) MPI_Comm_rank (Which process am I?)
4) MPI_Send (Send a message)
5) MPI_Recv (Receive a message)
6) MPI_Finalize (Sayonara)
Sample Program
• Determine the average of n numbers stored in
array f.
• Assignment 1
(1) Go to websites and look at sample MPI Codes
(2) Convert the code to C
(3) Look up the trapezoidal rule and write a parallel algorithm to compute
∫₀¹ 4/(1 + x²) dx = 4·arctan(x) |₀¹ = 4·arctan(1) = π
