Ekaterina Elts
Scientific adviser: Assoc. Prof. A.V. Komolkin
Introduction
• Computational Grand Challenge problems
• Parallel processing – the method of having many small tasks to solve one large problem
• Two major trends:
  MPPs (massively parallel processors) – but they cost $$$!
  distributed computing
Introduction
• The hottest trend today is PC clusters running Linux
• Many universities and companies can afford 16 to 100 nodes
• PVM and MPI are the most widely used tools for parallel programming
Contents
• Parallel Programming
    A Parallel Machine Model: the Cluster
    A Parallel Programming Model: the Message Passing Programming Paradigm
• PVM and MPI
    Background
    Definition
    A Comparison of Features
• Conclusion
A Sequential Machine Model
A central processing unit (CPU) executes a program that performs a sequence of read and write operations on an attached memory.
(Figure: the von Neumann computer)
The Cluster
A node can communicate with the other nodes by sending and receiving messages over an interconnection network.
(Figure: von Neumann computers as nodes, joined by an interconnect)
MIMD – Multiple Instruction Stream, Multiple Data Stream
A Parallel Programming Model
(Figure: a sequential (serial) algorithm reads the input, computes, and prints S in a single process; a parallel algorithm splits the input between processes, combines the partial results as S = s1 + s2, and prints S)
A Parallel Programming Model
• Message Passing
(Figure: a master process sends the data a, b to slaves 1–3; each slave i computes a partial sum si from its portion ai, bi and returns it; the master combines the results as S = s1 + s2 + s3)
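The master/slave summation shown on this slide can be sketched in Python, with multiprocessing queues standing in for PVM/MPI messages; the chunking scheme and all names here are illustrative, not part of either library's API:

```python
from multiprocessing import Process, Queue

def slave(task_q, result_q):
    """Receive one chunk of a and b, compute a partial dot
    product s_i, and send it back to the master."""
    a_chunk, b_chunk = task_q.get()
    result_q.put(sum(x * y for x, y in zip(a_chunk, b_chunk)))

def master(a, b, n_slaves=3):
    """Split the data, send one chunk to each slave, then
    combine the partial sums into S = s1 + s2 + ... + sn."""
    task_q, result_q = Queue(), Queue()
    workers = [Process(target=slave, args=(task_q, result_q))
               for _ in range(n_slaves)]
    for w in workers:
        w.start()
    chunk = (len(a) + n_slaves - 1) // n_slaves
    for i in range(n_slaves):
        lo, hi = i * chunk, (i + 1) * chunk
        task_q.put((a[lo:hi], b[lo:hi]))
    S = sum(result_q.get() for _ in range(n_slaves))
    for w in workers:
        w.join()
    return S

if __name__ == "__main__":
    print(master(list(range(1, 7)), [1] * 6))
```

In real PVM or MPI code the queues would be replaced by explicit sends and receives between separate processes, possibly on different hosts.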
PVM and MPI
Background

PVM: Development started in summer 1989 at Oak Ridge National Laboratory (ORNL). PVM was the effort of a single research group, which allowed it great flexibility in the design of the system.

MPI: Development started in April 1992. MPI was designed by the MPI Forum (a diverse collection of implementors, library writers, and end users) quite independently of any specific implementation. Two versions of the standard exist: MPI-1 and MPI-2.
What is Not Different?
• Computational speed
• Machine load (dynamic)
• Network load (dynamic)
Heterogeneity: MPI
• Different datatypes can be encapsulated in
a single derived type, thereby allowing
communication of heterogeneous
messages. In addition, data can be sent
from one architecture to another with data
conversion in heterogeneous networks
(big-endian, little-endian).
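The big-endian/little-endian problem can be seen in miniature with Python's struct module; this is an illustrative sketch of the conversion that MPI performs for the programmer, not MPI code:

```python
import struct

value = 1  # the 32-bit integer 0x00000001

big = struct.pack(">i", value)     # big-endian byte layout
little = struct.pack("<i", value)  # little-endian byte layout
# Same number, different bytes on the wire:
#   big    == b'\x00\x00\x00\x01'
#   little == b'\x01\x00\x00\x00'

# A receiver that ignores the sender's byte order misreads the data:
misread = struct.unpack("<i", big)[0]   # 16777216 (2**24), not 1

# Agreeing on a wire order (big-endian here) restores the value:
correct = struct.unpack(">i", big)[0]   # 1
```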
Heterogeneity: PVM
• The PVM system supports heterogeneity
in terms of machines, networks, and
applications. With regard to message
passing, PVM permits messages
containing more than one datatype to be
exchanged between machines having
different data representations.
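The idea of a message containing more than one datatype can be sketched in Python, with struct as a stand-in for PVM's packing routines; the function names here are hypothetical, and a fixed big-endian wire order plays the role of PVM's data conversion:

```python
import struct

def pack_message(i, d, s):
    """Pack an int, a double, and a string into one buffer, in
    network (big-endian) byte order so machines with different
    native representations agree on the layout."""
    raw = s.encode("utf-8")
    return struct.pack(f">idI{len(raw)}s", i, d, len(raw), raw)

def unpack_message(buf):
    """Recover the three values on the receiving side."""
    header = ">idI"
    i, d, n = struct.unpack_from(header, buf)
    (raw,) = struct.unpack_from(f">{n}s", buf, struct.calcsize(header))
    return i, d, raw.decode("utf-8")

msg = pack_message(42, 3.5, "node-a")
assert unpack_message(msg) == (42, 3.5, "node-a")
```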
Process control
- Ability to start and stop tasks, to find out which
tasks are running, and possibly where they are
running.
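These operations can be mimicked with Python's multiprocessing module; this is a conceptual stand-in for PVM/MPI process control (cf. pvm_spawn), and all names here are illustrative:

```python
from multiprocessing import Process
import time

def task():
    # A long-running worker, standing in for a spawned task.
    time.sleep(5)

def demo_process_control():
    p = Process(target=task)
    p.start()                # start a task
    alive = p.is_alive()     # find out whether it is running
    pid = p.pid              # ... and where (its process id)
    p.terminate()            # stop it
    p.join()
    return alive, pid is not None, p.is_alive()

if __name__ == "__main__":
    print(demo_process_control())
```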
A synchronous communication does not complete until the message has been received. An asynchronous communication completes as soon as the message is on its way.
Non-blocking operations
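The difference between a blocking and a non-blocking receive can be illustrated with a thread and a queue in Python; this is a conceptual sketch, not the PVM or MPI API:

```python
import queue
import threading
import time

q = queue.Queue()

def sender():
    time.sleep(0.1)      # the message takes a while to arrive
    q.put("hello")

threading.Thread(target=sender).start()

# Blocking receive: get() does not return until the message
# has actually been received.
msg = q.get()

# Non-blocking receive: get_nowait() returns immediately; if no
# message is waiting it raises queue.Empty instead of blocking.
try:
    q.get_nowait()
    pending = True
except queue.Empty:
    pending = False
```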
Broadcast
A broadcast sends a message to a number of recipients.
Barrier
A barrier operation synchronises a number of processors.
Reduction operations
Reduction operations reduce data from a number of processors to a single item.
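The collective operations above can be sketched in plain Python; this is a conceptual, single-process model in which the four "processors" are just list entries:

```python
from functools import reduce
import operator

# One value held by each of four hypothetical processors.
local_values = [3, 1, 4, 1]

# Reduction: combine one item per processor into a single result
# on the root (a sum here, as MPI_Reduce would with MPI_SUM).
total = reduce(operator.add, local_values)

# Broadcast: the root's single value is delivered to every
# processor (cf. MPI_Bcast).
received = [total for _ in local_values]
```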
Fault Tolerance: MPI
• The MPI standard is based on a static process model.
• If a member of a group fails for some reason, the specification mandates that, rather than continuing (which would lead to unknown results in a doomed application), the group is invalidated and the application is halted in a clean manner.
• In short: if something fails, everything does.
Fault Tolerance: MPI
(Figure: a failed node halts the whole application)
Fault Tolerance: PVM
(Figure: the virtual machine keeps running despite a failed node)
A Comparison of Features

PVM                                        MPI
Virtual machine concept                    No such abstraction
Simple message passing                     Rich message support
Communication topology unspecified         Supports logical communication topologies
Interoperates across host                  Some implementations do not interoperate
architecture boundaries                    across architectural boundaries
Portability over performance               Performance over flexibility
Resource and process control               Primarily concerned with messaging
Robust fault tolerance                     More susceptible to faults
Conclusion
Each API has its own unique strengths.