ASSIGNMENT 1
b. Message passing
Sending and receiving messages between two tasks:
Sending:
call pvm_initsend(): this clears the default send buffer and specifies the message encoding.
After initialization, the sending task must pack all of the data it wishes to send into the
send buffer. This is done with the pvm_pack() family of functions; pvm_packf() is a
printf-like member of this family for packing multiple types of data. Once the data have
been packed into the send buffer, the message is ready to be sent.
pvm_send(): info=pvm_send(tid, msgtag) will send the data in the sending buffer to the process
with the task id of tid. It tags the message with the integer value msgtag.
A message tag is useful for telling the receiving task what kind of data it is receiving. For
example, a message tag of 5 might mean add the numbers in the message, while a tag of 10
might mean multiply them. pvm_mcast() is a similar function. It does the same thing as
pvm_send(), except it takes an array of tids instead of just one. This is useful when you want
to send the same message to a set of tasks.
Receiving:
The receiving task calls pvm_recv() to receive a message. bufid=pvm_recv(tid, msgtag) will wait
for a message from task tid with tag msgtag to arrive, and will receive it when it does.
pvm_nrecv() can also be useful. This does a non-blocking receive: if there is a suitable message it is
received, but if there isn't, the task does not wait.
pvm_probe() can be helpful as well. This will tell if a message has arrived, but takes no further
action.
When a task has received a message, it must unpack the data from the receive buffer. pvm_unpack()
accomplishes this in the same manner that pvm_pack() packed the data in.
3. Discuss the relative merits and demerits of the various laws for measuring the speedup
performance of a parallel algorithm/system.
Amdahl’s Law states that the speedup of a parallel algorithm is effectively limited by
the number of operations which must be performed sequentially, i.e. its serial fraction.
What is speedup?
The speedup factor tells us the relative gain achieved by shifting the execution of a task
from a sequential computer to a parallel computer; note that performance does not increase
linearly with the number of processors.
Illustration:
Let us consider a problem, say P, which has to be solved using a parallel computer. According
to Amdahl’s law, there are mainly two types of operations; therefore, the problem will have
some sequential operations and some parallel operations. We already know that it requires
T(1) amount of time to execute the problem using a sequential machine and a sequential
algorithm. The time to compute the sequential operations is a fraction α (α ≤ 1) of the total
execution time T(1), and the time to compute the parallel operations is (1 − α) of T(1). On N
processors the parallel part is divided among the processors, so S(N) can be calculated as:
S(N) = T(1) / T(N) = T(1) / (α·T(1) + (1 − α)·T(1)/N)
Dividing the numerator and denominator by T(1):
S(N) = 1 / (α + (1 − α)/N)
One major shortcoming identified in Amdahl’s law: according to Amdahl’s law the
problem size is always fixed, and the number of sequential operations remains essentially the same.
Sun and Ni’s Law is a generalization of Amdahl’s Law as well as Gustafson’s Law. The
fundamental concept underlying Sun and Ni’s Law is to find the solution to a problem
of maximum size under a limited memory requirement. Nowadays, there are many
applications which are bounded by memory rather than by processing speed.
In a distributed-memory parallel computer, each processor has an independent small memory. In
order to solve a problem, the problem is normally divided into sub-problems which are distributed
to the various processors. It may be noted that the size of each sub-problem should be in proportion
to the size of the independent local memory available to its processor. The size of the problem
can then be increased further so that the available memory is fully utilized. This technique assists in
generating a more accurate solution, as the problem size has been increased.
Granularity: This typically refers to the ratio of the number of bytes received by a process to the number of
floating-point operations it performs. Increasing the granularity will speed up the application, but the trade-off is a
reduction in the available parallelism. It is divided into fine-grained and coarse-grained systems.
In a fine-grained system the parallel parts are relatively small, which means high communication overhead. In a
coarse-grained system the parallel parts are relatively large, which means more computation and less communication.
If granularity is too fine, it is possible that the overhead required for the communication and synchronization
between tasks takes longer than the computation. On the other hand, in a coarse-grained parallel system, a relatively
large amount of computation work is done per task. Such systems have a high computation-to-communication ratio and
imply more opportunity for performance increase.