
Reg No: 19BCE2028

Name: Pratham Shah


Slot: L47+L48

EX 5 (MPI-II)

SCENARIO – I

Implement an MPI program to demonstrate a simple MPI broadcast.

MPI (Message Passing Interface) is a message-passing system that enables processes running on distributed-memory architectures to communicate by sending and receiving messages.

Use the MPI_Bcast(buffer, count, MPI_INT, source, MPI_COMM_WORLD) routine.

ALGORITHM:

The subroutine MPI_Bcast sends a message from one process to all processes in a
communicator.

int MPI_Bcast(void *data_to_be_sent, int send_count, MPI_Datatype send_type,
              int broadcasting_process_ID, MPI_Comm comm);

When processes are ready to share information with other processes as part of a
broadcast, ALL of them must execute a call to MPI_Bcast; there is no separate MPI
call to receive a broadcast.

SOURCE CODE:

OUTPUT SCREEN SHOT:

RESULTS: The broadcast functionality of MPI was implemented.

SCENARIO – II

Implement an MPI program to demonstrate a simple Send and Receive.

MPI (Message Passing Interface) is a message-passing system that enables processes running on distributed-memory architectures to communicate by sending and receiving messages.

Use the following methods:

MPI_Send(&buffer,count,MPI_INT,destination,tag,MPI_COMM_WORLD);
MPI_Recv(&buffer,count,MPI_INT,source,tag,MPI_COMM_WORLD,&status);

ALGORITHM: MPI’s send and receive calls operate in the following manner. First,
process A decides a message needs to be sent to process B. Process A then packs all
of its necessary data into a buffer for process B. These buffers are often referred to as
envelopes, since the data is packed into a single message before transmission
(similar to how letters are packed into envelopes before being handed to the post
office). After the data is packed into a buffer, the communication device (which is
often a network) is responsible for routing the message to the proper location. The
destination of the message is identified by the receiving process’s rank.

SOURCE CODE:

OUTPUT:

RESULTS: Simple send and receive functions are demonstrated.

SCENARIO – III

Implement an MPI program to calculate the size of an incoming message.

MPI (Message Passing Interface) is a message-passing system that enables processes running on distributed-memory architectures to communicate by sending and receiving messages.

ALGORITHM:

1) The MPI_Status structure has no predefined field holding the length of the message.
Instead, the length must be obtained with MPI_Get_count.

2) In MPI_Get_count, the user passes the MPI_Status structure and the datatype of the
message, and the count is returned. The count variable is the total number of datatype
elements that were received.

3) MPI_Recv is not guaranteed to receive the full number of elements specified in the
function call. Instead, it receives however many elements were actually sent (and
returns an error if more elements were sent than the receive buffer can hold).
The MPI_Get_count function is used to determine the actual receive amount.

SOURCE CODE:

EXECUTION:

RESULTS: Using MPI_Get_count, the received data amount was calculated.
