
(1) Write a C program using OpenMP to estimate the value of PI (use minimum two methods).

CODE:

FIRST METHOD:

#include <stdio.h>
#include <stdlib.h>
#include <math.h>
#include <omp.h>
#define PI 3.1415926535897932

int main()
{
    int noofintervals, i;
    double x, h, sum;

    printf("Enter number of intervals\n");
    scanf("%d", &noofintervals);

    if (noofintervals <= 0) {
        printf("Number of intervals should be a positive integer\n");
        exit(1);
    }

    sum = 0.0;
    h = 1.0 / noofintervals;

    /* Midpoint rule for the integral of 4/(1+x^2) over [0,1], which equals PI.
       The reduction clause accumulates the per-thread sums safely, avoiding a
       data race on a shared accumulator. */
    #pragma omp parallel for private(x) reduction(+:sum)
    for (i = 1; i <= noofintervals; i++) {
        x = h * (i - 0.5);
        sum += 4.0 / (1.0 + x * x);
    }

    sum *= h;
    printf("The value of PI is \t%f \nerror is \t%1.16f\n", sum, fabs(sum - PI));
    return 0;
}
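A minimal sketch of compiling and running this program on Linux with GCC (the file name pi.c is an assumption; the same -fopenmp flag applies to the second method below):

gcc -fopenmp pi.c -o pi -lm
./pi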

SECOND METHOD:

#include <stdio.h>
#include <stdlib.h>
#include <math.h>
#include <omp.h>
#define MAX_THREADS 4

static long num_steps = 100000000;
double step;

int main()
{
    int i, j;
    double pi, full_sum = 0.0;
    double start_time, run_time;

    step = 1.0 / (double)num_steps;

    /* Repeat the computation with 1..MAX_THREADS threads to observe scaling. */
    for (j = 1; j <= MAX_THREADS; j++) {
        omp_set_num_threads(j);
        full_sum = 0.0;
        start_time = omp_get_wtime();

        #pragma omp parallel private(i)
        {
            int id = omp_get_thread_num();
            int numthreads = omp_get_num_threads();
            double x;
            double partial_sum = 0.0;

            #pragma omp single
            printf(" num_threads = %d", numthreads);

            /* Each thread sums every numthreads-th rectangle (cyclic distribution). */
            for (i = id; i < num_steps; i += numthreads) {
                x = (i + 0.5) * step;
                partial_sum += 4.0 / (1.0 + x * x);
            }

            /* Combine the per-thread partial sums one thread at a time. */
            #pragma omp critical
            full_sum += partial_sum;
        }

        pi = step * full_sum;
        run_time = omp_get_wtime() - start_time;
        printf("\n pi is %f in %f seconds with %d threads\n", pi, run_time, j);
    }
    return 0;
}

RESULT:

METHOD-1:
METHOD-2:

(2) Compare and contrast OpenMP with MPI. Mention the features of MPI.

MPI vs OpenMP:

1. MPI: Available from different vendors and can be compiled on the desired platform with the desired compiler; one can use any MPI implementation, e.g. MPICH or OpenMPI.
   OpenMP: Hooked into the compiler, so the GNU compiler and the Intel compiler each have their own specific implementation. The user is at liberty to change the compiler, but not the OpenMP implementation.

2. MPI: Supports C, C++ and FORTRAN.
   OpenMP: Supports C, C++ and FORTRAN.

3. MPI: OpenMPI, one of the MPI implementations, provides provisional support for Java.
   OpenMP: A few projects try to replicate OpenMP for Java.

4. MPI: Targets both distributed-memory and shared-memory systems.
   OpenMP: Targets only shared-memory systems.

5. MPI: Based on both process- and thread-based approaches. (Earlier it was mainly process-based parallelism, but with MPI-2 and MPI-3 thread-based parallelism is available too; a process can contain more than one thread and call MPI subroutines as desired.)
   OpenMP: Only thread-based parallelism.

6. MPI: The overhead of creating processes is paid only once.
   OpenMP: Depending on the implementation, threads may be created and joined for each particular task, which adds overhead.

7. MPI: There are overheads associated with transferring a message from one process to another.
   OpenMP: No such overheads, since threads can share variables.

8. MPI: A process has private variables only, no shared variables.
   OpenMP: Threads have both private and shared variables.

9. MPI: No data races, provided no threads are used within a process.
   OpenMP: Data races are inherent in the OpenMP model.

10. MPI: Compilation requires adding the header file #include "mpi.h" and using the compiler wrapper, e.g. on Linux: mpic++ mpi.cxx -o mpiExe. (The user needs to set the PATH and LD_LIBRARY_PATH environment variables to the MPI installation folder or binaries, e.g. where OpenMPI is installed.)
    OpenMP: Just add omp.h, then the code compiles directly with -fopenmp in a Linux environment: g++ -fopenmp openmp.cxx -o openmpExe.

Features of MPI (a minimal sketch follows the list):
• Supports both point-to-point communication and collective communication
• Provides support for the design of safe, modular parallel software libraries
• General or derived datatypes
• Provides a rich set of collective communication routines
• Blocking and nonblocking versions of the routines are provided
• Buffering is available
• Message ordering (messages between the same pair of processes are non-overtaking)
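A minimal sketch illustrating two of these features, point-to-point communication (MPI_Send/MPI_Recv) and a collective routine (MPI_Reduce); the buffer names and values here are illustrative assumptions, not part of the assignment:

#include <stdio.h>
#include <mpi.h>

int main(int argc, char **argv)
{
    int rank, size, value, total;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    /* Point-to-point: rank 0 sends one int to rank 1. */
    if (rank == 0 && size > 1) {
        value = 42;
        MPI_Send(&value, 1, MPI_INT, 1, 0, MPI_COMM_WORLD);
    } else if (rank == 1) {
        MPI_Recv(&value, 1, MPI_INT, 0, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
        printf("Rank 1 received %d from rank 0\n", value);
    }

    /* Collective: sum one number from each rank onto rank 0. */
    value = rank + 1;
    MPI_Reduce(&value, &total, 1, MPI_INT, MPI_SUM, 0, MPI_COMM_WORLD);
    if (rank == 0)
        printf("Sum over all ranks: %d\n", total);

    MPI_Finalize();
    return 0;
}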

(3) Write a C program using MPI to print hello world and to print environmental information.

CODE:
#include <stdio.h>
#include <mpi.h>

int main(int argc, char **argv)
{
    int ierr, num_procs, my_id;

    ierr = MPI_Init(&argc, &argv);

    /* Find out my process ID, and how many processes were started. */
    ierr = MPI_Comm_rank(MPI_COMM_WORLD, &my_id);
    ierr = MPI_Comm_size(MPI_COMM_WORLD, &num_procs);

    printf("Hello world! I'm process %i out of %i processes\n", my_id, num_procs);

    ierr = MPI_Finalize();
    return 0;
}
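The program above prints only the rank and the process count. A minimal sketch of printing further environmental information with the standard MPI_Get_processor_name and MPI_Get_version calls (the output format is an illustrative choice):

#include <stdio.h>
#include <mpi.h>

int main(int argc, char **argv)
{
    int my_id, name_len, version, subversion;
    char proc_name[MPI_MAX_PROCESSOR_NAME];

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &my_id);

    /* Name of the node this process is running on. */
    MPI_Get_processor_name(proc_name, &name_len);

    /* Version of the MPI standard the library implements. */
    MPI_Get_version(&version, &subversion);

    printf("Process %d runs on %s (MPI %d.%d)\n",
           my_id, proc_name, version, subversion);

    MPI_Finalize();
    return 0;
}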

(4) Write a C program using MPI to perform arithmetic operations and find the maximum and minimum among three numbers.

CODE:
#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    int node, a, b, c, max, min;

    a = 10;
    b = 20;
    c = 30;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &node);

    /* Each rank performs a different operation on its own copies of a, b and c
       and prints its own result; variables are not shared between MPI processes,
       so each result is only valid in the rank that computed it. */
    if (node == 0) {
        /* do some work as process 0 */
        printf("The addition of %d and %d is %d\n", a, b, a + b);
    } else if (node == 1) {
        /* do some work as process 1 */
        printf("The subtraction of %d and %d is %d\n", a, b, a - b);
    } else if (node == 2) {
        /* do some work as process 2 */
        printf("The multiplication of %d and %d is %d\n", a, b, a * b);
    } else if (node == 3) {
        printf("The modulo of %d and %d is %d\n", a, b, a % b);
    } else if (node == 4) {
        /* find the maximum among the three numbers */
        max = a;
        if (b > max) max = b;
        if (c > max) max = c;
        printf("The maximum of %d, %d and %d is %d\n", a, b, c, max);
    } else if (node == 5) {
        /* find the minimum among the three numbers */
        min = a;
        if (b < min) min = b;
        if (c < min) min = c;
        printf("The minimum of %d, %d and %d is %d\n", a, b, c, min);
    }

    MPI_Finalize();
    return 0;
}
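A sketch of compiling and running this program on Linux with an installed MPI implementation (the file name arith.c is an assumption); at least six processes are needed so that every branch executes:

mpicc arith.c -o arith
mpirun -np 6 ./arith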
