MPI under Condor

This information pertains mainly to local (UW astro department) Condor usage. You can take advantage of the gigabit network in the undergraduate computer lab to run your simulations, and you can use the existing Condor system to do it. So if you ran pkdgrav/gasoline through Condor before, you can now do it faster. There are a few things you need to do first. The current stable release of Condor (6.6) does not use LAM (the next version will); instead, it uses MPICH. So if you compiled pkdgrav to run in parallel before, you likely compiled it with the LAM MPI compilers, and you will need to recompile.

Compiling for Condor-MPI usage
The good news is, you don't really have to do much to make it work with Condor. Set up your pkdgrav/gasoline Makefile the way you normally do. Then, at the prompt, put the directory of the MPICH compiler at the front of your path. The build that works with Condor is in one of my local directories (you can compile it for yourself if you want to for some reason):
setenv PATH /astro/users/roskar/nbody/mpich-1.2.4/bin:$PATH

for csh, or
export PATH=/astro/users/roskar/nbody/mpich-1.2.4/bin:$PATH

for bash. Make sure that the above directory is the first thing in your path by typing echo $PATH. Now, to make sure that everything is built cleanly, first type make spotless to remove any object files made with other compilers, and also delete the file ../mdl/mpi/mdl.o.

Pkdgrav compiled with the Intel compiler icc seems to run about 1/3 faster than binaries compiled with standard compilers. You need to do a few things to use the Intel compiler. First, run the script which will set your path correctly, by doing source /net/intel/compiler81/bin/iccvars.csh for csh, or iccvars.sh for bash. Next, you need to alter some lines in the "SP1/2 defines" section of your Makefile. Make sure they read as follows:
SPX_CFLAGS = -O3 -xK -ipo -I$(SPX_MDL) $(CODEDEF)
SPX_LD_FLAGS = -L/astro/users/roskar/nbody/mpich-1.2.4/lib
SPX_LIBMDL = $(SPX_MDL)/mdl.o -lm
SPX_MDL_CFLAGS = -O3 -xK -ipo
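Putting the build steps in this section together (assuming bash and the local paths above), the whole sequence is roughly:

```shell
# put the condor-compatible mpich first in the path
export PATH=/astro/users/roskar/nbody/mpich-1.2.4/bin:$PATH
# set up the Intel compiler environment
source /net/intel/compiler81/bin/iccvars.sh
# tell mpicc to invoke icc for compiling and linking
export MPICH_CC=icc
export MPICH_CLINKER=icc
# clean out objects built with other compilers, then build the MPI binary
make spotless
rm -f ../mdl/mpi/mdl.o
make mpi
```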

You will use the mpicc compiler. This compiler makes sure the proper MPI libraries are used, but invokes another C compiler and linker to do the actual work. To make it use icc, you need to set two more environment variables:

setenv MPICH_CC icc
setenv MPICH_CLINKER icc

Now you're ready to go: type make mpi. The new pkdgrav/gasoline executable is condor-mpi-compatible and built with the Intel compiler.

Setting up your condor job
Set up your condor job like you normally would (see the Condor tutorial), but specify

universe = mpi
machine_count = 4

The universe keyword just tells condor that you want to run in parallel, and machine_count (*gasp*) tells it how many computers you want. Don't worry about telling it which machines to run on: only the astrolab computers are configured to accept MPI jobs. Set up the rest of your condor job configuration just like you would for a regular condor job, except make sure that the binary you specify is the newly-compiled MPI binary.

The last thing you need to know is that you can only submit MPI jobs from the server carrion.astro.washington.edu, so you'll need to log in to carrion and then submit your job. You may also notice that it takes a while for your jobs to start; this is because the condor negotiator needs to reserve several machines for you to use, and this process can take a few negotiation cycles.

A few caveats
Your jobs will get kicked off if someone starts using any of the machines you are running on, just as if they were serial jobs. During the day, jobs running on 4 machines sometimes seem to get kicked off within an hour, so if you try running on all 18 computers during the weekday, your job likely won't run for very long, because someone is bound to sit down at one of the keyboards very soon. Your jobs can also get preempted by other users with a better priority. So I'm still a bit skeptical about the usefulness of running condor-mpi during the weekdays. In any case, checkpoint often; I've found that setting my timesteps such that checkpoints are written once every 30 minutes works OK.
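Putting the submit-description pieces above together, a minimal MPI submit file might look like the following sketch (the executable and log file names are hypothetical placeholders; the keywords themselves are standard Condor submit-description commands):

```
universe      = mpi
machine_count = 4
executable    = pkdgrav.mpi   # hypothetical name of your newly-compiled MPI binary
getenv        = true
output        = run.out       # hypothetical output, error, and log file names
error         = run.err
log           = run.log
queue
```

As noted above, remember to run condor_submit on this file from carrion, not from your own workstation.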

Another problem!!! It appears that Condor-MPI jobs do not copy the environment properly, even if getenv = true is set. You may need to set some environment variables by hand, especially if you are using pkdgrav, since the PKDGRAV_CHECKPOINT_FDL variable is required for proper checkpointing. This is done by using environment = variable1=value1;variable2=value2. Remember that Condor has a limit of 10240 characters for the environment variables, so make sure that you are not accidentally removing old ones by adding new ones.

Yet another problem, which to me appears to be a fundamental bug in the way condor handles MPI resource allocation, is that if you run multiple MPI jobs they eventually appear to start competing with one another. The DedicatedScheduler user is the "user" that actually gets taxed for allocating resources. This means that if your priority is worse than that of the DedicatedScheduler, it could kick your job off to run your other job! I've seen it get caught up in somewhat of a stalemate a few times, with the DedicatedScheduler just hanging on to resources without ever having jobs run on them for very long, which is an enormous waste of compute resources. I "solve" this by only running one job at a time. This certainly destroys some of the Condor distributed computing model, which makes you dream of running many many jobs at once, but I have not been able to come up with a better alternative.

Finally, it doesn't make sense to simply run your job on the maximum number of possible machines: eventually, the cost of communication between the nodes will allow for only a marginal improvement in performance. Run your simulation for just a few steps using different numbers of processors to see what number of machines is sensible to use. This way you also won't hog the entire undergrad lab with just one job.

Enjoy the improved performance of your condor jobs!
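For example, a hand-set environment line in the submit file might look like this (the checkpoint path is a hypothetical placeholder; the semicolon-separated name=value syntax is the old-style Condor environment format):

```
environment = PKDGRAV_CHECKPOINT_FDL=/astro/users/you/run/checkpoint.fdl;MPICH_CC=icc
```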