MPI Course

Table of Contents

1.4.2. Minimizing Communication
1.4.3. Overlapping Communication and Computation
1.5. Self Test
1.6. Course Problem
2.1. The Message Passing Model
2.2. What is MPI?
2.3. Goals of MPI
2.4. Why (Not) Use MPI?
2.5. Basic Features of Message Passing Programs
2.6. A First Program: Hello World!
2.7. Point-to-Point Communications and Messages
2.8. Communication Modes and Completion Criteria
2.9. Blocking and Nonblocking Communication
2.10.1. Broadcast Operations
2.10.2. Gather and Scatter Operations
2.10.3. Reduction Operations
2.11. Compiling and Running MPI Programs
2.12. Self Test
2.13. Course Problem
3.1. Generic MPI Program
3.2. MPI Header Files
3.3. MPI Naming Conventions
3.4. MPI Routines and Return Values
3.5. MPI Handles
3.6. MPI Datatypes
3.7. Basic MPI Datatypes - C
3.8. Basic MPI Datatypes - Fortran
3.9. Special MPI Datatypes (C)
3.10. Initializing MPI
3.11.1. Getting Communicator Information: Rank
3.11.2. Getting Communicator Information: Size
3.12. Terminating MPI
3.13. Hello World! mk. 2 (C version)
3.14. Hello World! mk. 2 (Fortran version)
3.15. Sample Program Output
3.16. Self Test
3.17. Course Problem
4.1.1. Source and Destination
4.1.2. Messages
4.1.3. Sending and Receiving Messages
4.2.1. Sending a Message: MPI_SEND
4.2.2. Receiving a Message: MPI_RECV
4.2.3. Example: Send and Receive
4.2.4. What Happens at Runtime
4.2.5. Blocking and Completion
4.2.6.1. Avoiding Deadlock
4.2.6.2. Avoiding Deadlock (Sometimes but Not Always)
4.2.6.3. Probable Deadlock
4.3.1. Posting, Completion, and Request Handles
4.3.2. Posting Sends without Blocking
4.3.4.2. Completion: Testing
4.3.5. Advantages and Disadvantages
4.3.6. Send/Receive Example
4.4.1. Standard Mode Send
4.4.2. Synchronous, Ready Mode, and Buffered Send
4.4.3. Naming Conventions and Calling Sequences
4.5. Self Test
4.6. Course Problem
5.1. Multiple Messages
5.2. Copying Data into a Buffer
5.3. A Tempting Wrong Way to Extend Buffering
5.4. Buffering the Right Way
5.5. Packing
5.6. Using MPI Derived Types for User-Defined Types
5.7. Other Ways of Defining MPI Derived Types
5.8. Message Matching and Mismatching
5.9. Controlling the Extent of a Derived Type
5.10. Obtaining Information About Your Derived Types
6.2. Broadcast
6.3. Reduction
6.4. Gather
6.5. Scatter
6.6. Advanced Operations
6.7. Self Test
6.8. Course Problem
7.1. MPI_COMM_GROUP
7.2. MPI_GROUP_INCL
7.3. MPI_GROUP_EXCL
7.4. MPI_GROUP_RANK
7.5. MPI_GROUP_FREE
7.6. MPI_COMM_CREATE
7.7. MPI_COMM_SPLIT
7.8.1. Example Usage of Group Routines
7.8.2. Example Usage of MPI_COMM_SPLIT
7.9. Self Test
7.10. Course Problem
8.1.1. MPI_CART_CREATE
8.1.2. MPI_CART_COORDS
8.1.3. MPI_CART_RANK
8.1.4. MPI_CART_SUB
8.1.5. MPI_CARTDIM_GET
8.1.6. MPI_CART_GET
8.1.7. MPI_CART_SHIFT
8.2.1. Matrix Transposition
8.2.2.1. Fundamentals
8.2.2.2. Jacobi Scheme
8.2.2.3. Parallel Jacobi Scheme
8.2.2.4. SOR Scheme
8.2.2.5. Parallel SOR Scheme
8.2.2.6. Scalability Plot of SOR
8.3. Self Test
8.4. Course Problem
9.1. Introduction
9.2. Applications
9.3. Characteristics of Serial I/O
9.4. Characteristics of Parallel I/O
9.5. Introduction to MPI-2 Parallel I/O
9.6.1. Displacement
9.6.2. Elementary Datatype
9.6.3. Filetype
9.6.4. View
9.6.5. Combining Views of Multiple Processes
9.7. Initializing MPI-2 File I/O
9.8. Defining a View
9.9.1. MPI_FILE_READ_AT
9.9.2. MPI_FILE_READ
9.9.3. MPI_FILE_READ_SHARED
9.10.1. MPI_FILE_WRITE_AT
9.10.2. MPI_FILE_WRITE
9.10.3. MPI_FILE_WRITE_SHARED
9.11. Closing MPI-2 File I/O
9.12. MPI-2 File I/O - Example Problem 1, Individual File Pointers
10.1. Introduction to ScaLAPACK
10.2. Serial and Parallel Mathematical Libraries
10.3. How to Use a ScaLAPACK Routine: Step-by-Step
10.4. Processor Grid Creation
10.5. ScaLAPACK Data Distribution
10.6. Case Study
10.7. Self Test
10.8. Course Problem
11.1. Course Problem
12.1.1. Performance Metrics
12.1.2. Simple Models
12.2. Developing Better Models
12.3. Evaluating Implementations
12.4. Performance Tools
12.5.1. Serial Profiling Tools
12.5.2. Parallel Profiling Tools
12.6. Self Test
12.7. Course Problem
13.1. Recursive Halving and Doubling
13.2.1. Example 1: Matrix-vector Multiplication
13.2.2. Example 2: Matrix-matrix Multiplication
13.2.3. Example 3: Poisson Equation
13.2.4. Example 4: Matrix-vector Multiplication (Client/Server)
14. Complete Reference Listing
15. Course Evaluation Form
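
The outline above lists two versions of an introductory "Hello World!" program (sections 2.6 and 3.13) built from the initialization, rank, size, and termination calls of sections 3.10 through 3.12. As a rough preview, a minimal sketch in C is given below; it assumes a standard MPI installation and uses only MPI_Init, MPI_Comm_rank, MPI_Comm_size, and MPI_Finalize, so it may differ in detail from the course's own listing.

    /* Minimal "Hello World!" sketch (compare sections 2.6 and 3.13 of the outline).
       Assumes a standard MPI installation. */
    #include <stdio.h>
    #include <mpi.h>

    int main(int argc, char *argv[])
    {
        int rank, size;

        MPI_Init(&argc, &argv);                 /* initialize MPI (section 3.10)          */
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);   /* this process's rank (section 3.11.1)   */
        MPI_Comm_size(MPI_COMM_WORLD, &size);   /* number of processes (section 3.11.2)   */

        printf("Hello World! from process %d of %d\n", rank, size);

        MPI_Finalize();                         /* terminate MPI (section 3.12)           */
        return 0;
    }

A typical build and run would use the MPI compiler wrapper and launcher, for example "mpicc hello.c -o hello" followed by "mpiexec -n 4 ./hello", which prints one line per process (the exact commands depend on the local installation; see section 2.11 of the outline).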