Unix Support

Parallel Programming: Introduction to MPI

This is an introduction to using MPI for writing parallel programs to run on clusters and multi-CPU systems, largely for the purposes of "high-performance computing". It will cover the principles of MPI and teach the use of all of its basic facilities (i.e. the ones that are used in most HPC applications), so that attendees will be able to write serious programs with it and modify ones they obtain from other people.

All examples are given in Fortran 90, C and C++, and attendees can use whichever of these they prefer for the practicals.

The first three lectures cover most of the fundamentals of using MPI in real programs, together with simple collective communication (similar to SIMD: Single Instruction, Multiple Data):

01a: Introduction (also in the form of a Handout for the MPhil)

01b: Using MPI (also in the form of a Handout for the MPhil)

02: Datatypes and Collectives (also in the form of a Handout for the MPhil)

The next three cover slightly more advanced topics, including the basics of point-to-point communication, and cover the remainder of the main facilities that most programmers will need:

03: Point-to-Point Transfers (also in the form of a Handout for the MPhil)

04: Error Handling (also in the form of a Handout for the MPhil)

05: More on Collectives (also in the form of a Handout for the MPhil)

The next two cover asynchronous (non-blocking) communication and communication between subsets of processes; only some programmers will need these, but they are very important for some applications:

06: More on Point-to-Point (also in the form of a Handout for the MPhil)

07: Communicators etc. (also in the form of a Handout for the MPhil)

The next two are a summary of the most critical points from later lectures on the practical use of MPI, and a description of how problems can be split between processes:

08: Miscellaneous Guidelines (also in the form of a Handout for the MPhil)

09: Problem Decomposition (also in the form of a Handout for the MPhil)

The next three cover unfortunately complicated aspects, which are needed to avoid problems in large, portable, production codes; they are not part of the 'core' course:

10: Composite Types and Language Standards

11: Attributes and I/O

12: Debugging, Performance and Tuning

The last three cover aspects that will not affect people who use only the facilities recommended in this course, but may affect people working on MPI programs written by others, and are needed by people who want to go further with MPI:

13: One-sided Communication

14: Advanced Completion Issues

15: Other Features Not Covered

Auxiliary Material

Practical exercises to use the facilities taught

Programs and data used in the practicals

Specimen answers to the exercises in Fortran 90, C and C++

Interface proformas for use in the practicals

Code to provide a globally consistent POSIX timer

An example of how to use the profiling facility

An MPI timer/tester written for HPC benchmarking

The title of this document is: Parallel Programming: Introduction to MPI
URL: http://www-uxsup.csx.cam.ac.uk/courses/moved.MPI/index.html