Lecture 19: The Message Passing Interface

Lecture 19 slides · Lecture 19 Panopto · Podcast

(Originally recorded 2019-06-04)

We continue to develop the message passing paradigm for distributed-memory parallel computing, looking in more depth at the principal abstractions in MPI (the communicator, the message envelope, and the message contents) and at two variants of “six-function MPI”. Laplace’s equation is part of the HPC canon, so we do a deep dive on approaches for solving Laplace’s equation (iteratively) with MPI – which leads to the compute-communicate (a.k.a. bulk synchronous parallel, or BSP) pattern.