Lecture 17: Distributed Memory


(Originally recorded 2019-05-28)

In this lecture we introduce a new paradigm in parallelism: distributed memory. Distributed memory computing is the easiest way to scale up a system: one just adds more computers. But it is also the most challenging type of system to program, and the most challenging on which to achieve actual scalability. The computers in a distributed system are separate machines, and the "parallel program" running on them is an illusion: each computer runs its own separate program. An aggregate computation is composed from these individual programs through explicit communication and coordination operations (which, again, are local operations within each program).

Abstractly, this mode of computation is known as "communicating sequential processes." The realization of this model in modern HPC is known as "Single Program, Multiple Data" (SPMD) and is carried out in practice with the Message Passing Interface (MPI).
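
To make the SPMD idea concrete, here is a minimal sketch in C using MPI (an illustrative example, not code from the lecture). Every process launched by the runtime executes this same source; the rank returned by MPI gives each process a distinct identity, and all data movement happens through explicit send and receive calls rather than shared memory.

```c
// Minimal SPMD sketch with MPI. Assuming an MPI installation, compile with
// `mpicc token.c` and launch with `mpirun -n 4 ./a.out`. Every process runs
// this same program; behavior diverges based on the process's rank.
#include <mpi.h>
#include <stdio.h>

int main(int argc, char* argv[]) {
  MPI_Init(&argc, &argv);

  int rank, size;
  MPI_Comm_rank(MPI_COMM_WORLD, &rank);  // this process's identity
  MPI_Comm_size(MPI_COMM_WORLD, &size);  // total number of processes

  if (size < 2) {
    if (rank == 0) fprintf(stderr, "run with at least 2 processes\n");
    MPI_Finalize();
    return 1;
  }

  if (rank == 0) {
    // Process 0 sends a message to process 1: an explicit communication
    // operation, executed locally. There is no shared memory.
    int token = 42;
    MPI_Send(&token, 1, MPI_INT, 1, 0, MPI_COMM_WORLD);
    printf("rank 0 of %d sent token %d\n", size, token);
  } else if (rank == 1) {
    int token;
    MPI_Recv(&token, 1, MPI_INT, 0, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
    printf("rank 1 of %d received token %d\n", size, token);
  }

  MPI_Finalize();
  return 0;
}
```

Note that each process only ever touches its own local variables; the "parallel program" emerges from the pairing of the send in rank 0 with the receive in rank 1.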