mcs572 0.7.8 documentation
Introduction to Message Passing

To program distributed-memory parallel computers, we use message passing: processes cooperate by explicitly sending and receiving data.
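Before diving into MPI itself, the core idea can be sketched with nothing but the Python standard library. Real MPI code (covered in the chapters below) requires an MPI installation and a launcher such as mpiexec, so the following is only a minimal, self-contained illustration of the message-passing pattern: a manager process sends work to a worker process over a channel, and the worker sends a result back. The names `manager_end` and `worker_end` are hypothetical, chosen for this sketch.

```python
# A standalone sketch of the message-passing idea using Python's
# standard library, since running real MPI code needs mpiexec.
# A manager process sends data to a worker through a pipe; the
# worker computes a result and sends it back.
from multiprocessing import Process, Pipe

def worker(conn):
    # receive a message from the manager, process it, reply
    data = conn.recv()
    conn.send(sum(data))
    conn.close()

if __name__ == "__main__":
    manager_end, worker_end = Pipe()
    p = Process(target=worker, args=(worker_end,))
    p.start()
    manager_end.send([1, 2, 3, 4])   # manager -> worker
    print(manager_end.recv())        # worker -> manager
    p.join()
```

In MPI the same pattern appears as `MPI_Send` and `MPI_Recv` between ranks of a communicator; the pipe here plays the role that the communicator and message tags play in MPI.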

  • Basics of MPI
    • One Single Program Executed by all Nodes
    • Initialization, Finalization, and the Universe
    • Broadcasting Data
    • Moving Data from Manager to Workers
    • MPI for Python
    • Bibliography
    • Exercises
  • Using MPI
    • Scatter and Gather
    • Send and Recv
    • Reducing the Communication Cost
    • Point-to-Point Communication with MPI for Python
    • Bibliography
    • Exercises
  • Pleasingly Parallel Computations
    • Ideal Parallel Computations
    • Monte Carlo Simulations
    • SPRNG: Scalable Pseudorandom Number Generator
    • Bibliography
    • Exercises
  • Load Balancing
    • The Mandelbrot Set
    • Static Work Load Assignment
    • Static Work Load Assignment with MPI
    • Dynamic Work Load Balancing
    • Bibliography
    • Exercises
  • Data Partitioning
    • Functional and Domain Decomposition
    • Parallel Summation
    • An Application
    • Nonblocking Point-to-Point Communication
    • Exercises


© Copyright 2016, Jan Verschelde. Created using Sphinx 1.4.8.