Introduction to Message Passing

To program distributed-memory parallel computers, we apply message passing.
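The chapters below start from the idea that one single program is executed by all nodes. As a minimal sketch of that model in C (the file name hello.c and the mpicc/mpirun commands are assumptions about a standard MPI installation, not code taken from this documentation):

   /* hello.c : every process runs this same program and
      identifies itself by its rank in the communicator.
      Compile and run (assuming a standard MPI setup):
         mpicc hello.c -o hello
         mpirun -np 4 ./hello                            */
   #include <stdio.h>
   #include <mpi.h>

   int main ( int argc, char *argv[] )
   {
      int rank, size;
      MPI_Init(&argc, &argv);               /* enter the MPI universe */
      MPI_Comm_size(MPI_COMM_WORLD, &size); /* number of processes */
      MPI_Comm_rank(MPI_COMM_WORLD, &rank); /* rank of this process */
      printf("Hello from process %d of %d.\n", rank, size);
      MPI_Finalize();                       /* leave the MPI universe */
      return 0;
   }

Launching with mpirun -np 4 starts four processes; each prints its own rank, in no guaranteed order.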

  • Basics of MPI
    • One Single Program Executed by all Nodes
    • Initialization, Finalization, and the Universe
    • Broadcasting Data
    • Moving Data from Manager to Workers
    • MPI for Python
    • MPI wrappers for Julia
    • Bibliography
    • Exercises
  • Using MPI
    • Scatter and Gather
    • Send and Recv
    • Reducing the Communication Cost
    • Point-to-Point Communication with MPI for Python
    • Point-to-Point Communication with the MPI wrappers in Julia
    • Bibliography
    • Exercises
  • Pleasingly Parallel Computations
    • Ideal Parallel Computations
    • Monte Carlo Simulations
    • SPRNG: scalable pseudorandom number generator
    • Bibliography
    • Exercises
  • Load Balancing
    • The Mandelbrot Set
    • Granularity
    • Static Work Load Assignment
    • Static Work Load Assignment with MPI
    • An Implementation with mpi4py
    • Dynamic Work Load Balancing
    • Probing in Python and Julia
    • Scalability
    • Bibliography
    • Exercises
  • Hands-on Supercomputing
    • Working on a Fast Workstation
    • Using a Real Supercomputer
  • Data Partitioning
    • Functional and Domain Decomposition
    • Parallel Summation
    • An Application
    • Nonblocking Point-to-Point Communication
    • Exercises


© Copyright 2016-2024, Jan Verschelde.