
Transparent overlapping of blocking communication in MPI applications

Abstract: With the growing number of cores and fast networks such as InfiniBand, one of the keys to performance in MPI applications is the ability to overlap CPU-bound computation with network communication. While this overlap can be achieved manually, doing so is often complex and error-prone. We propose an approach that allows blocking MPI communications to behave as nonblocking communications until their data are actually needed, increasing the potential for communication/computation overlap. Our approach, COMMMAMA, uses a separate communication thread, to which communications are offloaded, and a memory protection mechanism to track accesses to communication buffers. This guarantees both progress for these communications and the largest possible window during which communication and computation can proceed in parallel. The approach also significantly eases the design of MPI applications, since it reduces the need for programmers to forecast when nonblocking communications should be waited on.

https://hal.archives-ouvertes.fr/hal-03007204
Contributor: François Trahay
Submitted on: Monday, November 16, 2020 - 11:44:27 AM
Last modification on: Wednesday, December 2, 2020 - 5:27:01 PM
Long-term archiving on: Wednesday, February 17, 2021 - 6:52:02 PM

File

hpcc_final.pdf
Files produced by the author(s)

Identifiers

  • HAL Id: hal-03007204, version 1

Citation

Alexis Lescouet, Élisabeth Brunet, François Trahay, Gaël Thomas. Transparent overlapping of blocking communication in MPI applications. HPCC2020: 21st IEEE International Conference on High-Performance Computing and Communications, Dec 2020, Yanuca Island (online), Fiji. pp.1-6. ⟨hal-03007204⟩

Metrics

Record views: 38
File downloads: 93