Publication details

  • Characterization and translation of OpenMP use cases to MPI using LLVM (Tim Jammer), Master's Thesis, School: Universität Hamburg, 2018-12-04

Abstract

OpenMP makes it fairly easy to program parallel applications, but it is limited to shared memory systems. This thesis therefore explores the possibility of translating OpenMP to MPI using the LLVM compiler infrastructure. Translating OpenMP to MPI would allow parallel OpenMP applications to scale further, since distributed memory systems can be used as well. The thesis examines the benefits of this translation for several classes of parallel applications, which are characterized by similarity in computation and data movement. The improved scalability of MPI is exploited best when communication is regular, as in a stencil code. For other cases, such as a branch-and-bound algorithm, the performance of the translated program looks promising, but further tuning is required to fully exploit the additional CPU cores offered by scaling up to a distributed memory system. However, the developed translation does not work well when all-to-all communication is required, as in the butterfly communication scheme of a fast Fourier transform.
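
To make the regular-communication case concrete, the sketch below shows the kind of transformation the abstract describes: a shared-memory OpenMP stencil loop and a hand-written MPI counterpart that block-distributes the array and performs a one-cell halo exchange before the local update. This is not code from the thesis and not the output of the LLVM-based translation; the problem size N, the names (local_n, left, right) and the Jacobi-style update are illustrative assumptions.

#include <mpi.h>
#include <stdio.h>
#include <stdlib.h>

/*
 * Original shared-memory loop (the kind of OpenMP use case the thesis
 * characterizes), shown here only as a comment for reference:
 *
 *     #pragma omp parallel for
 *     for (int i = 1; i < N - 1; i++)
 *         b[i] = 0.5 * (a[i - 1] + a[i + 1]);
 *
 * Below is a hand-written sketch of what a translation to MPI could look
 * like: the iteration space is block-distributed and each rank exchanges
 * one halo cell with its neighbours before computing its local part.
 */

#define N 1024                                   /* global problem size (illustrative) */

int main(int argc, char **argv)
{
    MPI_Init(&argc, &argv);

    int rank, size;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    int local_n = N / size;                      /* assumes N is divisible by size */
    double *a = malloc((local_n + 2) * sizeof(double));   /* +2 halo cells */
    double *b = malloc((local_n + 2) * sizeof(double));
    for (int i = 0; i < local_n + 2; i++)
        a[i] = (double)(rank * local_n + i);     /* some initial data */

    int left  = (rank > 0)        ? rank - 1 : MPI_PROC_NULL;
    int right = (rank < size - 1) ? rank + 1 : MPI_PROC_NULL;

    /* Halo exchange: send own boundary cells, receive the neighbours' ones. */
    MPI_Sendrecv(&a[1],           1, MPI_DOUBLE, left,  0,
                 &a[local_n + 1], 1, MPI_DOUBLE, right, 0,
                 MPI_COMM_WORLD, MPI_STATUS_IGNORE);
    MPI_Sendrecv(&a[local_n],     1, MPI_DOUBLE, right, 1,
                 &a[0],           1, MPI_DOUBLE, left,  1,
                 MPI_COMM_WORLD, MPI_STATUS_IGNORE);

    /* Local part of the former OpenMP loop; the global boundary cells are
       skipped on the first and last rank, as in the original loop bounds. */
    int lo = (rank == 0)        ? 2           : 1;
    int hi = (rank == size - 1) ? local_n - 1 : local_n;
    double sum = 0.0;
    for (int i = lo; i <= hi; i++) {
        b[i] = 0.5 * (a[i - 1] + a[i + 1]);
        sum += b[i];
    }
    printf("rank %d: local sum = %f\n", rank, sum);

    free(a);
    free(b);
    MPI_Finalize();
    return 0;
}

Compiling with mpicc and running under mpirun distributes the loop across processes, which is the scaling benefit the abstract attributes to regular, stencil-like communication patterns.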

BibTeX

@mastersthesis{CATOOUCTMU18,
	author	 = {Tim Jammer},
	title	 = {{Characterization and translation of OpenMP use cases to MPI using LLVM}},
	advisors	 = {Jannek Squar and Michael Kuhn},
	year	 = {2018},
	month	 = {12},
	school	 = {Universität Hamburg},
	howpublished	 = {{Online \url{https://wr.informatik.uni-hamburg.de/_media/research:theses:tim_jammer_characterization_and_translation_of_openmp_use_cases_to_mpi_using_llvm.pdf}}},
	type	 = {Master's Thesis},
	abstract	 = {OpenMP makes it fairly easy to program parallel applications, but it is limited to shared memory systems. This thesis therefore explores the possibility of translating OpenMP to MPI using the LLVM compiler infrastructure. Translating OpenMP to MPI would allow parallel OpenMP applications to scale further, since distributed memory systems can be used as well. The thesis examines the benefits of this translation for several classes of parallel applications, which are characterized by similarity in computation and data movement. The improved scalability of MPI is exploited best when communication is regular, as in a stencil code. For other cases, such as a branch-and-bound algorithm, the performance of the translated program looks promising, but further tuning is required to fully exploit the additional CPU cores offered by scaling up to a distributed memory system. However, the developed translation does not work well when all-to-all communication is required, as in the butterfly communication scheme of a fast Fourier transform.},
}
