Publication details

Abstract

Many scientific applications use OpenMP as a relatively easy and fast approach to utilising symmetric multiprocessor systems at their full capacity. However, scalability on shared-memory systems is limited, so distributed parallel computing becomes inevitable if the full potential of horizontal scaling is to be achieved. Additional software layers like MPI must then be used, which require further knowledge on the part of scientific developers. This paper presents CATO, a tool prototype based on LLVM and Clang that transforms existing OpenMP code to MPI; this enables distributed code execution while keeping OpenMP's relatively low barrier of entry. The main focus lies on increasing the maximum problem size that a scientific application can work on: converting an intra-node problem into an inter-node problem makes it possible to overcome the memory limitation of a single node. Our tool does not aim to improve the absolute runtime, even though it might do so, e.g. by introducing concurrency during the I/O phase; instead, we focus on increasing the maximum problem size. Our benchmark of a stencil code shows promising results: the transformation preserves the speedup trend of the code to some extent. Another example demonstrates the capability to increase the maximum problem size when using additional compute nodes.
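
For readers unfamiliar with the transformation target, the snippet below is a minimal, hypothetical sketch (not taken from the paper) of the kind of OpenMP stencil kernel that CATO is designed to rewrite for MPI; the function name, grid layout, and Jacobi-style update are illustrative assumptions.

/* Hypothetical example of an OpenMP kernel of the kind CATO targets
 * (illustrative only, not code from the paper): a shared grid updated
 * inside an "#pragma omp parallel for" loop. */
void jacobi_step(const double *in, double *out, int nx, int ny)
{
    #pragma omp parallel for
    for (int i = 1; i < nx - 1; i++) {
        for (int j = 1; j < ny - 1; j++) {
            /* Five-point stencil: average of the four direct neighbours. */
            out[i * ny + j] = 0.25 * (in[(i - 1) * ny + j] + in[(i + 1) * ny + j] +
                                      in[i * ny + (j - 1)] + in[i * ny + (j + 1)]);
        }
    }
}

Roughly speaking, CATO rewrites such a kernel to use MPI, distributing the shared arrays across the ranks of an MPI job and inserting the necessary communication, so that the grid no longer has to fit into the memory of a single node.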

BibTeX

@inproceedings{CASTOOKSJB20,
	author	 = {Jannek Squar and Tim Jammer and Michael Blesel and Michael Kuhn and Thomas Ludwig},
	title	 = {{Compiler Assisted Source Transformation of OpenMP Kernels}},
	year	 = {2020},
	month	 = {07},
	booktitle	 = {{2020 19th International Symposium on Parallel and Distributed Computing (ISPDC)}},
	publisher	 = {IEEE},
	pages	 = {44--51},
	conference	 = {ISPDC 2020},
	location	 = {Warsaw, Poland},
	isbn	 = {978-1-7281-8947-5},
	doi	 = {10.1109/ISPDC51135.2020.00016},
	abstract	 = {Many scientific applications use OpenMP as a relatively easy and fast approach to utilise symmetric multiprocessor systems at their full capacity. However, scalability on shared memory systems is limited and thus distributed parallel computing is inevitable if the full potential through horizontal scaling shall be achieved. Additional software layers like MPI must be used, which require further knowledge on the scientific developers' side. This paper presents CATO, a tool prototype using LLVM and Clang, to transform existing OpenMP code to MPI; this enables distributed code execution while keeping OpenMP's relatively low barrier of entry. The main focus lies on increasing the maximum problem size, which a scientific application can work on; converting an intra-node problem into an inter-node problem makes it possible to overcome the limitation of memory of a single node. Our tool does not focus on improving the absolute runtime, even though it might improve it by e.g. introducing concurrency during the I/O phase; but we rather focus on increasing the maximal problem size and our benchmark of a stencil code shows promising results: The transformation preserves the speedup trend of the code to some extent. Another example demonstrates the capability to increase the maximum problem size while using additional compute nodes.},
	url	 = {https://ieeexplore.ieee.org/document/9201895},
}
