Publication details

  • Tracing Internal Communication in MPI and MPI-I/O (Julian Kunkel, Yuichi Tsujita, Olga Mordvinova, Thomas Ludwig), In International Conference on Parallel and Distributed Computing, Applications and Technologies, PDCAT, pp. 280–286, IEEE Computer Society (Washington, DC, USA), PDCAT-09, Hiroshima University, Higashi Hiroshima, Japan, ISBN: 978-0-7695-3914-0, 2009-12-29
    DOI: 10.1109/PDCAT.2009.9

Abstract

MPI implementations can realize MPI operations with any algorithm that fulfills the specified semantics. To provide optimal efficiency, the MPI implementation might choose the algorithm dynamically, depending on the parameters given to the function call. However, this selection is not transparent to the user. While this abstraction is appropriate for common users, achieving the best performance with fixed parameter sets requires knowledge of the internal processing. Also, for developers of collective operations it might be useful to understand timing issues inside the communication or I/O call. In this paper we extend the PIOviz environment to trace MPI-internal communication. This allows the user to see PVFS server behavior together with the behavior of the MPI application and of MPI itself. We present analysis results demonstrating these capabilities for MPICH2 on a Beowulf cluster.
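
The core observation above, namely that an MPI library may pick a different internal algorithm depending on the call parameters, can be illustrated with a small benchmark. The following sketch is not taken from the paper and does not use PIOviz; it simply times MPI_Bcast for a few arbitrarily chosen message sizes using standard MPI calls (MPI_Bcast, MPI_Barrier, MPI_Wtime). Non-linear jumps in the per-byte cost hint at an internal algorithm switch, but only a tracing environment such as PIOviz reveals the point-to-point messages that actually occur inside the collective.

/* Illustrative sketch only (not part of the paper or of PIOviz):
 * time MPI_Bcast for several message sizes. Implementations such as
 * MPICH2 may switch the internal broadcast algorithm based on message
 * size and process count, which is invisible at this level. */
#include <mpi.h>
#include <stdio.h>
#include <stdlib.h>

int main(int argc, char **argv)
{
    MPI_Init(&argc, &argv);

    int rank;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    /* arbitrary message sizes in bytes, chosen for illustration */
    const int sizes[] = { 1 << 10, 1 << 15, 1 << 20 };

    for (int i = 0; i < 3; i++) {
        char *buf = malloc(sizes[i]);
        MPI_Barrier(MPI_COMM_WORLD);      /* align start times across ranks */
        double t0 = MPI_Wtime();
        MPI_Bcast(buf, sizes[i], MPI_CHAR, 0, MPI_COMM_WORLD);
        double t1 = MPI_Wtime();
        if (rank == 0)
            printf("MPI_Bcast of %d bytes: %f s\n", sizes[i], t1 - t0);
        free(buf);
    }

    MPI_Finalize();
    return 0;
}

Built and launched with the usual MPI tooling (e.g. mpicc and mpiexec), such a benchmark only exposes the external effect of the algorithm choice; making the internal behavior visible is exactly the gap the paper's tracing approach addresses.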

BibTeX

@inproceedings{TICIMAMKTM09,
	author	 = {Julian Kunkel and Yuichi Tsujita and Olga Mordvinova and Thomas Ludwig},
	title	 = {{Tracing Internal Communication in MPI and MPI-I/O}},
	year	 = {2009},
	month	 = {12},
	booktitle	 = {{International Conference on Parallel and Distributed Computing, Applications and Technologies, PDCAT}},
	publisher	 = {IEEE Computer Society},
	address	 = {Washington, DC, USA},
	pages	 = {280--286},
	conference	 = {PDCAT-09},
	organization	 = {Hiroshima University},
	location	 = {Higashi Hiroshima, Japan},
	isbn	 = {978-0-7695-3914-0},
	doi	 = {10.1109/PDCAT.2009.9},
	abstract	 = {MPI implementations can realize MPI operations with any algorithm that fulfills the specified semantics. To provide optimal efficiency the MPI implementation might choose the algorithm dynamically, depending on the parameters given to the function call. However, this selection is not transparent to the user. While this abstraction is appropriate for common users, achieving best performance with fixed parameter sets requires knowledge of internal processing. Also, for developers of collective operations it might be useful to understand timing issues inside the communication or I/O call. In this paper we extended the PIOviz environment to trace MPI internal communication. Thus, this allows the user to see PVFS server behavior together with the behavior in the MPI application and inside MPI itself. We present some analysis results for these capabilities for MPICH2 on a Beowulf Cluster},
}
