Department of Computer Science

University of California, Santa Barbara

Compile/Run-time Support for Threaded MPI Execution on Multiprogrammed Shared Memory Machines

by: Hong Tang, Kai Shen, and Tao Yang

Abstract:

MPI is a message-passing standard widely used for developing high-performance parallel applications. Because of the restriction in the MPI computation model, conventional implementations on shared memory machines map each MPI node to an OS process, which suffers serious performance degradation in the presence of multiprogramming, especially when a space/time sharing policy is employed in OS job scheduling. In this paper, we study compile-time and run-time support for MPI by using threads and demonstrate our optimization techniques for executing a large class of MPI programs written in C. The compile-time transformation adopts thread-specific data structures to eliminate the use of global and static variables in C code. The run-time support includes an efficient point-to-point communication protocol based on a novel lock-free queue management scheme. Our experiments on SGI Origin 2000 show that the MPI execution optimized by using the proposed techniques is competitive with SGI's native MPI implementation in dedicated environments, and has great performance advantages with up to 23-fold improvement in multiprogrammed environments.
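To illustrate the kind of compile-time transformation the abstract describes, the sketch below (not taken from the paper; the variable and helper names are hypothetical) shows how a global counter in an MPI C program might be rewritten to use POSIX thread-specific data, so that each MPI node mapped to a thread keeps its own private copy:

    /* A minimal sketch, assuming a POSIX threads target; this is an
     * illustration, not the paper's actual transformation.
     *
     * Original code:
     *     int iter_count = 0;            -- shared by all threads, incorrect
     *     void step(void) { iter_count++; }
     */
    #include <pthread.h>
    #include <stdlib.h>

    static pthread_key_t iter_count_key;           /* one key shared by all threads */
    static pthread_once_t iter_count_once = PTHREAD_ONCE_INIT;

    static void iter_count_create_key(void) {
        pthread_key_create(&iter_count_key, free); /* free per-thread copy at exit */
    }

    /* Accessor that replaces every use of the former global variable. */
    static int *iter_count_ptr(void) {
        pthread_once(&iter_count_once, iter_count_create_key);
        int *p = pthread_getspecific(iter_count_key);
        if (p == NULL) {                           /* first use in this thread */
            p = calloc(1, sizeof *p);
            pthread_setspecific(iter_count_key, p);
        }
        return p;
    }

    void step(void) {
        (*iter_count_ptr())++;                     /* was: iter_count++ */
    }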
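The run-time support builds its point-to-point communication protocol on a lock-free queue. The following generic single-producer/single-consumer ring buffer, written with C11 atomics, is given only to illustrate lock-free queuing between a sender and a receiver thread; the paper's queue management scheme differs in its details:

    /* An illustrative SPSC lock-free queue sketch using C11 atomics. */
    #include <stdatomic.h>
    #include <stdbool.h>
    #include <stddef.h>

    #define QCAP 256                      /* capacity, must be a power of two */

    typedef struct {
        void *slot[QCAP];                 /* pointers to message buffers */
        _Atomic size_t head;              /* next slot to dequeue (consumer) */
        _Atomic size_t tail;              /* next slot to enqueue (producer) */
    } spsc_queue;

    /* Producer side: returns false if the queue is full. */
    static bool spsc_enqueue(spsc_queue *q, void *msg) {
        size_t t = atomic_load_explicit(&q->tail, memory_order_relaxed);
        size_t h = atomic_load_explicit(&q->head, memory_order_acquire);
        if (t - h == QCAP)
            return false;                 /* full: caller may retry or block */
        q->slot[t & (QCAP - 1)] = msg;
        /* Release ordering publishes the message before the new tail. */
        atomic_store_explicit(&q->tail, t + 1, memory_order_release);
        return true;
    }

    /* Consumer side: returns NULL if the queue is empty. */
    static void *spsc_dequeue(spsc_queue *q) {
        size_t h = atomic_load_explicit(&q->head, memory_order_relaxed);
        size_t t = atomic_load_explicit(&q->tail, memory_order_acquire);
        if (h == t)
            return NULL;                  /* empty */
        void *msg = q->slot[h & (QCAP - 1)];
        atomic_store_explicit(&q->head, h + 1, memory_order_release);
        return msg;
    }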

Keywords:

Threaded execution, lock-free communication, message-passing, program transformation

Date:

November 1998

Document: 1998-30
