TY - JOUR
T1 - Comparison of message passing interface and hybrid programming models to solve pressure equation in distributed memory system
AU - Jeon, Byoung Jin
AU - Choi, Hyoung Gwon
N1 - Publisher Copyright:
© 2015 The Korean Society of Mechanical Engineers.
PY - 2015
Y1 - 2015
N2 - The message passing interface (MPI) and hybrid (MPI/OpenMP) programming models for the parallel computation of a pressure equation were compared on a distributed memory system. Both models were based on domain decomposition, and two numbers of sub-domains were selected by considering the efficiency of the hybrid model. The parallel performance for various problem sizes was measured using up to 96 threads. It was found that, in addition to the cache-memory size, the overhead of MPI communication and of the OpenMP directives affected the parallel performance. For small problems, the parallel performance was low because the share of the MPI communication/OpenMP directive overhead grew as the number of threads increased, and the MPI model outperformed the hybrid model owing to its smaller communication overhead. For large problems, the parallel performance was high because, in addition to the cache effect, the communication overhead was relatively small compared with that for small problems, and the hybrid model outperformed the MPI model because the MPI communication overhead was more dominant than the overhead of the OpenMP directives in the hybrid model.
AB - The message passing interface (MPI) and hybrid (MPI/OpenMP) programming models for the parallel computation of a pressure equation were compared on a distributed memory system. Both models were based on domain decomposition, and two numbers of sub-domains were selected by considering the efficiency of the hybrid model. The parallel performance for various problem sizes was measured using up to 96 threads. It was found that, in addition to the cache-memory size, the overhead of MPI communication and of the OpenMP directives affected the parallel performance. For small problems, the parallel performance was low because the share of the MPI communication/OpenMP directive overhead grew as the number of threads increased, and the MPI model outperformed the hybrid model owing to its smaller communication overhead. For large problems, the parallel performance was high because, in addition to the cache effect, the communication overhead was relatively small compared with that for small problems, and the hybrid model outperformed the MPI model because the MPI communication overhead was more dominant than the overhead of the OpenMP directives in the hybrid model.
KW - Bi-Conjugate gradient
KW - Distributed memory system
KW - Hybrid parallel model
KW - Message passing interface (MPI)
KW - OpenMP directives
UR - https://www.scopus.com/pages/publications/84938272179
U2 - 10.3795/KSME-B.2015.39.2.191
DO - 10.3795/KSME-B.2015.39.2.191
M3 - Article
AN - SCOPUS:84938272179
SN - 1226-4881
VL - 39
SP - 191
EP - 197
JO - Transactions of the Korean Society of Mechanical Engineers, B
JF - Transactions of the Korean Society of Mechanical Engineers, B
IS - 2
ER -