PARALLEL COMPUTING - A CASE STUDY ON MPI APPLICATION PROGRAMMING INTERFACE

Nur Adilah Zulkiflee #1, Mohamed Faidz Mohamed Said #2
# Faculty of Computer & Mathematical Sciences, Universiti Teknologi MARA, 70300 Seremban, Negeri Sembilan, MALAYSIA
1 adilahzulkiflee@gmail.com
2 faidzms@ieee.org

Abstract—The Message Passing Interface (MPI) is widely recognized as the standard interface for message passing in parallel computing environments, and it is the de facto parallel programming application programming interface (API) for distributed-memory systems. Because MPI addresses processes by rank and determines the required transport method itself, the choice of underlying network architecture is largely hidden from the application. This paper focuses on the definition and the applications of MPI. The objective of this research is to study real-world MPI applications in the discipline of parallel computing. The results show that MPI offers several advantages and is well suited to parallel computing; it has also been applied to Python and to big data processing. On the other hand, MPI has some disadvantages: MPI programs are easy to get wrong and hard to debug. As a recommendation, it would be beneficial to carry out more specific studies on the performance sensitivity of MPI applications with respect to memory latency and related parameters.

Keywords: message passing interface, parallel computing
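To make the message-passing model concrete, the following is a minimal illustrative sketch, not part of the original case study, using mpi4py (the MPI for Python package mentioned in the abstract). It assumes mpi4py and an MPI runtime such as mpiexec are installed; the file name send_recv.py, the tag value, and the payload contents are illustrative only.

    # Minimal mpi4py sketch: point-to-point message passing between two ranks.
    # Launch with an MPI runtime, e.g.:  mpiexec -n 2 python send_recv.py
    from mpi4py import MPI

    comm = MPI.COMM_WORLD      # communicator containing all launched processes
    rank = comm.Get_rank()     # this process's rank: 0, 1, ...

    if rank == 0:
        payload = {"greeting": "hello", "value": 42}   # illustrative data
        comm.send(payload, dest=1, tag=11)             # blocking, pickle-based send
        print("rank 0 sent:", payload)
    elif rank == 1:
        payload = comm.recv(source=0, tag=11)          # blocking receive of the matching message
        print("rank 1 received:", payload)

    # mpi4py initializes MPI on import and finalizes it at interpreter exit,
    # so no explicit MPI_Init/MPI_Finalize calls are needed here.

The same rank/send/receive pattern maps directly onto the underlying C API (MPI_Comm_rank, MPI_Send, MPI_Recv), which is one reason the interface ports across distributed-memory systems without changes to application logic.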