PARALLEL PROGRAMMING USING MPI - A CASE STUDY ON HELLO WORLD

Amira Adila bt Abdul Manab #1, Mohamed Faidz Mohamed Said #2
# Faculty of Computer and Mathematical Sciences, Universiti Teknologi MARA, 70300 Seremban, Negeri Sembilan, MALAYSIA
1 amiraadila220195@gmail.com
2 faidzms@ieee.org

Abstract—All modern information processing systems are equipped with multicore processors, and the majority of them also have graphics cards capable of vector calculations. Every aspiring programmer should know the techniques of parallel and distributed programming. This paper presents recent trends in programming and applications using both central processing units and graphics processing units. Several working implementations of the MPI standard are presented, including the open-source LAM/MPI and MPICH implementations as well as Sun MPI, an example of a vendor-supplied MPI implementation. Different aspects and perspectives are investigated, such as supported MPI features, system architecture, network hardware, and operating system. Graph-oriented programming (GOP), a high-level abstraction for message-passing applications built on MPI, is also examined as part of this research, particularly as a response to the low-level approach taken by MPI. The paper concludes with an outlook on the future of the MPI standard and its implementations, and on how they are influenced by recent trends in cluster computing.

Keywords: parallel programming, MPI, Hello World
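To ground the case study, the following is a minimal sketch of the classic MPI "Hello World" program in C. It assumes an MPI implementation such as MPICH, LAM/MPI, or Open MPI is installed; the mpicc and mpirun commands shown in the comments are typical wrapper names and may differ between implementations.

/* Minimal MPI "Hello World" sketch (assumed build/run commands):
 *   compile: mpicc hello.c -o hello
 *   run:     mpirun -np 4 ./hello
 */
#include <stdio.h>
#include <mpi.h>

int main(int argc, char *argv[])
{
    int rank, size;

    MPI_Init(&argc, &argv);                 /* start the MPI runtime              */
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);   /* rank (id) of this process          */
    MPI_Comm_size(MPI_COMM_WORLD, &size);   /* total number of processes launched */

    printf("Hello World from process %d of %d\n", rank, size);

    MPI_Finalize();                         /* shut down the MPI runtime          */
    return 0;
}

Each process reports its rank and the size of MPI_COMM_WORLD; a program of this shape is the usual starting point for comparing how different MPI implementations launch and manage processes.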