T3 NUO Part 2.10, MPI Parallelization

Distributed Memory

  • MPI - the most popular distributed-memory parallelization model (a load-and-compile sketch follows this list)

  • OpenMPI, available through the modules:

  • openmpi/gcc/2.1.0

  • openmpi/intel/2.1.0

mpicc fooMPI.c -o fooMPI.x
mpifort fooMPI.f90 -o fooMPI.x

  • Intel MPI

  • included in intel/PS2017-17.0.4 (compute and legacy)

mpiicc fooMPI.c -o fooMPI.x
mpiifort fooMPI.f90 -o fooMPI.x
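A minimal sketch of the full load-then-compile workflow implied above. The pairing of modules to wrapper commands is an assumption here; verify the exact module names on the cluster with "module avail".

## OpenMPI (GCC build assumed)
module load openmpi/gcc/2.1.0
mpicc fooMPI.c -o fooMPI.x       ## C source
mpifort fooMPI.f90 -o fooMPI.x   ## Fortran source

## Intel MPI (ships with the Intel Parallel Studio module)
module load intel/PS2017-17.0.4-compute
mpiicc fooMPI.c -o fooMPI.x
mpiifort fooMPI.f90 -o fooMPI.x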

Setting Up the Number of Cores

Three compute nodes with 28 cores per node: 3 x 28 = 84 total cores.

[ccp0051@t3-login2 scripts]$ cat test.job
#!/bin/bash
#SBATCH -n 84 ### Total number of MPI tasks (cores)
#SBATCH --ntasks-per-node=28 ### 28 tasks per node (3 nodes x 28 cores)

#SBATCH -p compute ### Partition (always compute)
#SBATCH --qos general ### QOS (debug, general, large)
#SBATCH -J hello ### Job name
#SBATCH --mail-user=charles.peterson@unt.edu
#SBATCH --mail-type=begin
#SBATCH --mail-type=end

module load intel/PS2017-17.0.4-compute

## Setting up OMP variables: one OpenMP thread per task for a pure MPI run
export OMP_NUM_THREADS=1

## Running code
mpirun /home/ccp0051/apps/fooMPI.x

[ccp0051@t3-login2 scripts]$ sbatch test.job
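Once the job is submitted, its state can be checked with squeue. A brief sketch using standard SLURM options (the username shown is the one from the prompt above):

[ccp0051@t3-login2 scripts]$ squeue -u ccp0051   ## list only this user's jobs
[ccp0051@t3-login2 scripts]$ squeue              ## list all jobs in the queue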