
OpenMPI (General)

To initialise your SSH key, run the command below in an SSH terminal (CLI). For a tutorial on accessing the CLI, please refer to SSH Shell Access to EdUHK HPC Platform and Cluster (Web-based Shell Access)

# Run init-ssh-key
$ init-ssh-key



An Access denied… message in the output is normal; the command only needs to add each compute node to the known-hosts list


For the example source code, please refer to MPI Program Source Code (Example)


Pre-configured template script path → /home/$USER/job_template/slurm_job/mpi_pi.sh

#!/bin/bash
#SBATCH --job-name=mpi_pi ## Job Name
#SBATCH --partition=shared_cpu ## Partition for Running the Job
#SBATCH --nodes=2 ## Number of Compute Nodes
#SBATCH --ntasks=2 ## Number of Tasks
#SBATCH --cpus-per-task=2 ## Number of CPUs per Task
#SBATCH --time=60:00 ## Job Time Limit (i.e. 60 Minutes)
#SBATCH --mem=10GB ## Total Memory for the Job
#SBATCH --output=./%x%j.out ## Output File Path (%x = job name, %j = job ID)
#SBATCH --error=./%x%j.err ## Error Log Path (%x = job name, %j = job ID)
## Initiate Environment Module
source /usr/share/modules/init/profile.sh
## Reset the Environment Module components
module purge
## Load Module
module load openmpi/5.0
## Generate a hostfile with one line per unique node
srun hostname -s | sort | uniq > hostfile.${SLURM_JOB_ID}
## Compile and run the user program
mpicc -o ./pi /home/${USER}/job_template/C/mpi_pi.c
mpirun -np 4 --hostfile hostfile.${SLURM_JOB_ID} ./pi
## Clean up generated files
rm hostfile.${SLURM_JOB_ID}
rm pi
## Clear Environment Module components
module purge
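
The hostfile step in the script collects one short hostname per Slurm task with srun, then pipes the list through sort and uniq so each node appears exactly once. The same pipeline can be tried outside a job with fixed input; the node names cn01 and cn02 below are hypothetical stand-ins for real compute nodes:

```shell
# Simulate the lines `srun hostname -s` would print for 4 tasks
# spread across two hypothetical nodes, cn01 and cn02.
# sort groups duplicate names together; uniq then drops the repeats.
printf 'cn01\ncn02\ncn01\ncn02\n' | sort | uniq
# prints cn01 and cn02, one per line
```

mpirun reads this deduplicated file via --hostfile to learn which nodes it may launch ranks on.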

For guides on submitting HPC jobs, please refer to: HPC Job Submission (For CLI) and HPC Job Submission (For Web Portal)
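
After a job is submitted, the %x%j pattern in the script's --output and --error directives expands to the job name followed by the numeric job ID, which tells you which files to inspect for results. A small sketch of that expansion, assuming a hypothetical job ID of 12345:

```shell
# Reproduce Slurm's %x (job name) and %j (job ID) filename expansion
job_name=mpi_pi   # from #SBATCH --job-name
job_id=12345      # hypothetical ID assigned by Slurm at submission
echo "${job_name}${job_id}.out"   # prints mpi_pi12345.out
echo "${job_name}${job_id}.err"   # prints mpi_pi12345.err
```

Both files are written to the directory from which the job was submitted, since the paths in the template begin with ./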