OpenMPI

OpenMPI is an open-source implementation of the Message Passing Interface (MPI) standard, designed for high-performance parallel computing. It supports distributed-memory systems, clusters, and supercomputers, and enables efficient communication between processes in parallel applications.


Key Features

  • Efficiently handles small to large-scale clusters
  • Optimized for low-latency, high-throughput communication
  • Supports multiple programming languages: C, C++, Fortran
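As a minimal illustration of the C interface, the classic hello-world below prints each process's rank and the total process count (a sketch; the file name is arbitrary, and the program must be launched through an MPI launcher to run with more than one process):

```c
// hello_mpi.c - each process reports its rank and the world size
#include <stdio.h>
#include <mpi.h>

int main(int argc, char *argv[])
{
    int rank, size;
    MPI_Init(&argc, &argv);                 // start the MPI runtime
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);   // this process's ID
    MPI_Comm_size(MPI_COMM_WORLD, &size);   // total number of processes
    printf("Hello from rank %d of %d\n", rank, size);
    MPI_Finalize();                         // shut down the MPI runtime
    return 0;
}
```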

Use Cases

  • Scientific Computing
      • Simulations in physics, chemistry, and biology
      • Large-scale numerical computations
  • Big Data and Machine Learning
      • Distributed training of models
      • Parallel processing of large datasets
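Parallel processing of a large dataset typically follows a scatter-compute-reduce pattern: the root distributes chunks of the data, each process works on its chunk, and the partial results are combined. A hedged sketch (the chunk size and all-ones data are illustrative):

```c
// scatter_sum.c - rank 0 scatters an array, each rank sums its chunk,
// and MPI_Reduce combines the partial sums back on rank 0
#include <stdio.h>
#include <stdlib.h>
#include <mpi.h>

int main(int argc, char *argv[])
{
    int rank, size;
    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    const int chunk = 4;                       // elements per process (illustrative)
    int n = chunk * size;
    double *data = NULL;
    if (rank == 0) {                           // only the root holds the full dataset
        data = malloc(n * sizeof(double));
        for (int i = 0; i < n; i++) data[i] = 1.0;
    }

    double local[4];                           // receive buffer; must match 'chunk'
    MPI_Scatter(data, chunk, MPI_DOUBLE,
                local, chunk, MPI_DOUBLE, 0, MPI_COMM_WORLD);

    double local_sum = 0.0, total = 0.0;
    for (int i = 0; i < chunk; i++) local_sum += local[i];
    MPI_Reduce(&local_sum, &total, 1, MPI_DOUBLE, MPI_SUM, 0, MPI_COMM_WORLD);

    if (rank == 0) {
        printf("total = %f\n", total);         // equals n for the all-ones input
        free(data);
    }
    MPI_Finalize();
    return 0;
}
```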

Example source code path → /home/$USER/job_template/C/mpi_pi.c

// This program calculates PI using MPI.
// The algorithm is based on an integral representation of PI:
// if f(x) = 4/(1+x^2), then PI is the integral of f(x) from 0 to 1.
#include <stdio.h>
#include <mpi.h>

#define N  1E9    /* number of intervals */
#define d  1E-9   /* interval width, 1/N */
#define d2 1E-18  /* d squared */

int main(int argc, char *argv[])
{
    int rank, size, error, i;
    double pi = 0.0, result = 0.0, sum = 0.0, begin = 0.0, end = 0.0, x2;

    error = MPI_Init(&argc, &argv);
    // Get the process rank (ID)
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    // Get the number of processes
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    // Synchronize all processes and record the start time
    MPI_Barrier(MPI_COMM_WORLD);
    begin = MPI_Wtime();

    // Each process computes its share of the sum
    for (i = rank; i < N; i += size) {
        x2 = d2 * i * i;
        result += 1.0 / (1.0 + x2);
    }

    // Sum the partial results on rank 0
    MPI_Reduce(&result, &sum, 1, MPI_DOUBLE, MPI_SUM, 0, MPI_COMM_WORLD);

    // Synchronize all processes and record the end time
    MPI_Barrier(MPI_COMM_WORLD);
    end = MPI_Wtime();

    // Calculate and print PI on rank 0
    if (rank == 0) {
        pi = 4 * d * sum;
        printf("np=%2d; Time=%fs; PI=%lf\n", size, end - begin, pi);
    }

    error = MPI_Finalize();
    return 0;
}
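The example can be built and launched with the standard Open MPI tools (a sketch; site-specific module or PATH setup may be required first, and the process count should match your allocation):

```shell
# compile with the Open MPI C compiler wrapper
mpicc -O2 -o mpi_pi /home/$USER/job_template/C/mpi_pi.c

# launch with 4 processes
mpirun -np 4 ./mpi_pi
```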

For more information about OpenMPI, please refer to the OpenMPI Official Site.