GROMACS user guide

This page describes how to run the GROMACS molecular dynamics software on UPPMAX systems. See the GROMACS web page for more information.

Loading the GROMACS module

Load the gromacs module with

module load gromacs/2021.1.th

SBATCH script

Adapted from HPC2N:

#!/bin/bash
#SBATCH -A SNIC_project
#SBATCH -t 00:15:00
#SBATCH -p node -n 10
# Use 2 threads per task
#SBATCH -c 2

module load gromacs/2021.1.th

# Automatic selection of single- or multi-node GROMACS
if [ "$SLURM_JOB_NUM_NODES" -gt 1 ]; then
    GMX="gmx_mpi"
    MPIRUN="mpirun"
    ntmpi=""
else
    GMX="gmx"
    MPIRUN=""
    ntmpi="-ntmpi $SLURM_NTASKS"
fi

# Automatic selection of the ntomp argument based on the "-c" argument to sbatch
if [ -n "$SLURM_CPUS_PER_TASK" ]; then
    ntomp="$SLURM_CPUS_PER_TASK"
else
    ntomp="1"
fi

# Make sure OMP_NUM_THREADS equals the value used for ntomp,
# to avoid complaints from GROMACS
export OMP_NUM_THREADS=$ntomp

echo $MPIRUN $GMX mdrun $ntmpi -ntomp $ntomp -s MEM.tpr -nsteps 10000 -resethway
$MPIRUN $GMX mdrun $ntmpi -ntomp $ntomp -s MEM.tpr -nsteps 10000 -resethway
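The single- vs multi-node selection in the script can be checked outside Slurm by setting the relevant environment variables by hand. A minimal sketch, using hypothetical values for SLURM_JOB_NUM_NODES and SLURM_NTASKS:

```shell
#!/bin/bash
# Hypothetical values, standing in for what Slurm would set in a real job
SLURM_JOB_NUM_NODES=1
SLURM_NTASKS=10

# Same branch logic as in the job script above
if [ "$SLURM_JOB_NUM_NODES" -gt 1 ]; then
    GMX="gmx_mpi"
    MPIRUN="mpirun"
    ntmpi=""
else
    GMX="gmx"
    MPIRUN=""
    ntmpi="-ntmpi $SLURM_NTASKS"
fi

# With one node, this prints the thread-MPI form: gmx mdrun -ntmpi 10
echo $MPIRUN $GMX mdrun $ntmpi
```

Setting SLURM_JOB_NUM_NODES to a value greater than 1 instead selects the MPI binary, gmx_mpi, launched through mpirun.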

Running older versions of GROMACS

Versions 4.5.1 to 5.0.4: 

The GROMACS tools have been compiled serially. The mdrun program has also been compiled in parallel using MPI; the parallel binary is named mdrun_mpi.

Run the parallelized program using:

mpirun -np XXX mdrun_mpi

... where XXX is the number of cores to run the program on.
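Put together, a job script for these older versions might look like the sketch below. The module name gromacs/5.0.4, the core count, and the input file topol.tpr are placeholders, not values taken from this guide:

```shell
#!/bin/bash
#SBATCH -A SNIC_project
#SBATCH -t 00:15:00
#SBATCH -p node -n 8

# Assumed module name; check "module avail gromacs" for the exact versions installed
module load gromacs/5.0.4

# Run the MPI-enabled mdrun on all 8 cores; topol.tpr is a placeholder input file
mpirun -np 8 mdrun_mpi -s topol.tpr
```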

Version 5.1.1

The binary is gmx_mpi, and the mdrun command, for example, is issued like this:

mpirun -np XXX gmx_mpi mdrun
Last modified: 2021-04-19