Biowulf at the NIH
Gromacs on Biowulf

GROMACS (www.gromacs.org) is a versatile package to perform molecular dynamics, i.e. simulate the Newtonian equations of motion for systems with hundreds to millions of particles. It is primarily designed for biochemical molecules like proteins and lipids that have a lot of complicated bonded interactions, but since GROMACS is extremely fast at calculating the nonbonded interactions (that usually dominate simulations) many groups are also using it for research on non-biological systems, e.g. polymers.

GROMACS manual, downloadable in several formats.

Versions

The following versions of Gromacs are available on Biowulf.
Gromacs version    To be used on       Add this to your path                 Parallel executable
---------------    ----------------    ----------------------------------    -------------------
4.0.3              gige or ib nodes    /usr/local/gromacs/bin                mdrun_mpi
                                       /usr/local/openmpi/bin
4.0.3              ipath nodes         /usr/local/gromacs/bin                mdrun_mpi
                                       /usr/local/openmpi-ipath/bin
3.3.3              all gige nodes      /usr/local/gromacs-3.3.3/bin          mdrun_mpi
                                       /usr/local/mpich/bin
3.3.3              ipath nodes         /usr/local/gromacs-3.3.3/bin-ipath    mdrun_mpi

Submitting a GROMACS 4.0.* job

For basic information about setting up GROMACS jobs, read the GROMACS documentation. A collection of sample jobs is in /usr/local/gromacs/share/tutor.
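To experiment with one of the bundled examples, you might first copy it into your own data area; in the sketch below, the example name 'speptide' and the destination directory are assumptions to adjust for your own case:

# list the bundled examples, then copy one into your own space (sketch)
ls /usr/local/gromacs/share/tutor
cp -r /usr/local/gromacs/share/tutor/speptide /data/$USER/gromacs_test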

Biowulf nodes have 2 processors (the default gige nodes and the ipath nodes), 4 processors (the dual-core 'dc' nodes), or 8 processors (the IB nodes). Choose the number of processors (np) to match the type of node you request.

Gromacs 4.* is significantly different from Gromacs 3.* versions. The grompp and mdrun_mpi commands now require fewer parameters, as in the example below. Please see the Gromacs documentation for more information.
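For example, grompp no longer needs the -np flag. If you prefer to name the inputs explicitly rather than rely on the defaults (grompp.mdp, conf.gro, topol.top), a Gromacs 4 preprocessing step might look like this sketch, with placeholder file names:

# Gromacs 4: grompp with explicit inputs (file names below are placeholders);
# run with no arguments it falls back to grompp.mdp, conf.gro and topol.top
grompp -f md.mdp -c conf.gro -p topol.top -o topol.tpr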

Sample script for a GROMACS 4.0.* run on gige or ib:

#!/bin/bash
# this file is Run_Gromacs
#PBS -N Gromacs
#PBS -k oe
#PBS -m be

# set up PATH for gige or ib nodes
export PATH=/usr/local/openmpi/bin:/usr/local/gromacs/bin:$PATH

cd /data/user/my_gromacs_dir

grompp > outfile 2>&1
/usr/local/openmpi/bin/mpirun -machinefile $PBS_NODEFILE -np $np \
     /usr/local/gromacs/bin/mdrun_mpi >> outfile 2>&1

Note that with Gromacs 4.0.*, the '-np #' flag appears in only one place, on the mpirun command. This is a change from Gromacs 3.*, where the -np flag was required for the mpirun command and again for the grompp and mdrun commands.
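A side-by-side sketch of the difference (file names are placeholders):

# Gromacs 3.*: -np must be given to grompp, to mpirun, and again to mdrun
grompp -np $np -f md.mdp -c conf.gro -p topol.top -o topol.tpr
mpirun -machinefile $PBS_NODEFILE -np $np mdrun_mpi -np $np -s topol.tpr

# Gromacs 4.0.*: -np is given to mpirun only
grompp -f md.mdp -c conf.gro -p topol.top -o topol.tpr
mpirun -machinefile $PBS_NODEFILE -np $np mdrun_mpi -s topol.tpr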

Gromacs 4.0.3 has been built with openmpi, and the same binary will work on gige, ib, or ipath nodes. However, the PATH in the script above needs to be set differently for ipath nodes:

# set up PATH for ipath nodes
export PATH=/usr/local/openmpi-ipath/bin:/usr/local/gromacs/bin:$PATH

The script is submitted with the qsub command. Choose the number of processors (np) to match the type of node, as in the examples below.

Submitting to IB nodes (8 processors per node):

qsub -v np=32 -l nodes=4:ib Run_Gromacs

Submitting to Ipath nodes (2 processors per node):

qsub -v np=16 -l nodes=8:ipath Run_Gromacs 

Submitting to gige nodes (2 processors per node):

qsub -v np=4 -l nodes=2 Run_Gromacs

Submitting to dual-core gige nodes (4 processors per node):

qsub -v np=8 -l nodes=2:dc Run_Gromacs
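In each case np equals the number of nodes times the processors per node. A small sketch of that arithmetic for a hypothetical 4-node dc job, using the Run_Gromacs script above:

# np = nodes x processors-per-node; dc nodes have 4 processors each
nodes=4
qsub -v np=$((nodes * 4)) -l nodes=${nodes}:dc Run_Gromacs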

Gromacs 4.0.3 on Myrinet

Note that Gromacs runs much faster on Infiniband or Infinipath. Benchmarks for Gromacs 4.0.3 on Myrinet are shown in the benchmark table. This is expected to be the last version of Gromacs built for Myrinet, since the Myrinet nodes will eventually be retired.

Sample batch script:

#!/bin/bash
# this file is Run.myr

export PATH=/usr/local/mpich-gm2k/bin:/usr/local/gromacs-4.0.3-myr2k/bin:$PATH

cd /data/user/mydir

grompp > output 2>&1
mpirun -machinefile $PBS_NODEFILE -np $np \
           /usr/local/gromacs-4.0.3-myr2k/bin/mdrun_mpi_myr >> output 2>&1

Submit this job with:

qsub -v np=8 -l nodes=4:o2200:myr2k /data/user/gromacs/Run.myr

Gromacs 3.3.3 on Infinipath

It is assumed that most users will use the latest version of Gromacs in /usr/local/gromacs. This section is provided for users who wish to complete projects using Gromacs 3.3.3. Note that the Gromacs 3.3.3 Infinipath build is in /usr/local/gromacs-3.3.3/bin-ipath/.

Sample script for a Gromacs 3.3.3 job submitted to the Infinipath/Opteron nodes:

#!/bin/csh
# This is file Run_Gromacs
#PBS -N Gromacs
#PBS -k oe
#PBS -m be
#
set path = (/usr/local/gromacs-3.3.3/bin-ipath $path .)
cd /data/user/my_gromacs_dir

grompp -np $np -shuffle -sort -f Grompp.mdp -c Conf.gro  -p Topol.top \
       -o topol.tpr >>&! output

mpirun -machinefile $PBS_NODEFILE -np $np /usr/local/gromacs-3.3.3/bin-ipath/mdrun_mpi \
       -np $np -s topol.tpr -o traj.trr  -c out.after_md -v >>&! output

This script would be submitted with:

qsub -v np=16 -l nodes=8:ipath Run_Gromacs

Gromacs 3.3.3 on gige

It is assumed that most users will use the latest version of Gromacs in /usr/local/gromacs. This section is provided for users who have half-completed projects using Gromacs 3.3.3. Note that the Gromacs 3.3.3 gige build is in /usr/local/gromacs-3.3.3/.

Sample script for a Gromacs 3.3.3 job submitted to the gige (including dc) nodes:

#!/bin/csh
#  This is file Run_Gromacs
#PBS -N GROMACS
#PBS -k oe
#PBS -m be
#
set path = (/usr/local/mpich/bin /usr/local/gromacs-3.3.3/bin $path .)
cd /data/username/my_gromacs_runs/xyz/

grompp -np $np -shuffle -f md -c cpeptide_b4md -p cpeptide \
      -o cpeptide_md >&! out.run

mpirun -machinefile $PBS_NODEFILE -np $np /usr/local/gromacs-3.3.3/bin/mdrun_mpi \
      -np $np -s cpeptide_md -o cpeptide_md -c cpeptide_after_md -v >>&! out.run

Note that the '-np $np' flag is needed in two places on the last command: once for mpirun and once for the mdrun program. This job would be submitted with:

qsub -v np=4 -l nodes=2:o2800 Run_Gromacs

Replica Exchange with Gromacs 4.0*

Details about running replica exchange with Gromacs are on the Gromacs Wiki. Multiple tpr files need to be generated from multiple *.mdp files with different temperatures. Below is a sample script for generating the tpr files. (courtesy Jeetain Mittal, NIDDK)

#!/bin/csh -f
# Build one tpr file per replica (40 replicas) for one stage of the run.
# Arguments: 1 = force field name, 2 = stage number

set ff = $argv[1]            # force field, e.g. amber03d
set s  = $argv[2]            # stage: 1 = initial run, >1 = continuation
set proot = 2f4k
set i = 0

while ( $i < 40 )

    set fileroot = "${proot}_${ff}"
    set this = "trexr"

    if ( $s == 1 ) then
        # initial run: every replica starts from the unfolded structure
        set mdp = "mdp/trex_ini${i}.mdp"
        set gro = "unfolded.gro"
    else
        # continuation: start from the final coordinates of the previous stage
        set sprev = $s
        @ sprev--
        set mdp = "mdp/trex_cont${i}.mdp"
        set gro = "data/gro/${fileroot}_${this}_s${sprev}_nd${i}.gro"
    endif

    # generate the tpr file for replica $i
    grompp -v -f $mdp -c $gro \
        -o data/tpr/${fileroot}_${this}_nd${i}.tpr \
        -p ${fileroot}_ions.top

    @ i++
end
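The script above assumes the per-replica parameter files (mdp/trex_ini${i}.mdp and mdp/trex_cont${i}.mdp) already exist, each differing only in temperature. Below is a minimal sketch of one way to generate the initial set from a single template; the template file name and the evenly spaced temperature ladder are assumptions:

#!/bin/bash
# Sketch only: create mdp/trex_ini0.mdp ... mdp/trex_ini39.mdp from a single
# template by substituting the reference temperature.  The template name
# (trex_template.mdp) and the temperature ladder are assumptions; ref_t is
# the reference-temperature parameter in a Gromacs .mdp file.
mkdir -p mdp
i=0
for temp in $(seq 300 5 495); do      # 40 temperatures: 300, 305, ..., 495 K
    sed "s/^ref_t.*/ref_t            = ${temp}/" trex_template.mdp \
        > mdp/trex_ini${i}.mdp
    i=$((i+1))
done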

Gromacs 4.0 can run with each replica on multiple processors. It is most efficient to run each replica on a dual-core node using all the processors on that node. This requires creating a specialized list of processors with the command make-gromacs-nodefile-dc (which is in /usr/local/bin) as in the sample script below.

#!/bin/bash
# this file is Run_Gromacs_RE
#PBS -N Gromacs_RE
#PBS -k oe
#PBS -m be

# set up PATH for gige or ib nodes
export PATH=/usr/local/openmpi/bin:/usr/local/gromacs/bin:$PATH

cd /data/user/my_gromacs_dir

#create the specialized list of processors for RE
make-gromacs-nodefile-dc

/usr/local/openmpi/bin/mpirun -machinefile ~/gromacs_nodefile.$PBS_JOBID \
      -np $np /usr/local/gromacs/bin/mdrun_mpi \
      -multi $n -replex 2000 >> outfile 2>&1

Submit this script to the dual-core nodes with:

qsub -v np=128,n=32 -l nodes=32:dc Run_Gromacs_RE

The above command will submit the job to 32 dual-core (o2800 or o2600) nodes. Each of the 32 replicas will run on all 4 processors of its node. The total number of processors (np=128) and the number of replicas (n=32, one per node) are passed to the batch script via the -v flag of qsub.
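To run a different number of replicas, scale both values together. For example, a hypothetical 16-replica job on 16 dual-core nodes would be submitted with:

qsub -v np=64,n=16 -l nodes=16:dc Run_Gromacs_RE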

Optimizing your Gromacs job

It is critical to determine the appropriate number of nodes on which to run your job. As shown in the benchmarks below, different jobs scale differently: one job that scales well might run efficiently on up to 10 nodes, while another may scale to only 2 nodes. For some jobs, submitting to more nodes than the optimal number will actually make the job run slower.

To determine the optimal number of nodes for your own job, run short test jobs at several node counts and compare their performance, as in the sketch below.
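A minimal sketch of one way to do this, assuming a separate directory per node count, each containing its own copy of the input files and a batch script whose 'cd' line points at that directory (the directory names here are hypothetical):

#!/bin/bash
# Sketch: submit the same benchmark at several node counts (gige, 2 procs/node)
for nodes in 1 2 4 8; do
    qsub -v np=$((nodes * 2)) -l nodes=${nodes} scale_${nodes}/Run_Gromacs
done

# After the jobs finish, compare the ns/day that mdrun reports at the end
# of each md.log:
grep -H "Performance:" scale_*/md.log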

Monitoring your jobs
Benchmarks

Summary:

The DPPC membrane system from the Gromacs benchmark suite (benchmark plot: dppc_small). Detailed results.

Benchmarks for Gromacs 3.3.3
Benchmarks for Gromacs 3.3.1