NAMD on Biowulf

NAMD is a parallel molecular dynamics program for UNIX platforms designed for high-performance simulations in structural biology. It is developed by the Theoretical Biophysics Group at the Beckman Center, University of Illinois.

NAMD was developed to be compatible with existing molecular dynamics packages, especially the packages X-PLOR and CHARMM, so it will accept X-PLOR and CHARMM input files. The output files produced by NAMD are also compatible with X-PLOR and CHARMM.

NAMD is closely integrated with VMD for visualization and analysis.


At a minimum, NAMD requires a PDB file of atomic coordinates, a PSF file describing the molecular structure, a CHARMM- or X-PLOR-format force field parameter file, and a NAMD configuration file specifying the simulation options.

Details of the input and output files are in the NAMD user guide, and a sample session is at the end of this page.
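
As a minimal illustrative sketch (not a recommended protocol; the structure, coordinate, and parameter file names below are placeholders), a configuration file can be written like this:

# Illustrative sketch only: write a minimal NAMD configuration file.
# All file names and values here are placeholders.
cat > sample.namd << 'END'
structure          sample.psf            ;# PSF describing the system
coordinates        sample.pdb            ;# initial coordinates
paraTypeCharmm     on
parameters         par_all22_prot.inp    ;# CHARMM parameter file
temperature        300                   ;# initial temperature (K)
outputName         sample_out            ;# prefix for output files
timestep           1.0                   ;# integration timestep (fs)
cutoff             12.0
switching          on
switchdist         10.0
exclude            scaled1-4
1-4scaling         1.0
numsteps           1000                  ;# number of MD steps
END

Real simulations typically need additional settings (periodic cell, PME, constraints, and so on); see the user guide for the full set of options.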

Chart of NAMD Interconnects and Versions


The chart below shows the NAMD versions maintained by the staff. Older versions are available but not supported; using the most recent version is advisable. Older versions are maintained for users who need to continue projects with the same NAMD version or who require legacy functionality. Use of the 32-bit NAMD versions is deprecated, since there are no longer any 32-bit nodes on the cluster.

Version        Ethernet    Infiniband   Infinipath
2.9 (x86_64)   Available   Available    Available
2.8 (x86_64)   Available   Available    Available
2.7 (x86_64)   Available   Available    Available
2.6 (x86_64)   Available   Available    Available
2.6 (i686)     Available   N/A          N/A

On Biowulf, NAMD installations are arranged as follows:

/usr/local/namd/[2.6|2.7|2.8|2.9]/[x86_64|i686]/[eth|ib|ipath]

according to version number, architecture, and target interconnect, respectively. For instance, the 64-bit NAMD 2.8 binary for Infiniband is located in:

/usr/local/namd/2.8/x86_64/ib
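
To use a particular build, prepend its directory to your PATH. The following is an illustrative sketch, choosing the 64-bit NAMD 2.9 Infiniband build according to the layout described above:

# Illustrative only: select the 64-bit NAMD 2.9 Infiniband build and
# confirm which namd2 binary will be picked up from the PATH.
NAMD_VER=2.9
ARCH=x86_64
NETWORK=ib
export PATH=/usr/local/namd/$NAMD_VER/$ARCH/$NETWORK:$PATH
which namd2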

 


Infiniband is the preferred network for running most MD jobs, including NAMD. The following sample runs a simple job using the Infiniband network for message passing:

Create a batch command file:

#!/bin/bash
#PBS -N myjob
#PBS -k oe
#PBS -m be
#

NAMD_VER=2.9
NETWORK=ib
TRANS=ibverbs

PATH=/usr/local/NAMD/$NAMD_VER/$NETWORK/$TRANS:$PATH

cd $PBS_O_WORKDIR

# Create host file (required)
make-namd-nodelist
charmrun ++nodelist ~/namd.$PBS_JOBID ++p $np `which namd2` myjob.namd > out.log
rm -f ~/namd.$PBS_JOBID


Submit the job with the command:

qsub -v np=64 -l nodes=8:ib /data/username/namd/namd_run

This job will be run on eight Infiniband-connected nodes, launching eight processes per node.


The following sample would run a simple job using the Infinipath network for message passing.

Create a batch command file:

#!/bin/bash
#PBS -N myjob
#PBS -k oe
#PBS -m be
#

NAMD_VER=2.9
NETWORK=ipath

PATH=/usr/local/NAMD/$NAMD_VER/$NETWORK:$PATH
PATH=/usr/local/OpenMPI/current/gnu/ipath/bin:$PATH

cd $PBS_O_WORKDIR

`which mpirun` -n ${np} `which namd2` myjob.namd > out.log

Submit the job with the command:

qsub -v np=16 -l nodes=8:ipath /data/username/namd/namd_run

This job will be run on eight Infinipath-connected nodes, launching two processes per node.


The following is a sample batch command file for submitting NAMD jobs to the cluster using Ethernet as the message-passing network.

#!/bin/bash
#PBS -N NAMD
#PBS -k oe
#PBS -m be

NAMD_VER=2.9
NETWORK=eth

PATH=/usr/local/namd/$NAMD_VER/$NETWORK:$PATH
cd $PBS_O_WORKDIR

# Create host file (required)
make-namd-nodelist
charmrun ++p $np ++nodelist ~/namd.$PBS_JOBID `which namd2` config.namd >& outputlog
rm ~/namd.$PBS_JOBID

Submit this job with the command:

qsub -v np=48 -l nodes=2:x2800 /data/username/namd/namd_run

This job will be run on 2 x2800 (2.8 GHz Xeon X5660, gigabit Ethernet connectivity) nodes, launching 24 processes per node.

To run NAMD on GPUs, see the NAMD on GPUs page.

NAMD v2.7 and v2.8 have also been built with Plumed 1.3, a plugin for free-energy calculations in molecular systems (see the Plumed website). Free-energy calculations can be performed as a function of many order parameters, with a particular focus on biological problems, using state-of-the-art methods such as metadynamics, umbrella sampling, and Jarzynski-equation-based steered MD.

Sample batch script:

#!/bin/bash

#PBS -N myjob
#PBS -m be

module load namd/2.8+plumed1.3
cd $PBS_O_WORKDIR

`which mpirun`  -machinefile $PBS_NODEFILE -np $np `which namd2`  apoa1.namd >& outputlog

This batch script could be submitted with:

qsub -v np=32 -l nodes=2:x2800 ./run

Additional software transports are available for the Infiniband and Ethernet networks. Users will not typically want to use these transports, because the choices listed in the examples above are almost always optimal. There are, however, situations in which another transport may be desirable.

  1. Choosing the number of processors/nodes: When submitting a job with the qsub command, the number of processors (np=#) should be equal to the combined processor count on those nodes. This is recommended unless the job requires more memory than can be accommodated by the node with all processors in action. The Benchmarks section of this page may help you choose the appropriate nodes and number of processors.
  2. Nodelist: When using the Ethernet version of NAMD 2.6, the batch command file must include the 'make-namd-nodelist' utility, which is in the directory /usr/local/bin/. (This step is not required for Infinipath or Infiniband jobs.) The utility creates a file in your home directory called namd.PBS_JOBID, which is used when running the NAMD job. This file can be deleted at the end of the job, as is done in the Ethernet batch script example above. A sketch of what this file typically contains follows this list.
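
As an illustrative sketch (the hostnames are placeholders; the actual contents depend on the nodes PBS assigns to your job), the generated file is a plain charmrun nodelist that you can inspect from within the batch script:

# Illustrative only: print the charmrun nodelist written by make-namd-nodelist.
cat ~/namd.$PBS_JOBID
# typical contents (hostnames are placeholders):
#   group main
#   host p1234
#   host p1235
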
Replica Exchange in v2.9 uses MPI rather than Tcl, so the batch scripts are different. v2.9 Replica Exchange jobs will run on IB, Ipath, or GigE nodes. Since Replica Exchange jobs do not have as much inter-node communication as regular NAMD jobs, the low latency of the IB nodes is probably not very useful, and users will get better performance on the newer, faster x2800 or e2666 GigE nodes.

Sample session: Copy the sample scripts from /usr/local/NAMD/2.9/ib/lib/replica/example into a directory. Set up a batch script along the following lines:

#!/bin/bash
#PBS -N Replica

# set up the paths for OpenMPI and NAMD
# use 'module load namd/2.9-re-ib' for IB
# use 'module load namd/2.9-re-ipath' for Ipath

module load namd/2.9-re-eth

cd $PBS_O_WORKDIR

# a clean output directory is required, otherwise the job will exit with errors
rm -rf output; mkdir output; (cd output; mkdir 0 1 2 3 4 5 6 7)

`which mpirun` -machinefile $PBS_NODEFILE -np $np `which namd2` +replicas $numreps \
      job0.conf +stdout output/%d/job0.%d.log

Submit this job with, for example:

 qsub -v np=48,numreps=8 -l nodes=2:x2800 re.bat  (Ethernet, 2 x2800 nodes, 48 total cores)
 qsub -v np=16,numreps=8 -l nodes=2:ib re.bat     (IB, 2 nodes with 16 cores total)
 qsub -v np=16,numreps=8 -l nodes=8:ipath re.bat  (Ipath, 8 nodes with 16 cores total)
where np is the number of processors (same as MPI ranks in the NAMD docs) and numreps is the number of replicas. Note that the value of np should be a multiple of numreps. Use 'freen' to see the number of cores on any type of node.
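
To catch a mismatched request early, a small guard like the following (an illustrative addition, not part of the installed scripts) can be placed in the batch script before the mpirun line:

# Illustrative only: abort if np is not a multiple of numreps, so that
# each replica gets the same number of MPI ranks.
if [ $(( np % numreps )) -ne 0 ]; then
    echo "np=$np must be a multiple of numreps=$numreps" >&2
    exit 1
fi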


In NAMD versions before 2.9, running Replica Exchange simulations is a little different, since the namd2 binary has to be run in standalone mode and not via the charmrun script. The directory /usr/local/NAMD/2.8/lib/replica/example contains a set of example replica exchange files.

  1. Set up a replica exchange configuration file along the lines of the one below, modifying the settings (number of replicas, temperature range, file names, namd_bin_dir, server port) for your own system.

    ----------------- file fold_alanin.conf------------------------------------
    # configuration for replica exchange scripts
    # run simulation: tclsh ../replica_exchange.tcl fold_alanin.conf
    # to continue: tclsh ../replica_exchange.tcl restart_1.conf
    # view in VMD:  source fold_alanin.conf; source ../show_replicas.tcl
    # add continued:   source restart_1.conf; source ../show_replicas.tcl
    # show both:  vmd -e load_all.vmd
    
    set num_replicas 8
    set min_temp 300
    set max_temp 600
    set steps_per_run 1000
    set num_runs 10000
    # num_runs should be divisible by runs_per_frame * frames_per_restart
    set runs_per_frame 10
    set frames_per_restart 10
    set namd_config_file "alanin_base.namd"
    set output_root "output/fold_alanin" ; # directory must exist
    
    # the following used only by show_replicas.vmd
    set psf_file "alanin.psf"
    set initial_pdb_file "unfolded.pdb"
    set fit_pdb_file "alanin.pdb"
    
    set namd_bin_dir /usr/local/NAMD-2.6-Linux-amd64-TCP-icc
    set server_port 3177
    
    # NOTE:  Running namd2 through charmrun interferes with socket connections;
    # run the namd2 binary directly (in standalone mode).  MPI might work.
    
     set spawn_namd_command \
       [list spawn_namd_rsh "cd [pwd]; [file join $namd_bin_dir namd2] +netpoll" \
       [read [open $env(PBS_NODEFILE) "r"]] ]
    

  2. Set up a batch script:

    ---------file re.bat-------------------------
    #!/bin/csh
    #PBS -N NAMD_RE
    #PBS -j oe
    #PBS -m be
    
    cd /data/user/namd/re
    tclsh /usr/local/NAMD/2.8/lib/replica/replica_exchange.tcl fold_alanin.conf
    

    To use a different version of NAMD, replace the path /usr/local/NAMD/2.8/lib/replica/ in the script above. All versions of NAMD reside in /usr/local/NAMD/

  3. Check that the output directory exists; otherwise the job will die. In this example, output_root is set to output/fold_alanin, so the directory /data/user/namd/re/output must exist before the job starts.

  4. Note the number of replicas in the config file; in the example in Step 1 above, there are 8 replicas. The number of nodes requested should be half the number of replicas. Submit the job:
    qsub -l nodes=4 re.bat
    

NAMD jobs will scale differently according to message-passing interconnect, system size, NAMD version, and any number of other factors. However, in general, when running on Ethernet, jobs of 16 to 32 processors will often be the limit of reasonable scalability. On Infiniband or Infinipath, jobs generally scale out to as many processors as you can get, except for jobs simulating very small systems.

Users are encouraged to run their own benchmarks to determine the most efficient way to run their jobs. NAMD runs can be configured to provide benchmark information for this type of tuning. For scaling purposes, we recommend an efficiency of 70% or greater when performing long runs on large numbers of processors; efficiency lower than 70% in a long-running job should be viewed as a poor use of resources. A simple formula for determining job-scaling efficiency is:

       t1
e = --------
     n * t2

Where e is efficiency, n is the number of processors running the simulation, t1 is the performance time running on one processor and t2 is the performance time running on n processors. See the NAMD benchmark page for more detailed benchmark information.
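
For example, the following sketch computes the efficiency from two benchmark timings (the numbers are placeholders, not real measurements; substitute your own values in consistent units):

# Illustrative only: compute scaling efficiency e = t1 / (n * t2).
t1=100.0   # benchmark time on 1 processor (placeholder)
t2=4.0     # benchmark time on n=32 processors (placeholder)
n=32
awk -v t1=$t1 -v t2=$t2 -v n=$n 'BEGIN { printf "efficiency = %.2f\n", t1/(n*t2) }'
# prints: efficiency = 0.78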



This session uses the BPTI example that is described in the NAMD User Guide.

  1. Obtain the PDB file 6pti.pdb (you can copy it from /pdb/pdb/pt/pdb6pti.ent.gz and uncompress it; the entire Protein Data Bank, updated nightly, is mirrored on the Helix Systems). Also obtain the appropriate topology and parameter files, which are available in /usr/local/charmm/c31b1/toppar/.
  2. Run psfgen to create the PSF file.

    $ export PATH=/usr/local/namd/2.7/x86_64/eth:$PATH
    $ cd /data/username/namd
    $ gunzip -c /pdb/pdb/pt/pdb6pti.ent.gz > 6pti.pdb
    $ grep -v '^HETATM' 6pti.pdb > 6pti_protein.pdb
    $ grep 'HOH' 6pti.pdb > 6pti_water.pdb
    $ cat > psfgen.inp << END
    topology /usr/local/charmm/c31b1/toppar/top_all22_prot.inp
    segment BPTI {
      pdb 6pti_protein.pdb
    }
    patch DISU BPTI:5 BPTI:55
    patch DISU BPTI:14 BPTI:38
    patch DISU BPTI:30 BPTI:51
    alias atom ILE CD1 CD
    coordpdb 6pti_protein.pdb BPTI
    alias residue HOH TIP3
    segment SOLV {
      auto none
      pdb 6pti_water.pdb
    }
    alias atom HOH O OH2
    coordpdb 6pti_water.pdb SOLV
    writepsf bpti.psf
    guesscoord
    writepdb bpti.pdb
    END
    $ psfgen < psfgen.inp
    

  3. Create the NAMD configuration file -- the BPTI example is provided in the NAMD user guide.

  4. Create the batch command file, in this case called /data/username/namd/bpti.run

    ------------------------------------------------
    #!/bin/bash
    #PBS -N NAMD
    #PBS -k oe
    #PBS -m be
    
    NAMD_VER=2.7  # Desired NAMD version (2.6, etc)
    ARCH=`uname -m` # System architecture
    NETWORK=eth     # Network for message passing
    
    PATH=/usr/local/namd/$NAMD_VER/$ARCH/$NETWORK:$PATH
    cd $PBS_O_WORKDIR
    
    # Create host file (required)
    make-namd-nodelist
    charmrun `which namd2` +p$np ++nodelist ~/namd.$PBS_JOBID bpti.inp >& bpti.out
    rm ~/namd.$PBS_JOBID
    

  5. Decide on the number of processors, and submit the job with the command
    % qsub -v np=4 -l nodes=2:o2800 /data/username/namd/bpti.run
    

The NAMD 2.9 user guide.

The NAMD 2.8 user guide.

The NAMD 2.7 user guide.

The NAMD 2.6 user guide.