NAMD on Biowulf

NAMD is a parallel molecular dynamics program for UNIX platforms designed for high-performance simulations in structural biology. It is developed by the Theoretical and Computational Biophysics Group at the Beckman Institute, University of Illinois at Urbana-Champaign. NAMD is particularly well suited to Beowulf clusters, as it was specifically designed to run efficiently on parallel machines.

NAMD was developed to be compatible with existing molecular dynamics packages, especially the packages X-PLOR and CHARMM, so it will accept X-PLOR and CHARMM input files. The output files produced by NAMD are also compatible with X-PLOR and CHARMM.

At a minimum, NAMD requires a PDB coordinate file, a PSF structure file, a force field parameter file (in CHARMM or X-PLOR format), and a NAMD configuration file.

Details of the input and output files are in the NAMD user guide, and a sample session is at the end of this page.

Create a batch command file:

-----------------------------------------------------------------------------
Sample batch script -- /data/username/namd/namd_run
-----------------------------------------------------------------------------
#!/bin/csh
#PBS -N NAMD
#PBS -k oe
#PBS -m be
#
set path=(/usr/local/namd/ $path .)
cd $PBS_O_WORKDIR

set arch=`uname -m`
make-namd-nodelist                  # required! writes the nodelist to ~/namd.$PBS_JOBID
charmrun.$arch /usr/local/namd/namd2.$arch +p$np ++nodelist ~/namd.$PBS_JOBID bpti.inp >& bpti.out
rm ~/namd.$PBS_JOBID                # remove the nodelist file when the run finishes

Submit this job with the command:

qsub -v np=8 -l nodes=4:o2800 /data/username/namd/namd_run

This job will be run on 4 o2800 (2.8 GHz Opteron, Gigabit Ethernet connectivity) nodes, using both processors of each node.

qsub -v np=16 -l nodes=8:o2200 /data/username/namd/namd_run

This job will be run on 8 o2200 (2.2 GHz Opteron) nodes, using both processors of each node.

Our benchmarks indicate that NAMD performs poorly on the Myrinet nodes. We do not recommend running NAMD over Myrinet.

A version of NAMD built for the Infiniband-connected nodes is made available by the Biowulf staff. This binary, namd2.ib, will run only on the Infiniband nodes (not to be confused with the Infinipath version in the next section). Note also that the ib nodes have eight cores per node, so be sure to adjust the processor count accordingly.

Note: before using NAMD on the Infiniband network, you must create a ~/.mpd.conf file containing a secret word. This only needs to be done once, and the file needs no further maintenance:

% echo 'password=password' > ~/.mpd.conf
% chmod 400 ~/.mpd.conf

Create a batch command file:

-----------------------------------------------------------------------------
Sample batch script -- /data/username/namd/namd_run
-----------------------------------------------------------------------------
#!/bin/bash
#PBS -N myjob
#PBS -k oe
#PBS -m be
#

export PATH=/usr/local/namd:$PATH
cd $PBS_O_WORKDIR

charmrun.ib `which namd2.ib` +p$np myjob.namd > out.log

Submit the job with the command:

qsub -v np=64 -l nodes=8:ib /data/username/namd/namd_run

This job will be run on 8 ib (2.8 GHz dual-socket, Quad-core Xeon, 16 Gb/s Infiniband connectivity) nodes, launching eight processes per node.

A version of NAMD has been built with the Pathscale compilers for the Infinipath network. A NAMD job using this version requires a slightly different job submission script, as below:

-----------------------------------------------------------------------------
Sample batch script -- /data/username/namd/namd_run.ib
-----------------------------------------------------------------------------
#!/bin/csh
#PBS -N myjob
#PBS -k oe
#PBS -m be
#
set path=(/usr/local/namd-ib $path .)
setenv LD_LIBRARY_PATH /usr/local/namd-ib/

cd $PBS_O_WORKDIR

charmrun /usr/local/namd-ib/namd2 +p$np myjob.conf >& myjob.log

Submit this job to the batch system:

qsub -v np=16 -l nodes=8:ipath namd_run.ib

This will submit the job to 8 Infinipath nodes, with 2 processors per node.

  1. Choosing the number of processors/nodes: When submitting a job with the qsub command, the number of processors (np=#) should equal the combined processor count of the requested nodes (see the examples below). Follow this rule unless the job requires more memory than the node can accommodate with all processors in use. The Benchmarks section of this page may help you choose the appropriate nodes and number of processors.
  2. Nodelist: When using the Ethernet version of NAMD, the batch command file must include the 'make-namd-nodelist' utility, which is in the directory /usr/local/bin/. (This step is not required for Infinipath jobs.) The utility creates a file in your home directory called namd.PBS_JOBID, which is used when running the NAMD job. This file can be deleted at the end of the job, as in the batch scripts above.
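
For example, on the dual-processor o2800 nodes the processor count is twice the node count, while on the eight-core ib nodes it is eight times the node count:

  # dual-processor o2800 nodes: np = 2 x 8 nodes = 16
  qsub -v np=16 -l nodes=8:o2800 /data/username/namd/namd_run

  # eight-core ib nodes: np = 8 x 4 nodes = 32
  qsub -v np=32 -l nodes=4:ib /data/username/namd/namd_run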

Running replica exchange simulations is a little different, since the namd2 binary has to be run in standalone mode rather than via the charmrun script. The directory /usr/local/namd/replica/example contains a set of example replica exchange files.

  1. Set up a replica exchange configuration file along the lines of the one below. At a minimum, you will need to modify the number of replicas, the temperature range, the run lengths, and the input/output file names for your own system.

    ----------------- file fold_alanin.conf------------------------------------
    # configuration for replica exchange scripts
    # run simulation: tclsh ../replica_exchange.tcl fold_alanin.conf
    # to continue: tclsh ../replica_exchange.tcl restart_1.conf
    # view in VMD:  source fold_alanin.conf; source ../show_replicas.tcl
    # add continued:   source restart_1.conf; source ../show_replicas.tcl
    # show both:  vmd -e load_all.vmd
    
    set num_replicas 8
    set min_temp 300
    set max_temp 600
    set steps_per_run 1000
    set num_runs 10000
    # num_runs should be divisible by runs_per_frame * frames_per_restart
    set runs_per_frame 10
    set frames_per_restart 10
    set namd_config_file "alanin_base.namd"
    set output_root "output/fold_alanin" ; # directory must exist
    
    # the following used only by show_replicas.vmd
    set psf_file "alanin.psf"
    set initial_pdb_file "unfolded.pdb"
    set fit_pdb_file "alanin.pdb"
    
    set namd_bin_dir /usr/local/NAMD-2.6-Linux-amd64-TCP-icc
    set server_port 3177
    
    # NOTE:  Running namd2 through charmrun interferes with socket connections;
    # run the namd2 binary directly (in standalone mode).  MPI might work.
    
    set spawn_namd_command \
      [list spawn_namd_rsh "cd [pwd]; [file join $namd_bin_dir namd2] +netpoll" \
      [read [open $env(PBS_NODEFILE) "r"]] ]
    

  2. Set up a batch script:

    ---------file re.bat-------------------------
    #!/bin/csh
    #PBS -N NAMD_RE
    #PBS -j oe
    #PBS -m be
    
    cd /data/user/namd/re
    tclsh /usr/local/namd/replica/replica_exchange.tcl fold_alanin.conf
    

  3. Check that the output directory exists, otherwise the job will die. In this example, output_root is set to output/fold_alanin, so the directory output must exist under the working directory /data/user/namd/re.
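
    The directory can be created before submitting the job, for example:

    % mkdir -p /data/user/namd/re/output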

  4. Note the number of replicas in the config file. In the example above, in Step 1, there are 8 replicas. The number of nodes requested should be half the number of replicas, so that each replica runs on its own processor of the dual-processor nodes. Submit the job:
    qsub -l nodes=4:x86-64 re.bat
    

In our experience, most Ethernet NAMD jobs scale to 16 processors or fewer before efficiency drops below 70%. Some types of jobs scale better. Users who wish to run on more than 16 processors should justify this by running their own benchmarks, as detailed at the end of this section. You are welcome to submit your own benchmark runs to be added to this section; please send email to staff@helix.nih.gov.

apoa1

For this benchmark, NAMD scales to about 32 processors on the o2800 gige nodes, and to about 96 processors on the e2800 Infiniband nodes; the efficiency drops below 70% on larger numbers of processors. Full benchmark details are available.

Users who wish to run NAMD jobs on large numbers of processors should run their own benchmarks and verify that their particular job is at least 70% efficient on the desired number of nodes, as in the examples above. To run a benchmark, run a short job on 1, 2, 4, 8, 16, 32, etc. processors and record the days/ns time reported in the NAMD output ('grep Bench' in your NAMD output will show it). For NAMD benchmarks, the preferred metric is days/ns, not walltime.

             100 * (days/ns on 1 processor)
Efficiency = -------------------------------
              n * (days/ns on n processors)
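
The bookkeeping can be scripted. The sketch below is not a Biowulf-provided tool: it assumes your benchmark logs are named bench_1.out, bench_2.out, etc. (one per processor count), and it pulls the last days/ns value from each log's 'Benchmark time' lines.

  #!/bin/bash
  # print parallel efficiency relative to the 1-processor run
  getdns() {
      # extract the last value preceding the days/ns token in a NAMD log
      awk '/Benchmark time/ { for (i = 1; i < NF; i++) if ($(i+1) == "days/ns") t = $i }
           END { print t }' "$1"
  }
  t1=`getdns bench_1.out`
  for n in 2 4 8 16 32; do
      tn=`getdns bench_$n.out`
      # Efficiency = 100 * t1 / (n * tn)
      echo "np=$n  efficiency = `echo "scale=1; 100 * $t1 / ($n * $tn)" | bc`%"
  done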

This session uses the BPTI example that is described in the NAMD User Guide.

  1. Obtain the PDB file 6pti.pdb (you can copy it from /pdb/pdb/pt/pdb6pti.ent.gz and uncompress it; the entire Protein Data Bank, updated nightly, is mirrored on the Helix Systems). Also obtain the appropriate topology and parameter files. These are available in /usr/local/charmm/c31b1/toppar/. A collection is available at http://www.pharmacy.umaryland.edu/faculty/amackere/force_fields.htm.

  2. Run psfgen to create the PSF file.

    $ cd /data/username/namd
    $ gunzip -c /pdb/pdb/pt/pdb6pti.ent.gz > 6pti.pdb
    $ grep -v '^HETATM' 6pti.pdb > 6pti_protein.pdb
    $ grep 'HOH' 6pti.pdb > 6pti_water.pdb
    $ cat > psfgen.inp << END
    topology /usr/local/charmm/c31b1/toppar/top_all22_prot.inp
    segment BPTI {
      pdb 6pti_protein.pdb
    }
    patch DISU BPTI:5 BPTI:55
    patch DISU BPTI:14 BPTI:38
    patch DISU BPTI:30 BPTI:51
    alias atom ILE CD1 CD
    coordpdb 6pti_protein.pdb BPTI
    alias residue HOH TIP3
    segment SOLV {
      auto none
      pdb 6pti_water.pdb
    }
    alias atom HOH O OH2
    coordpdb 6pti_water.pdb SOLV
    writepsf bpti.psf
    guesscoord
    writepdb bpti.pdb
    END
    $ /usr/local/namd/psfgen.x86_64 < psfgen.inp
    

  3. Create the NAMD configuration file -- the BPTI example is provided in the NAMD user guide.
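
    The bpti.inp file referenced by the batch script in the next step is this
    configuration file. A minimal sketch is shown below; it follows the spirit
    of the user guide's BPTI example, but the run settings here are
    illustrative values, not prescribed ones (the parameter file is assumed to
    sit alongside the topology file used in step 2):

    ------------------------------------------------
    # bpti.inp -- minimal NAMD configuration (illustrative)

    # input files generated by psfgen in step 2
    structure          bpti.psf
    coordinates        bpti.pdb

    # CHARMM force field parameters
    paraTypeCharmm     on
    parameters         /usr/local/charmm/c31b1/toppar/par_all22_prot.inp

    # illustrative run settings -- adjust for your own simulation
    temperature        300
    exclude            scaled1-4
    1-4scaling         1.0
    cutoff             12.0
    switching          on
    switchdist         10.0
    pairlistdist       13.5
    timestep           1.0

    outputName         bpti_out
    numsteps           1000
    ------------------------------------------------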

  4. Create the batch command file, in this case called /data/username/namd/bpti.run

    ------------------------------------------------
    #!/bin/csh
    #PBS -N NAMD
    #PBS -k oe
    #PBS -m be
    #
    set path=(/usr/local/namd/ $path .)
    cd $PBS_O_WORKDIR
    set arch=`uname -m`
    
    make-namd-nodelist                  # required! writes the nodelist to ~/namd.$PBS_JOBID
    charmrun.${arch} /usr/local/namd/namd2.${arch} +p$np ++nodelist \
          ~/namd.$PBS_JOBID bpti.inp >& bpti.out
    rm ~/namd.$PBS_JOBID                # remove the nodelist file when the run finishes
    

  5. Decide on the number of processors, and submit the job with the command
    % qsub -v np=4 -l nodes=2:o2800 /data/username/namd/bpti.run
    

Features of NAMD

NAMD User Guide (also available in PDF -- 541KB)

Overview of NAMD and Molecular Dynamics (in PDF)