AMBER (Assisted Model Building with Energy Refinement) is a package of molecular simulation programs. AMBER contains a large number of modules; note that only the sander modules and pmemd are parallelized.
Version: Amber 10 (May 2008) and Amber 9 (July 2006)
The following builds of Amber are available on Biowulf:
Version | Executables in | Compiler | To be used on | Add this to path
10 | /usr/local/amber/exe | 64-bit Pathscale | x86_64 gige nodes | /usr/local/mpich-ps64/bin
10 | /usr/local/amber/exe.ib | 64-bit Pathscale | IB nodes | no path changes needed
9 | /usr/local/amber9/exe.mpich-ps64 | 64-bit Pathscale | x86_64 gige nodes | /usr/local/mpich-ps64/bin
9 | /usr/local/amber9/exe.ib | 64-bit Pathscale | IB nodes (ib) | no path changes needed
9 | /usr/local/amber9/exe.mpich-gm2k-pg | 32-bit Portland Group | Myrinet nodes (myr2k) | /usr/local/mpich-gm2k-pg/bin
9 | /usr/local/amber9/exe.mpich-pg | 32-bit Portland Group | all gige nodes | /usr/local/mpich-pg/bin
In your batch script, you need to set the appropriate path for the Amber version and nodes, as in the table above.
LEaP is a graphical builder of input files for AMBER modules. LEaP can be used via the Xwindows graphical interface xleap, or the terminal version tleap. To run xleap,
- Open an Xwindows session to Biowulf. (More information about Xwindows on Macs, Windows, and Unix desktop machines.)
- On Biowulf, type
biowulf% setenv AMBERHOME /usr/local/amber
biowulf% /usr/local/amber/exe/xleap
You should see the xleap window appear, in which you can type any LEaP commands.
- The AMBER tutorials have examples of setting up AMBER parameter/topology files using LEaP.
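For non-interactive use, tleap can read the same LEaP commands from a file. Below is a minimal sketch of such a command file; the file names are hypothetical, and the leaprc force-field name to source depends on your Amber version (check $AMBERHOME/dat/leap/cmd for what is installed):

```
# leap.in -- example tleap command file (file names are hypothetical)
source leaprc.ff99                 # load a force field shipped with Amber
mol = loadpdb mymol.pdb            # read a structure from a PDB file
saveamberparm mol md.top md.coor   # write parameter/topology and coordinates
quit
```

With AMBERHOME set as above, this would be run as: /usr/local/amber/exe/tleap -f leap.in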
For basic information about setting up an Amber job, see the Amber manual and the Amber tutorials. Also see Batch Queuing System in the Biowulf user guide, especially the section on Running MPICH Jobs under Batch. If you plan to use the Infinipath or Myrinet version, see the appropriate sections.
Sample script for a Sander or pmemd run on gige nodes:
#!/bin/csh
# This file is amber.run
#
#PBS -N sander
#PBS -m be
#PBS -k oe

set path = ( /usr/local/mpich-ps64/bin $path )
cd /data/user/amber/myproject
date
mpirun -machinefile $PBS_NODEFILE -np $np /usr/local/amber/exe/sander.MPI \
    -i md.in -o md.out -p md.top -c md.coor -x md.crd -e md.en \
    -inf md.info -r md.rst
For a pmemd run, the last line in the script would be replaced by:
mpirun -machinefile $PBS_NODEFILE -np $np /usr/local/amber/exe/pmemd \
    -O -i myin -c my1.x -o myout.pmemd
This script is submitted with the command:
qsub -v np=4 -l nodes=2:o2800 /data/user/amber/amber.run

This job will be run on 2 o2800 nodes, using both processors of each node.
Before submitting jobs to the Infinipath nodes, you must copy a file to your ~/.ssh directory:
% cd ~/.ssh

(If you get an error that the .ssh directory does not exist, type mkdir ~/.ssh to create the directory.)
% cp /usr/local/etc/ssh_config_ib config
% chmod 600 config

(If you already have an ssh config file, append the contents of /usr/local/etc/ssh_config_ib to it instead.) This ssh configuration needs to be done only once, before submitting your first IB job.
Sample script for a sander run on the Infinipath nodes:
#!/bin/csh -f
#PBS -N sander
#PBS -m be
#PBS -k oe

cd /data/user/mydir
time mpirun -machinefile $PBS_NODEFILE -np $np /usr/local/amber9/exe.ib/sander.MPI \
    -O -i mdin -c inpcrd -p prmtop -o mdout.ib.$np
This job would be submitted with the command:
qsub -v np=16 -l nodes=8:ipath /data/user/amber/myscript
Sample script for a Sander or pmemd run using the Myrinet version and Myrinet nodes:
#!/bin/csh
# This file is amber.myri.run
#
#PBS -N sander
#PBS -m be
#PBS -k oe

set path = ( /usr/local/mpich-gm2k-pg/bin $path )
cd $PBS_O_WORKDIR
date
mpirun -machinefile $PBS_NODEFILE -np $np /usr/local/amber9/exe.mpich-gm2k-pg/sander.MPI \
    -i md.in -o md.out -p md.top -c md.coor -x md.crd -e md.en \
    -inf md.info -r md.rst
To run a PMEMD job, the last line in the script would be replaced by something like:
mpirun -machinefile $PBS_NODEFILE -np $np /usr/local/amber9/exe.mpich-gm2k-pg/pmemd \
    -O -i myin -c my1.x -o myout.pmemd
This script is submitted with the command:

qsub -v np=8 -l nodes=4:o2200:myr2k /data/user/amber/amber.myri.run

This job will be run on 4 o2200 Myrinet nodes, using both processors of each node.
It is important to determine the appropriate number of nodes on which to run your job. Not all jobs may scale equally well.
To determine the optimal number of nodes:
- Set up a small version of your job. For example, if your project involves a simulation of 100 ps, set up a 1 ps job.
- Submit this job to 2 processors (1 node), 4 processors (2 nodes), 6 processors (3 nodes), and so on.
- Examine the results. You want to pick the number of processors with at least 60% efficiency.
Efficiency = (100 * Time on 1 processor) / (n * Time on n processors)
Efficiency for 4 processors = (100*794)/(4*301) =~ 66%
Efficiency for 8 processors = (100*794)/(8*217) =~ 46%
Thus, for this job it would be very inefficient to run on 8 processors. It would be best to submit to 4 processors (2 o2800 nodes).
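The efficiency arithmetic above can be reproduced with a short shell snippet. The timings (794 s on 1 processor, 301 s on 4, 217 s on 8) are the example values from the text; note that shell integer arithmetic truncates, so 66% and 46% come out as 65% and 45%:

```shell
# Example timings (seconds) from the benchmark discussed above
t1=794                                 # time on 1 processor
for n in 4 8; do
    case $n in
        4) tn=301 ;;                   # time on 4 processors
        8) tn=217 ;;                   # time on 8 processors
    esac
    eff=$(( 100 * t1 / (n * tn) ))     # Efficiency = 100*t1 / (n*tn)
    echo "Efficiency on $n processors: ${eff}%"
done
```

Anything below your efficiency threshold (60% here) means the extra processors are mostly wasted.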
Note that the scaling depends on the particular job and the type of node, so it is a good idea to run your own benchmarks.
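One way to run such a benchmark series is a loop over node counts. This sketch assumes a 1 ps test script named bench.run (a hypothetical name) that reads $np like the sample scripts above; the echo only prints each command for review, so remove it to actually submit:

```shell
# Print the qsub commands for a scaling benchmark on o2800 gige nodes
# (2 processors per node). bench.run is a hypothetical 1 ps test script.
for nodes in 1 2 3 4; do
    np=$(( 2 * nodes ))
    echo qsub -v np=$np -l nodes=${nodes}:o2800 /data/user/amber/bench.run
done
```

Comparing the wallclock times from the resulting output files gives the efficiency figures described above.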
You can use the Biowulf system monitors to watch your job. Click on 'List status of Running Jobs only', and then on your username in the resulting web page. This will pop up a small window showing the status of all nodes that are running your jobs. Ideally, all your nodes should be yellow (both processors in use). Watch for green or red nodes. Clicking on a node will pop up a window with information about the processes running on that node.
The PBS batch system will write two files called JOBNAME.oJOBNUM and JOBNAME.eJOBNUM in your home directory (e.g. sander.o90763 and sander.e90763). They contain the standard output and standard error from your job. Most problems can be diagnosed by examining these files.
Factor IX benchmark from the Amber benchmark suite. [Details] (May 2008)
- AMBER tutorials
- AMBER 10 Manuals
- Amber 9 Manual
- The AMBER homepage at Scripps.
- AMBER Mail Reflector Archive at Vanderbilt University.