Configuration
Compute Nodes
The Genepool system is made up of a heterogeneous collection of nodes to serve the diverse workloads of JGI users. The table below shows the current configuration.
Number of Nodes | Cores/Node | Memory/Node | Local Disk | Processor | Hostname | Vendor |
---|---|---|---|---|---|---|
2 | 32 | 1000 GB | 3.6 TB | Intel Xeon E5-4650L | mndlhm0205-ib, mndlhm0405-ib.nersc.gov | Appro |
5 | 32 | 500 GB | 3.6 TB | Intel Xeon E5-4650L | mndlhm[01-05]03.nersc.gov | Appro |
222 | 16 | 120 GB | 1.8 TB | Intel Xeon E5-2670 | mc01[55-72], mc02[01-68], mc04[01-68], mc05[01-64]-ib.nersc.gov | Appro |
450 | 8 | 48 GB | 1 TB | Intel Xeon L5520 2.27 GHz | sgi[01a01-06b40].nersc.gov | SGI |
64 | 8 | 48 GB | 500 GB | Intel Xeon L5520 2.27 GHz | quad[01-64].nersc.gov | SuperMicro |
20 | 8 | 144 GB | 512 GB | Intel Xeon L5520 2.27 GHz | x4170a[01-20].nersc.gov | Sun |
4 | 32 | 512 GB | 1 TB | AMD Opteron 2.28 GHz | gpht-[01-04].nersc.gov | Sun |
1 | 80 | 2 TB | 300 GB | Intel Xeon X7560 2.27 GHz | b2r2ibm1t-02.nersc.gov | IBM |
1 | 32 | 1 TB | 600 GB | Intel Xeon X7560 2.27 GHz | gptb-01.nersc.gov | Dell |
7 | 24 | 256 GB | 600 GB | Intel Xeon X7542 2.67 GHz | uv10-[1-7].nersc.gov | SGI |
Login Nodes
Genepool currently has four login nodes. Users land on one of the four when they ssh to genepool.nersc.gov. At present users are assigned to a login node in a round-robin fashion; soon the login nodes will sit behind a load balancer, and a user will land on the login node with the fewest connections. The login nodes are named genepool01, genepool02, genepool03, and genepool04; however, users should always access them via ssh username@genepool.nersc.gov, as in the example below. Each login node has 8 cores, 2.3 GHz processors, and 32 GB of RAM.
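A minimal login sketch (the hostname comes from the text above; `jdoe` is a placeholder for your NERSC username):

```bash
# Connect to Genepool; round-robin assignment (later, the load balancer)
# places you on one of genepool01-04. "jdoe" is a placeholder username.
ssh jdoe@genepool.nersc.gov

# After logging in, check which login node you landed on:
hostname
```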
Other Analysis Nodes
Genepool also has nodes used for pipeline control, pre- and post-processing of jobs, and analysis jobs. At present these nodes are allocated per group and are named gpintNN (NN = 01, 02, ...). Users should refer to the table below to determine which node belongs to their group and use only the nodes allocated to their group (see the example after the table).
Node Name | Legacy JGI Name | Other Names | Assigned Group |
---|---|---|---|
gpint01.nersc.gov | one.jgi-psf.org | one.jgi-psf.org | R & D |
gpint02.nersc.gov | bcg1.jgi-psf.org | bcg1.jgi-psf.org | Comparative Genomics (Vista) |
gpint03.nersc.gov | bcg2.jgi-psf.org | bcg2.jgi-psf.org | Comparative Genomics (Vista) |
gpint04.nersc.gov | merced.jgi-psf.org | merced.jgi-psf.org | IMG |
gpint05.nersc.gov | img-worker.jgi-psf.org | img-worker.jgi-psf.org | IMG |
gpint06.nersc.gov | zeus.jgi-psf.org | zeus.jgi-psf.org | GBP |
gpint07.nersc.gov | ranger.jgi-psf.org | ranger.jgi-psf.org | Plant |
gpint08.nersc.gov | boiler.jgi-psf.org | boiler.jgi-psf.org | Plant |
gpint09.nersc.gov | sedona.jgi-psf.org | sedona.jgi-psf.org | Plant |
gpint10.nersc.gov | willow.jgi-psf.org | willow.jgi-psf.org | Plant |
gpint11.nersc.gov | actinium.jgi-psf.org | actinium.jgi-psf.org | Plant |
gpint12.nersc.gov | bat.jgi-psf.org | bat.jgi-psf.org | R & D |
gpint13.nersc.gov | quarter.jgi-psf.org | quarter.jgi-psf.org | Fungal |
gpint14.nersc.gov | chekov.jgi-psf.org | chekov.jgi-psf.org | General purpose |
gpint15.nersc.gov | stimpy.jgi-psf.org | stimpy.jgi-psf.org | OFFLINE |
gpint16.nersc.gov | ren.jgi-psf.org | ren.jgi-psf.org | General purpose |
gpint17.nersc.gov | thallium.jgi-psf.org | thallium.jgi-psf.org | General purpose |
gpint18.nersc.gov | indium.jgi-psf.org | indium.jgi-psf.org | General purpose |
gpint19.nersc.gov | gallium.jgi-psf.org | gallium.jgi-psf.org | General purpose |
gpint20.nersc.gov | cadmium.jgi-psf.org | cadmium.jgi-psf.org | General purpose |
gpint21.nersc.gov | itchy.jgi-psf.org | itchy.jgi-psf.org | General purpose |
gpint22.nersc.gov | wesley.jgi-psf.org | wesley.jgi-psf.org | General purpose |
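As a sketch of typical use (gpint07, the username `jdoe`, and the script name are placeholders; substitute the node assigned to your group in the table above):

```bash
# Connect directly to your group's analysis node (placeholder node and user).
ssh jdoe@gpint07.nersc.gov

# Run a long pre-processing step inside a screen session so it survives a
# dropped connection (screen is a standard tool; its availability is assumed).
screen -S preprocess
./run_preprocessing.sh   # hypothetical pipeline script
```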
Other Special Purpose Nodes
Nodes that perform specialized tasks, such as running a web server and/or databases, are also part of Genepool, although they may not have all the features of the login or analysis nodes (such as NGF file systems or login access). These nodes are also currently assigned on a per-group basis, and their use is restricted to the groups to which they have been assigned.
Node Name | Legacy JGI Name | Other Names | Group |
---|---|---|---|
gpweb01.nersc.gov | vista.jgi-psf.org | vista.jgi-psf.org, genome.lbl.gov, pga.lbl.gov, www-pga.lbl.gov, gsd.lbl.gov, www.gsd.lbl.gov, www-gsd.lbl.gov | Comparative Genomics (Vista) |
gpweb02.nersc.gov | hazelton.jgi-psf.org | hazelton.jgi-psf.org, atgc.lbl.gov, chr16.lbl.gov, enhancer.lbl.gov, enhancer-test.lbl.gov, genome-test.lbl.gov, hazelton.lbl.gov, pipeline-test.lbl.gov, regprecise.lbl.gov, regpredict.lbl.gov, rviewer.lbl.gov | Comparative Genomics (Vista) |
gpweb03.nersc.gov | helix.jgi-psf.org | pyrotagger.jgi-psf.org | General web server |
gpweb04.nersc.gov | img-edge1.jgi-psf.org | | IMG web server |
gpweb05.nersc.gov | img-edge2.jgi-psf.org | | IMG web server |
gpweb06.nersc.gov | img-edge3.jgi-psf.org | | IMG web server |
gpweb07.nersc.gov | img-edge4.jgi-psf.org | | IMG web server |
gpweb08.nersc.gov | athena.jgi-psf.org | geneprimp.jgi-psf.org, coal.jgi-psf.org, clams.jgi-psf.org, gold.jgi-psf.org, gold-dev.jgi-psf.org | IMG/GBP web server |
gpweb09.nersc.gov | galaxy.jgi-psf.org | | Galaxy web server |
gpweb10.nersc.gov | galaxy-dev.jgi-psf.org | | Galaxy Development web server |
gpdb01.nersc.gov | lemur.jgi-psf.org | lemur.jgi-psf.org | Comparative Genomics (Vista) |
gpdb02.nersc.gov | RESERVED | RESERVED | RESERVED |
gpdb03.nersc.gov | RESERVED | RESERVED | RESERVED |
gpdb04.nersc.gov | RESERVED | RESERVED | RESERVED |
gpdb05.nersc.gov | polonium.jgi-psf.org | polonium.jgi-psf.org | Plant |
Interconnect
The majority of the compute nodes are connected via 1 Gb/s (Gigabit) Ethernet switches. A few nodes (details forthcoming) are connected via 10 Gb/s Ethernet.
File Systems
Genepool will mount a number of file systems, listed below. See the file systems page for more details; a brief shell sketch showing how these locations might be used follows the list.
- Global homes
- /usr/common
- JGI 2.7PB GPFS file system "projectb"
- $SCRATCH
- /house
- /jgi/tools
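A minimal shell sketch of working with these locations, assuming $SCRATCH is set by the login environment and that the projectb GPFS file system is mounted at /global/projectb (that mount point is an assumption, not stated above):

```bash
# Show where your scratch space lives ($SCRATCH is assumed to be set by the
# system environment).
echo "$SCRATCH"

# Stage job input and output in scratch rather than in your home directory.
mkdir -p "$SCRATCH/my_run" && cd "$SCRATCH/my_run"

# Check available space; /house is listed above, while /global/projectb is an
# assumed mount point for the "projectb" GPFS file system.
df -h /house /global/projectb
```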
Batch System
Genepool/Phoebe will use a fair-share batch scheduler called UGE (Univa Grid Engine). See our documentation on submitting jobs and on queues and policies for more details.
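As a hedged sketch, UGE accepts SGE-style `#$` directives and jobs are submitted with qsub; the resource names, limits, and module name below are illustrative assumptions rather than Genepool policy, so consult the queues and policies documentation for the actual values.

```bash
#!/bin/bash
# Example UGE job script (a sketch only; resource names and values are
# illustrative assumptions, not Genepool policy).
#$ -N example_job          # job name (arbitrary example)
#$ -l h_rt=02:00:00        # wall-clock limit (example value)
#$ -pe pe_slots 8          # parallel environment and slot count (name assumed)
#$ -cwd                    # run the job from the submission directory

module load blast          # hypothetical module name
blastn -query input.fa -db nt -out results.out   # example command
```

Submit the script with `qsub example_job.sh` and monitor it with `qstat`.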