
Systems Overview Table

NERSC Computational Systems

 

Hopper (Cray XE6)
  CPU: Opteron at 2.1 GHz
  Computational pool: 6,384 nodes; SMP size 24; 153,216 total cores
  Performance: 8.4 Gflops/sec per core; 1,287.0 Tflops/sec peak
  Memory: 211.5 TB aggregate; 1.41 GB average per core
  Node interconnect: Gemini
  Scratch disk: 2.2 PB (local) + 1.1 PB (global)
  Avg. power: 2,200 kW

Carver (IBM iDataPlex)
  CPU: Nehalem, Westmere, and Nehalem-EX at 2.67 or 2.00 GHz
  Computational pool: 1,202 nodes; SMP size 8, 12, or 32; 9,984 total cores
  Performance: 10.68 or 8.00 Gflops/sec per core; 106.5 Tflops/sec peak
  Memory: 35.75 TB aggregate; 3.67 GB average per core
  Node interconnect: QDR InfiniBand
  Scratch disk: 1.1 PB (global)
  Avg. power: 266 kW

Dirac* (NVIDIA GPUs on IBM iDataPlex)
  Processors: Tesla C2050 (Fermi) and C1060 GPUs with Nehalem host CPUs; 1.15 or 1.30 GHz (GPU), 2.4 GHz (CPU)
  Computational pool: 56 GPU nodes on 50 CPU nodes; SMP size 448 or 240 (GPU), 8 (CPU); 23,424 GPU cores and 400 CPU cores
  Performance: 1.15 or 1.30 Gflops/sec per GPU core, 9.6 Gflops/sec per CPU core; 25.4 Tflops/sec (GPU) and 3.8 Tflops/sec (CPU) peak
  Memory: 176 GB (GPU) and 1,344 GB (CPU) aggregate; 7.7 MB per GPU core and 3.36 GB per CPU core average
  Node interconnect: QDR InfiniBand
  Scratch disk: 1.1 PB (global)
  Avg. power: -

PDSF** (Linux cluster)
  CPU: Opteron and Xeon at 2.0, 2.27, 2.33, or 2.67 GHz
  Computational pool: 194 nodes; SMP size 8 or 12; 1,844 total cores
  Performance: 8.0, 9.08, 9.32, or 10.68 Gflops/sec per core; 17.6 Tflops/sec peak
  Memory: 6.4 TB aggregate; 4 GB average per core
  Node interconnect: Ethernet
  Scratch disk: 34.9 TB for batch nodes and 184 GB for interactive nodes
  Avg. power: 92 kW

Genepool*** (various vendor systems)
  CPU: Nehalem and Opteron at 2.27 or 2.67 GHz
  Computational pool: 547 nodes; SMP size 8, 24, 32, or 80; 4,680 total cores
  Performance: 9.08 or 10.68 Gflops/sec per core; 42.8 Tflops/sec peak
  Memory: 33.7 TB aggregate; 7.36 GB average per core
  Node interconnect: Ethernet
  Scratch disk: 1.1 PB (global)
  Avg. power: -

* Dirac is an experimental testbed system and is not considered a NERSC production system.

** PDSF is a special-use system hosted by NERSC for the High Energy Physics and Nuclear Science community.

*** Genepool is a cluster dedicated to the DOE Joint Genome Institute's computing needs. 

 See more information at NERSC computational systems.
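The peak-performance and memory-per-core figures above follow directly from the node counts, per-core flop rates, and aggregate memory. The Python sketch below is illustrative only, not NERSC software; the 1 TB = 1024 GB conversion is an assumption that happens to reproduce the table's values. It shows the arithmetic for Hopper; for systems with mixed processor types, such as Carver and PDSF, the peak is the sum of the same calculation over each processor pool.

    def peak_tflops(total_cores, gflops_per_core):
        """Peak performance (Tflops/sec) = total cores x per-core rate (Gflops/sec) / 1000."""
        return total_cores * gflops_per_core / 1000.0

    def avg_gb_per_core(aggregate_memory_tb, total_cores):
        """Average memory per core, assuming 1 TB = 1024 GB (reproduces the table's values)."""
        return aggregate_memory_tb * 1024.0 / total_cores

    # Hopper: 6,384 nodes x 24 cores/node = 153,216 cores
    cores = 6384 * 24
    print(peak_tflops(cores, 8.4))        # ~1287.0 Tflops/sec
    print(avg_gb_per_core(211.5, cores))  # ~1.41 GB per core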

 

NERSC File Systems

The following table shows NERSC file systems, their sizes, and their availability on the computational systems.

File System      Size    Hopper      Carver  Dirac  PDSF                         Euclid  Genepool                        Data Transfer Nodes
Global homes     246 TB  Y           Y       Y      N                            Y       Y                               Y
Global scratch   1.1 PB  Y           Y       Y      N                            Y       Y                               Y
Global project   3.9 PB  Y           Y       Y      Y                            Y       Y                               Y
Global projectb  2.6 PB  Y           Y       Y      N                            N       Y                               Y
Local scratch    -       Y (2.2 PB)  N       N      Y (34.9 TB for batch nodes)  N       Y                               N
Others           -       N           N       N      -                            N       Y (2.1 PB for /house and /ifs)  N

 See more information at NERSC file systems.
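For scripting against this availability matrix, the table can be captured in a small data structure. The Python sketch below is illustrative only and not a NERSC API; system names follow the table, with DTN standing in for the Data Transfer Nodes.

    # The table above as a mapping from file system to the systems that mount it.
    MOUNTS = {
        "global homes":    {"Hopper", "Carver", "Dirac", "Euclid", "Genepool", "DTN"},
        "global scratch":  {"Hopper", "Carver", "Dirac", "Euclid", "Genepool", "DTN"},
        "global project":  {"Hopper", "Carver", "Dirac", "PDSF", "Euclid", "Genepool", "DTN"},
        "global projectb": {"Hopper", "Carver", "Dirac", "Genepool", "DTN"},
        "local scratch":   {"Hopper", "PDSF", "Genepool"},
    }

    def is_mounted(file_system, system):
        """Return True if the table lists file_system as available on system."""
        return system in MOUNTS.get(file_system.lower(), set())

    print(is_mounted("Global scratch", "PDSF"))   # False
    print(is_mounted("Global project", "PDSF"))   # True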

 

NERSC HPSS (High Performance Storage System) Mass Storage Systems

Archive (archive.nersc.gov)
  Usage: storing user files
  Maximum capacity: 150 PB
  Disk cache: 250 TB
  Tape drives: 102 (Oracle STK 9840D: 26; Oracle STK T10KB: 50; Oracle STK T10KC: 26)
  Maximum aggregate bandwidth*: 12 GB/sec

Regent (hpss.nersc.gov)
  Usage: computer system backups
  Maximum capacity: 90 PB
  Disk cache: 50 TB
  Tape drives: 42 (Oracle STK 9840D: 8; Oracle STK T10KB: 26; Oracle STK T10KC: 8)
  Maximum aggregate bandwidth*: 4 GB/sec

* Estimated aggregate bandwidth of the disk subsystems only, not of all interfaces in the system.

See more information at NERSC data archive systems.
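For a rough sense of scale, the quoted maximum aggregate bandwidths imply best-case lower bounds on transfer times. The Python sketch below is back-of-envelope arithmetic only (it assumes 1 TB = 1024 GB); actual rates also depend on the network path, tape drive availability, and file sizes.

    def min_transfer_time_s(data_gb, bandwidth_gb_per_s):
        """Best-case seconds to move data_gb at the quoted aggregate bandwidth."""
        return data_gb / bandwidth_gb_per_s

    one_tb = 1024.0  # GB, assuming 1 TB = 1024 GB
    print(min_transfer_time_s(one_tb, 12.0))                # ~85 s for 1 TB on Archive, best case
    print(min_transfer_time_s(250 * one_tb, 12.0) / 3600)   # ~5.9 h to cycle Archive's 250 TB disk cache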