
Systems

Systems Overview Table

Summary table of NERSC systems.

Hopper Cray XE6

Hopper is NERSC's first petaflop system, a Cray XE6 with 153,216 compute cores, 217 TB of memory, and 2 PB of disk. Hopper placed fifth on the November 2010 TOP500 list of supercomputers.
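As a rough check on the petaflop figure, the peak follows from the core count above if one assumes the XE6's Opteron cores run at 2.1 GHz and issue 4 double-precision floating-point operations per cycle (both figures are assumptions, not stated on this page):

\[
153{,}216\ \text{cores} \times 2.1\ \text{GHz} \times 4\ \frac{\text{flops}}{\text{cycle}} \approx 1.29\ \text{Pflop/s}
\]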

Carver IBM iDataPlex

Carver, named in honor of American scientist George Washington Carver, is an IBM iDataPlex system with 1,202 compute nodes. Each node contains two Intel Nehalem quad-core processors (9,984 processor cores total). The system's theoretical peak performance is 106.5 Tflop/s.
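As a rough consistency check, the quoted peak matches the core count above if one assumes a 2.67 GHz clock and 4 double-precision floating-point operations per core per cycle (both figures are assumptions, not stated on this page):

\[
9{,}984\ \text{cores} \times 2.67\ \text{GHz} \times 4\ \frac{\text{flops}}{\text{cycle}} \approx 106.6\ \text{Tflop/s}
\]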

PDSF

PDSF is a networked, distributed computing environment used to meet the detector simulation and data analysis requirements of large-scale investigations in high-energy physics, astrophysics, and nuclear science.

Genepool

The Genepool system is a cluster dedicated to the computing needs of the Joint Genome Institute (JGI). Phoebe is a smaller test system for Genepool.

Euclid Sun Sunfire Server

Euclid, named in honor of the ancient Greek mathematician, is a Sun Microsystems Sunfire x4640 SMP. Its single node contains eight six-core 2.6 GHz Opteron processors, with all 48 cores sharing the same 512 GB of memory. The system's theoretical peak performance is 499.2 Gflop/s.
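The quoted peak follows directly from the figures above, assuming 4 double-precision floating-point operations per core per cycle (an assumption not stated on this page):

\[
48\ \text{cores} \times 2.6\ \text{GHz} \times 4\ \frac{\text{flops}}{\text{cycle}} = 499.2\ \text{Gflop/s}
\]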

HPSS Data Archive

The High Performance Storage System (HPSS) is a modern, flexible, performance-oriented mass storage system. It has been used at NERSC for archival storage since 1998.

Data Transfer Nodes

The data transfer nodes are NERSC servers dedicated to performing transfers between NERSC data storage resources, such as HPSS and the NERSC Global Filesystem (NGF), and storage resources at other sites, including the Leadership Computing Facility at ORNL (Oak Ridge National Laboratory). These nodes are managed (and monitored for performance) as part of a collaborative effort between ESnet, NERSC, and ORNL to enable high-performance data movement over the high-bandwidth 10 Gb/s ESnet wide-area network (WAN).

Dirac: GPU Computing

Dirac is a testbed GPU cluster funded in collaboration with the Computational Research Division at Berkeley Lab through the DOE/ASCR Computer Science Research Testbeds program (DOE Contract Number DE-AC02-05CH11231). The cluster consists of 48 nodes with attached NVIDIA Tesla Graphics Processing Units (GPUs).
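To illustrate the kind of workload such a GPU testbed runs, the following is a minimal CUDA sketch (illustrative only, not Dirac-specific or NERSC-provided code) that offloads a vector addition from a host node to its attached GPU:

// Minimal, self-contained CUDA example (illustrative only; not NERSC-provided code).
#include <cstdio>
#include <cstdlib>
#include <cuda_runtime.h>

// Each GPU thread adds one pair of elements.
__global__ void vector_add(const float *a, const float *b, float *c, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) c[i] = a[i] + b[i];
}

int main() {
    const int n = 1 << 20;                 // problem size (arbitrary)
    const size_t bytes = n * sizeof(float);

    // Host-side arrays.
    float *h_a = (float *)malloc(bytes);
    float *h_b = (float *)malloc(bytes);
    float *h_c = (float *)malloc(bytes);
    for (int i = 0; i < n; ++i) { h_a[i] = 1.0f; h_b[i] = 2.0f; }

    // Device-side arrays on the attached GPU.
    float *d_a, *d_b, *d_c;
    cudaMalloc((void **)&d_a, bytes);
    cudaMalloc((void **)&d_b, bytes);
    cudaMalloc((void **)&d_c, bytes);
    cudaMemcpy(d_a, h_a, bytes, cudaMemcpyHostToDevice);
    cudaMemcpy(d_b, h_b, bytes, cudaMemcpyHostToDevice);

    // Launch one GPU thread per element.
    const int threads = 256;
    const int blocks = (n + threads - 1) / threads;
    vector_add<<<blocks, threads>>>(d_a, d_b, d_c, n);

    // Copy the result back to the host (this call synchronizes with the kernel).
    cudaMemcpy(h_c, d_c, bytes, cudaMemcpyDeviceToHost);
    printf("c[0] = %.1f (expected 3.0)\n", h_c[0]);

    cudaFree(d_a); cudaFree(d_b); cudaFree(d_c);
    free(h_a); free(h_b); free(h_c);
    return 0;
}

The host allocates device memory, copies the inputs across the PCIe link to the attached GPU, launches the kernel with one thread per array element, and copies the result back.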

History of Systems

Established in 1974 at Lawrence Livermore National Laboratory, NERSC was moved to Berkeley Lab in 1996 with the goal of increasing interactions with the UC Berkeley campus.

NERSC-8 Procurement

Update: Draft Technical Requirements were released to the vendor community on December 17, 2012.

NERSC-8 Procurement Overview: The U.S. Department of Energy (DOE) Office of Science (SC) requires a high-performance production computing system in the 2015/2016 timeframe to support the rapidly increasing computational demands of the entire spectrum of DOE SC computational research. The system needs to provide a significant upgrade in computational capabilities, with a target increase between 10-30…

Trinity / NERSC-8 RFP

A draft of the Technical Requirements for the Trinity and NERSC-8 platforms has been released to the vendor community for comment. The Draft Technical Requirements will be available for comment until 4 PM Mountain Standard Time on January 17, 2013. A full RFP package is expected to be released in Q2 2013.

Related documents: LANL RFI Website, RFI Cover Letter, Trinity-NERSC-8 Draft Technical Requirements.

Interested Offerors must submit all communication (questions, comments, etc.) about the Trinity / NERSC-8 RFP to…