
Edison


NERSC's newest supercomputer, named Edison after U.S. inventor and businessman Thomas Alva Edison, will have a peak performance of more than 2 petaflops (PF, or 10^15 floating point operations per second) when fully installed in 2013. The integrated storage system will have more than 6 petabytes (PB) of storage with an I/O bandwidth of 140 gigabytes (GB) per second. The architecture is known as the Cray XC30 (internal name "Cascade"), and the NERSC acquisition project is known as "NERSC 7."
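
As a rough guide to where a figure like this comes from, theoretical peak performance is the product of core count, clock rate, and floating-point operations per core per cycle. The worked example below applies this to the Phase I configuration described later on this page, assuming 8 double-precision flops per cycle per core (the AVX rate for a "Sandy Bridge" core); both the assumption and the resulting ~221 Tflop/s value are illustrative estimates for Phase I only, not official figures (the 2+ PF peak refers to the fully installed system).

    % Illustrative peak estimate for the Phase I configuration,
    % assuming 8 double-precision flops/cycle/core (AVX on Sandy Bridge):
    R_{\mathrm{peak}} \approx N_{\mathrm{cores}} \times f_{\mathrm{clock}} \times \mathrm{flops/cycle}
                      = 10{,}624 \times 2.6\,\mathrm{GHz} \times 8
                      \approx 221\,\mathrm{Tflop/s}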

Edison will be installed in two phases. 

Phase I

Installation: 4Q 2012
Early User Access: Targeted for February 2013

System Overview

  • Cray Cascade supercomputer
  • 664 compute nodes with 64 GB memory per node
  • Two 8-core Intel "Sandy Bridge" processors per node (16 cores per node)
  • 10,624 total physical compute cores
  • Cray Aries high-speed interconnect (0.25 μs to 3.7 μs MPI latency, ~8 GB/sec MPI bandwidth; see the measurement sketch after this list)
  • Scratch storage capacity: 1.62 PB
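
The latency and bandwidth figures quoted for the Aries interconnect above are the kind of numbers typically reported by an MPI "ping-pong" microbenchmark: one rank bounces a message off another and times the round trip. The sketch below is a minimal, generic version of such a measurement, not the benchmark NERSC or Cray actually used; on Edison it would be compiled with the Cray compiler wrapper (cc) and run with one rank on each of two nodes.

    /* ping_pong.c - minimal MPI ping-pong sketch (illustrative only).
     * Rank 0 sends a buffer to rank 1 and waits for it to come back;
     * half the average round-trip time approximates one-way latency,
     * and message size divided by one-way time approximates bandwidth.
     */
    #include <mpi.h>
    #include <stdio.h>
    #include <stdlib.h>

    int main(int argc, char **argv)
    {
        const int iters  = 1000;
        const int nbytes = 1 << 20;              /* 1 MiB messages */
        char *buf = malloc(nbytes);
        int rank;

        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);

        MPI_Barrier(MPI_COMM_WORLD);
        double t0 = MPI_Wtime();
        for (int i = 0; i < iters; i++) {
            if (rank == 0) {
                MPI_Send(buf, nbytes, MPI_CHAR, 1, 0, MPI_COMM_WORLD);
                MPI_Recv(buf, nbytes, MPI_CHAR, 1, 0, MPI_COMM_WORLD,
                         MPI_STATUS_IGNORE);
            } else if (rank == 1) {
                MPI_Recv(buf, nbytes, MPI_CHAR, 0, 0, MPI_COMM_WORLD,
                         MPI_STATUS_IGNORE);
                MPI_Send(buf, nbytes, MPI_CHAR, 0, 0, MPI_COMM_WORLD);
            }
        }
        double one_way = (MPI_Wtime() - t0) / (2.0 * iters);

        if (rank == 0)
            printf("one-way time %.3f us, bandwidth %.2f GB/s\n",
                   one_way * 1e6, nbytes / one_way / 1e9);

        free(buf);
        MPI_Finalize();
        return 0;
    }

Latency is normally quoted for very small messages and bandwidth for large ones, so a real benchmark sweeps the message size rather than fixing it at 1 MiB.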

System Details

  • Compute processor: 8-core Intel "Sandy Bridge" at 2.6 GHz
  • Compute node: dual-socket Sandy Bridge with 64 GB DDR3 1600 MHz memory (8 GB DIMMs)
  • Compute blade: 4 dual-socket nodes
  • Number of compute nodes: 664
  • "MOM" nodes (execute job scripts): 8 repurposed compute nodes
  • High speed interconnect: Cray Aries with Dragonfly topology
  • Scratch storage system: Cray Sonexion 1300 Lustre appliance
  • Scratch storage maximum bandwidth: 35 GB/sec
  • Login nodes: quad-core, quad-socket (16 total cores) 2.0 GHz Intel "Sandy Bridge" processors with 512 GB memory.
  • Number of login nodes: 6
  • Shared root server nodes: 8
  • Lustre router nodes: 7
  • DVS server nodes (for interface with NERSC Global File System): 16
  • External gateway (network nodes): 4 nodes with 2 dual-port 10 GigE interfaces per node

 

Phase 2

Installation: 2013

System Overview

    • Cray Cascade supercomputer
    • Sustained application performance on NERSC SSP codes: 236 Tflop/s (vs. 144 Tflop/s for Hopper)
    • Aggregate memory: 333 TB
    • 5,200 compute nodes with 64 GB memory per node
    • Cray Aries high-speed interconnect (0.25 μs to 3.7 μs MPI latency, ~8 GB/sec MPI bandwidth)
    • Scratch storage capacity: 6.4 PB

System Details

    • Intel multicore processors
    • Compute blade: 4 dual-socket nodes
    • Number of compute nodes: 5,200
    • "MOM" nodes (execute job scripts): 8 repurposed compute nodes
    • High speed interconnect: Cray Aries with Dragonfly topology
    • Scratch storage system: Cray Sonexion 1600 Lustre appliance
    • Scratch storage maximum aggregate bandwidth: 140 GB/sec
    • Login nodes: quad-core, quad-socket (16 total cores) 2.0 GHz Intel "Sandy Bridge" processors with 512 GB memory.
    • Number of login nodes: 12
    • Shared root server nodes: 8
    • Lustre router nodes: 7
    • DVS server nodes (for interface with NERSC Global File System): 16
    • External gateway (network nodes): 4 nodes with two dual-port 10 GigE interfaces per node

 

Getting Started on Edison

How to get running on Edison for first-time users. Read More »

Programming

Quick Start for Hopper Users

You should be able to compile codes the same way you do on Hopper unless your code in some way relies on the PGI compilers. The default programming environment on Edison uses the Intel compiler suite. Cray and GNU compilers are also available; PGI and Pathscale compilers are not installed.

Overview

Cray provides a convenient set of wrapper commands that should be used in almost all cases for compiling and linking parallel programs. Invoking the wrappers will… Read More »
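
To make the wrapper idea concrete, the fragment below is a trivial MPI program together with a hypothetical build command. On Cray systems the wrappers are cc (C), CC (C++), and ftn (Fortran); they add the MPI include paths and libraries for whichever programming environment (Intel, Cray, or GNU on Edison) is currently loaded, so no explicit MPI flags are needed. This is a generic sketch under those assumptions, not Edison-specific documentation.

    /* hello.c - trivial MPI program used to illustrate the Cray compiler
     * wrappers.  Note there is no explicit MPI include path or library:
     * the wrapper supplies them for the loaded programming environment.
     *
     * Hypothetical build with the C wrapper (Intel compiler by default):
     *     cc -o hello hello.c
     */
    #include <mpi.h>
    #include <stdio.h>

    int main(int argc, char **argv)
    {
        int rank, size;
        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);
        MPI_Comm_size(MPI_COMM_WORLD, &size);
        printf("Hello from rank %d of %d\n", rank, size);
        MPI_Finalize();
        return 0;
    }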

File Storage and I/O

Disk Quota Change Request Form

Edison File Systems

The Edison system has 5 different file systems mounted which provide different levels of disk storage, I/O performance and file permanence. The table below describes the various Edison file systems:

    File System       Environment Variable                   Description
    Home              $HOME                                  Global home file system shared with other NERSC
                                                             systems. All NERSC machines mount the same…
    Local Scratch     $SCRATCH
    Global Scratch    $GSCRATCH
    Project           None. Must use /project/projectdirs/

Read More »
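
Because the scratch and home locations are exposed through environment variables rather than fixed paths, portable codes and scripts normally look them up at run time instead of hard-coding a directory. The fragment below is a hypothetical illustration of writing an output file under $SCRATCH from C; the variable name comes from the table above, and everything else is a generic sketch.

    /* scratch_write.c - hypothetical example of writing to local scratch
     * via the $SCRATCH environment variable instead of a hard-coded path.
     */
    #include <stdio.h>
    #include <stdlib.h>

    int main(void)
    {
        const char *scratch = getenv("SCRATCH");   /* per-user scratch directory */
        if (scratch == NULL) {
            fprintf(stderr, "SCRATCH is not set; is this a NERSC system?\n");
            return 1;
        }

        char path[4096];
        snprintf(path, sizeof(path), "%s/output.dat", scratch);

        FILE *f = fopen(path, "w");
        if (f == NULL) {
            perror("fopen");
            return 1;
        }
        fprintf(f, "large temporary output belongs on scratch, not $HOME\n");
        fclose(f);
        return 0;
    }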

Software and Tools

The table below shows the software installed on Edison that is managed by modules. Read More »

Known issues

Intel libraries are missing

PETSc and some other libraries, such as Trilinos, are not yet available with the Intel programming environment on Edison. (They are available for the Cray and GNU programming environments.) Cray is planning to make these available, although no timeframe has been given yet.

Error message: undefined reference to __pgas_register_dv

When using the Cray compilers you may see the error message: undefined reference to '__pgas_register_dv'. This issue is expected to be fixed in future… Read More »

Cray XC30 Press Release

Cray has announced the launch of the company's next-generation high-end supercomputing system, the Cray XC30 supercomputer. Read More »