
Configuration

Overview

Carver, a liquid-cooled IBM iDataPlex system, has 1,202 compute nodes (9,984 processor cores), giving a theoretical peak performance of 106.5 teraflops.

Note that the above node count includes hardware dedicated to various strategic projects and experimental testbeds (e.g., Hadoop), so not all 1,202 nodes are available to all users at all times.

All nodes are interconnected by 4X QDR InfiniBand technology, providing 32 Gb/s of point-to-point bandwidth for high-performance message passing and I/O.

Compute Nodes

1,120 nodes each have two quad-core Intel Xeon X5550 ("Nehalem") 2.67 GHz processors (eight cores/node); 80 nodes each have two six-core Intel Xeon X5650 ("Westmere") 2.67 GHz processors (12 cores/node).  960 of the Nehalem nodes ("smallmem") have 24 GB of DDR3 1333 MHz memory each; the remaining 160 Nehalem nodes ("bigmem") have 48 GB of DDR3 1066 MHz memory each.  The Westmere nodes have 48 GB of DDR3 1333 MHz memory per node; these nodes are dedicated to servicing Carver's serial workload.

In addition to the above compute nodes, there are two nodes that each have four eight-core Intel Xeon X7550 ("Nehalem-EX") 2.00 GHz processors (32 cores total) and 1 TB of DDR3 1066 MHz memory.
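Since the node types differ in core count and memory, it can be useful to confirm which kind of node a job has landed on.  A minimal sketch using standard Linux tools (nothing Carver-specific is assumed):

    # Show the CPU model and count the processors visible to the OS.
    # Note: if hyper-threading is enabled, the count will be twice the
    # physical core count.
    grep -m1 'model name' /proc/cpuinfo
    grep -c '^processor' /proc/cpuinfo

    # Show total memory in GB to distinguish "smallmem" from "bigmem" nodes.
    free -g | awk '/^Mem:/ {print $2 " GB total"}'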

Note that at any given time, some of these compute nodes may be dedicated to specific projects; rarely are all 1,202 compute nodes available via the standard batch queues.  In particular, 320 nodes are targeted to support the NISE program, 160 nodes (including the Hadoop testbed) are dedicated to ongoing support of computational cloud technology, and 80 nodes are dedicated to running serial jobs.  On average, about 800 eight-core Nehalem nodes should be available for running parallel applications.  Note that the largest allowed parallel job is 64 nodes (512 cores).
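For reference, a job of the maximum allowed size would request all 64 nodes (512 cores).  A minimal batch-script sketch, assuming PBS/Torque syntax; the queue name, job name, and MPI launcher invocation are illustrative assumptions, not confirmed Carver settings:

    #!/bin/bash
    #PBS -q regular                  # queue name is an assumption
    #PBS -l nodes=64:ppn=8           # 64 eight-core Nehalem nodes = 512 cores
    #PBS -l walltime=01:00:00
    #PBS -N maxjob

    cd $PBS_O_WORKDIR
    mpirun -np 512 ./my_parallel_app # launcher and binary are illustrative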

All compute nodes are "diskless".  This means (among other things) that the root file system (/bin, /tmp, etc.) is always resident in RAM.  On most nodes, the root file system and the memory-resident Linux kernel use about 4 GB of physical memory.  For a discussion of how to use memory effectively on Carver, please see Carver Memory Considerations.
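Because the kernel and root file system consume roughly 4 GB of RAM, the memory actually available to applications is less than the nominal per-node figures.  A quick way to check from a compute node, using standard Linux tools:

    # On a 24 GB "smallmem" node, expect roughly 20 GB to be usable
    # by applications once the kernel and root file system are counted.
    free -g
    grep MemTotal /proc/meminfo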

The following table summarizes the characteristics of Carver's compute nodes:

Type of Node                  Number   Cores/Node   Mem/Node           Mem/Core
Nehalem 2.67 GHz "smallmem"      960            8   24 GB (1333 MHz)       3 GB
Nehalem 2.67 GHz "bigmem"        160            8   48 GB (1066 MHz)       6 GB
Westmere 2.67 GHz                 80           12   48 GB (1333 MHz)       4 GB
Nehalem-EX 2.00 GHz                2           32    1 TB (1066 MHz)      32 GB

Login Nodes

The four login nodes are IBM System x3650 M2 servers, each with two quad-core Intel Xeon X5550 2.67 GHz processors (eight cores per node, 32 cores total).  Each node has 48 GB of DDR3 1066 MHz memory.

Appropriate Use of Login Nodes

Login nodes should typically be used for the following purposes:

  • Code development (editing, compiling/linking, and "unit" debugging)
  • Submitting and monitoring batch jobs
  • File management
  • Limited interactive post-processing of batch data
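A typical login-node session combining these activities might look like the following sketch; the compiler wrapper, file names, and job-script name are illustrative assumptions:

    # Build: compile and link an MPI code with the MPI compiler wrapper.
    mpicc -O2 -o myapp myapp.c

    # Submit the job to the batch system and check its status.
    qsub myjob.pbs
    qstat -u $USER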

Process Limits

Carver's login nodes are shared among hundreds of active NERSC users.  The following process limits are in place to maintain equitable access and good interactivity:

sh/bash Name (flag)        csh/tcsh Name   Soft Limit   Hard Limit
cpu time (-t)              cputime         60 minutes   60 minutes
data seg size (-d)         datasize        1 GB         2 GB
stack size (-s)            stacksize       128 MB       256 MB
max memory size (-m)       memoryuse       1 GB         2 GB
virtual memory (-v)        vmemoryuse      2 GB         2 GB
max user processes (-u)    maxproc         256          256
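You can inspect the limits in effect for your current shell session; for example, in sh/bash (csh/tcsh users would use the analogous limit built-in):

    # Display all current soft limits (add -H to see hard limits).
    ulimit -a

    # Example: raise the soft stack size to the 256 MB hard limit
    # (the value is given in kilobytes).
    ulimit -s 262144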

Interconnect

All Carver nodes are interconnected by 4X QDR InfiniBand technology, providing 32 Gb/s of point-to-point bandwidth for high-performance message passing and I/O (four lanes at 10 Gb/s each, less the 20% overhead of 8b/10b encoding).  The interconnect consists of fibre-optic cables arranged as local fat-trees within a global two-dimensional mesh.

Network and Service Nodes

The six service nodes are IBM System x3650 M2 servers, each with two quad-core Intel Xeon X5550 2.67 GHz processors (eight cores per node, 48 cores total).  Each node has 48 GB of DDR3 1066 MHz memory.

File Systems

Carver has three kinds of file systems available to users: global homes, global scratch, and global project, all provided by the NERSC Global Filesystem.  See NERSC File Systems for details.
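As a sketch, these file systems are typically reached through NERSC-provided environment variables and paths; the names below ($HOME, $GSCRATCH, and the /project tree) follow NERSC conventions but should be treated as assumptions and checked against the NERSC File Systems page:

    # Per-user home directory (global homes).
    cd $HOME

    # Global scratch space for large temporary files ($GSCRATCH is an
    # assumed variable name).
    cd $GSCRATCH

    # Shared project space (path pattern is an assumption).
    ls /project/projectdirs/<your_project_name>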