The NERSC Cray XT4 system, named Franklin, has 9,660 dual core compute nodes,
i.e., a total of 19,320 processor cores available for scientific applications.
Each compute node has a 2.6 GHz dual-core AMD Opteron processor and 4 GBytes
of memory.
The full system consists of 102 cabinets with 39 TBytes of aggregate memory.
The theoretical peak performance of Franklin is about 101.5 TFlop/sec.
The system is named in honor of Benjamin Franklin.
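As a rough sanity check, the quoted peak can be approximated from the core count and clock rate above. This is only a sketch: the figure of 2 double-precision flops per cycle per core is an assumption for this Opteron generation, and the small gap to the quoted 101.5 TFlop/s may come from rounding or additional service processors.

```python
# Back-of-the-envelope peak-performance estimate for Franklin.
cores = 9660 * 2              # dual-core compute nodes, from the text
clock_ghz = 2.6               # per-core clock rate, from the text
flops_per_cycle = 2           # ASSUMED double-precision flops/cycle/core
peak_tflops = cores * clock_ghz * flops_per_cycle / 1000
print(f"{peak_tflops:.1f} TFlop/s")  # ~100.5, close to the quoted 101.5
```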
The NERSC IBM p575 POWER5 system, named Bassi, has
a total of 976 POWER5+ CPUs with a peak performance of 7.4 TFlop/s.
Of these, 888 processors are available to run scientific computing
applications, configured as
111 8-CPU nodes, each node having 32 GB of memory.
The machine is named in honor of
Laura Bassi, a noted Newtonian physicist of the eighteenth century.
Jacquard is an Opteron cluster with 356 dual-processor nodes
available to run scientific applications. The nodes have 6 GB of memory each,
and are
connected by a high-speed InfiniBand network.
The cluster is named in honor of inventor Joseph Marie Jacquard, whose loom was
the first machine to use punch cards to control a sequence of operations.
DaVinci is an SGI Altix 350 server with 32 Itanium-2 processors and 192 GB of
shared memory. DaVinci's main purpose is to provide visualization and data
analysis capabilities to the NERSC user community.
NERSC's research in data-intensive computing is grounded in our operation of
a 390-processor Linux cluster, the Parallel Distributed
Systems Facility (PDSF). PDSF is used by
large-scale high energy and nuclear physics investigations for detector
simulation, data analysis, and software development.
The NERSC Global Filesystem (NGF) is a large, shared filesystem that can be
accessed from any of the compute platforms. This facilitates file sharing
between platforms, as well as file sharing among NERSC users working on a
common project. NGF is based on IBM's General Parallel File System (GPFS).
Archival mass storage is provided by the
High Performance Storage System.
This system has 100 TB of cache disk, 8 STK robots, and 44,000 tape slots for a
maximum capacity of about 44 PB.
HPSS archives 2.6 petabytes (PB) of data in 53 million files and sustains an
average transfer rate of more than 100 MB/s, 24 hours per day, with peaks of up
to 450 MB/s.
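At that sustained rate, the implied daily transfer volume and the average archived file size work out as follows (a rough sketch; decimal units are assumed throughout):

```python
# Derived figures for the HPSS archive (decimal units assumed).
sustained_mb_s = 100                      # average rate, from the text
daily_tb = sustained_mb_s * 86400 / 1e6   # 86400 seconds/day; MB -> TB
archive_pb, n_files = 2.6, 53e6           # archive size and file count, from the text
avg_file_mb = archive_pb * 1e9 / n_files  # PB -> MB per file
print(f"{daily_tb:.2f} TB/day, ~{avg_file_mb:.0f} MB average file size")
```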
NERSC offers end-to-end data transfer optimization and other
network services.
Access to NERSC from anywhere in the U.S. or the world is
available through ESnet.
Additional capabilities are provided by special-purpose
servers available to projects computing at NERSC.