
High-Performance Computing Center

EMSL provides a combination of production computing hardware and software resources to support the scientific research activities of EMSL user projects, including EMSL Science Theme projects and Computationally Intensive Research (formerly known as Computational Grand Challenge) projects.

The flagship of EMSL's hardware collection is the Hewlett-Packard (HP) Linux-based supercomputer, Chinook. Chinook is connected to EMSL's data storage system, NWfs, as well as to the Graphics and Visualization Laboratory via a high-speed network, allowing all the systems to work together on large-scale scientific applications.

In addition to these resources, EMSL offers SGI, Sun, and Linux workstations, along with common peripherals.

Many of these resources are available for general use after users submit a proposal. Other computer resources are available only for software development or testing.

About Chinook

The High Performance Computing System-3 (Chinook) is a balanced supercomputer tailored to meet the current and future operational needs of EMSL users. The first phase of the HP supercomputer has been delivered and is undergoing acceptance testing; the second phase is to be delivered in the July–September 2008 time frame. The numbers below describe Phase 1, with the corresponding Phase 2 figures in parentheses. Chinook has a peak performance of 12.48 (160) teraflops. It currently has 9.6 (36.9) terabytes of RAM, 219 (840) terabytes of local disk, and 250 terabytes in a global shared file system. The system is connected via multiple 10-Gigabit Ethernet connections, allowing EMSL users to perform remote visualization and transfer data to remote storage.
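As a rough sanity check, the Phase 1 peak figure follows from the node configuration listed in the table below, assuming the usual two floating-point operations per clock cycle for each Santa Rosa Opteron core:

```latex
\underbrace{600}_{\text{nodes}} \times \underbrace{2 \times 2}_{\text{cores/node}} \times \underbrace{2.6\,\text{GHz}}_{\text{clock}} \times \underbrace{2}_{\text{flops/cycle}} = 12.48\ \text{teraflops}
```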

See the Chinook Overview table below for more information about Chinook, or consult the Chinook Details page for system information, compilers, parallel coding, how to submit jobs, and whom to contact. A Chinook Quick Reference Card is also available [pdf, 143kb].

Chinook Overview
System Name Chinook
Purpose Production
Platform HP/Linux
Nodes 600(2310)
Node Configuration
  • Compute nodes have 16 GB of RAM and 365 GB of local disk
  • 38 Lustre server nodes (36 OSS, 2 MDS)
  • 2 administrative nodes
  • 5 login nodes
Processors Dual Socket Dual Core Opteron 2.6GHz Santa Rosa chips (Dual Socket Quad Core Opteron 2.2GHz Barcelona chips)
Operating System Red Hat Linux
Cluster Management SLURM
File System NFS (/msrc, /home), Lustre* (/dtemp) and local (/scratch)
Compilers Intel, Pathscale and Gnu
Batch Scheduler MOAB

* A cross-mounted parallel file system.
** Note that software and shared libraries built on different computer systems are not necessarily compatible; some programs will have problems running on a system other than the one where they were compiled.
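Since actual submission instructions live on the Chinook Details page, the following is only a minimal sketch of what a batch script for the MOAB/SLURM stack listed above might look like. The job name, node count, walltime, and application binary are hypothetical placeholders, not Chinook defaults:

```shell
#!/bin/bash
# Hypothetical Chinook batch script -- submitted with `msub script.sh`
# and monitored with `showq`. All directives here are illustrative.
#MSUB -N example_job           # job name (placeholder)
#MSUB -l nodes=4:ppn=4         # 4 Phase 1 nodes, 4 cores per node
#MSUB -l walltime=02:00:00     # 2-hour wall-clock limit
#MSUB -o example_job.out       # standard output file

# Write large intermediate files to the Lustre scratch (/dtemp),
# not to the NFS-mounted /home (see the file system row above).
mpirun -np 16 ./my_app input.dat   # my_app is a placeholder binary
```

`msub` directives largely follow PBS conventions, but the actual queues, resource limits, and MPI launch command on Chinook may differ; check the Chinook Details page before submitting.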

About NWfs

During the course of their research, EMSL users and scientists at Pacific Northwest National Laboratory (PNNL) generate massive amounts of experimental and simulation data. To preserve and protect this valuable and often non-reproducible information, the NWfs system was established.

The NWfs archive takes a unique approach to disk storage, clustering many low-cost commodity disks to provide fault-tolerant, high-performance storage that appears as a single archive. The archive uses software developed by MSCF staff to manage the large pools of clustered disks. Currently, the archive has 750 terabytes of storage and the ability to grow to over a petabyte.

NWfs is made freely available to EMSL users.

Learn more about NWfs Policies, NWfs Access, and NWfs Status.

Computing Capability Steward (High-Performance Computing Center): Kevin Regimbal, 509-371-6075