Welcome to NERSC

Welcome to the National Energy Research Scientific Computing Center (NERSC)!

About this page

This document will guide you through the basics of using NERSC's supercomputers, storage systems, and services.

What is NERSC?

NERSC provides High Performance Computing and Storage facilities and support for research sponsored by, and of interest to, the U.S. Department of Energy (DOE) Office of Science (SC). NERSC has the unique programmatic role of supporting all six Office of Science program offices: Advanced Scientific Computing Research, Basic Energy Sciences, Biological and Environmental Research, Fusion Energy Sciences, High Energy Physics, and Nuclear Physics.

Scientists who have been awarded research funding by any of the offices are eligible to apply for an allocation of NERSC time. Additional awards may be given to non-DOE funded project teams whose research is aligned with the Office of Science's mission. Allocations of time and storage are made by DOE.

NERSC is a national center, organizationally part of Lawrence Berkeley National Laboratory in Berkeley, CA. NERSC staff and facilities are primarily located at Berkeley Lab's Shyh Wang Hall on the Berkeley Lab campus.

Computing & Storage Resources

Cori

Cori is a Cray XC40 supercomputer with approximately 12,000 compute nodes.

Community File System (CFS)

The Community File System (CFS) is a global file system available on all NERSC computational systems. It allows sharing of data between users, systems, and the "outside world".

HPSS (High Performance Storage System) Archival Storage

The High Performance Storage System (HPSS) is a modern, flexible, performance-oriented mass storage system. It has been used at NERSC for archival storage since 1998. HPSS is intended for long term storage of data that is not frequently accessed.
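
On NERSC systems, HPSS is commonly accessed with the hsi and htar clients. A minimal sketch (the file and directory names here are hypothetical):

nersc$ hsi put results.tar.gz                  # archive a single file to HPSS
nersc$ htar -cvf run_output.tar run_output/    # bundle a directory into a tar archive stored in HPSS
nersc$ hsi get results.tar.gz                  # retrieve the file later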

NERSC Accounts

In order to use the NERSC facilities, you need:

  1. Access to an allocation of computational or storage resources as a member of a project
  2. A user account with an associated user login name (also called a username)

For details, see:

  • Obtaining an account
  • NERSC Allocations
  • Iris: Account and allocation management web interface
  • Password rules

With Iris you can

  • check allocation balances
  • change passwords
  • run reports
  • update contact information
  • clear login failures
  • change login shell
  • and more!

Connecting to NERSC

MFA is required for NERSC users
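
As a sketch, a typical SSH login to Cori looks like the following (elvis is a placeholder username); you will be prompted for your password together with a one-time MFA token:

laptop$ ssh elvis@cori.nersc.gov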

NERSC Users Group (NUG)

Join the NERSC Users Group: an independent organization of users of NERSC resources.

Tip

NUG maintains a Slack workspace that all users are welcome to join.

Software

NERSC and its vendors supply a rich set of HPC utilities, applications, and programming libraries.

Something missing?

If there is something missing that you would like to have on our systems, please submit a request and we will evaluate it for appropriateness, cost, effort, and benefit to the community.

Computing Environment

Info

$HOME directories are shared across all NERSC systems (except HPSS)

Compiling/building software
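
As a hedged sketch of the usual Cray workflow on Cori: programs are built with the compiler wrappers, which invoke whichever compiler suite the loaded PrgEnv module provides and link MPI and other Cray libraries automatically (the source and output file names are placeholders):

nersc$ module list                 # check which PrgEnv-* module is loaded
nersc$ cc  -o hello_c   hello.c    # C
nersc$ CC  -o hello_cxx hello.cpp  # C++
nersc$ ftn -o hello_f   hello.f90  # Fortran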

Running Jobs

Typical usage of the system involves submitting scripts (also referred to as "jobs") to a batch system such as Slurm.
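
As an illustrative sketch, a minimal Slurm batch script for Cori might look like this (the node count, time limit, and program name are placeholders; check current queue policies for valid values):

#!/bin/bash
#SBATCH --qos=regular            # which queue (QOS) to run in
#SBATCH --nodes=2                # number of compute nodes
#SBATCH --time=00:30:00          # wall-clock time limit
#SBATCH --constraint=haswell     # Cori node type: haswell or knl

# launch a (hypothetical) MPI program across the allocation
srun -n 64 ./my_mpi_program

Submit it with sbatch, which prints the assigned job ID:

nersc$ sbatch myjob.sh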

Interactive Computing

NERSC also supports interactive computing.
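
For example, one common pattern is to request nodes through Slurm's interactive QOS and then launch work with srun; the node count, time, and program name here are placeholders:

nersc$ salloc --nodes=1 --constraint=haswell --qos=interactive --time=60
nersc$ srun -n 32 ./my_program   # runs on the node(s) just allocated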

Data Sharing

Security and Data Integrity

Sharing data with other users must be done carefully. Permissions should be set to the minimum necessary to achieve the desired access. For instance, consider carefully whether write permission is really necessary before granting it. Be sure to keep archived backups of any critical shared data. It is also important to ensure that private login secrets (such as SSH private keys or Apache .htaccess files) are not shared with other users, either intentionally or accidentally. Good practice is to keep such files in a separate directory that is locked down as tightly as possible.
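
For instance, a minimal sketch of locking down a directory holding login secrets (the paths are illustrative):

nersc$ chmod 700 ~/.ssh          # only you may enter or list the directory
nersc$ chmod 600 ~/.ssh/id_rsa   # only you may read the private key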

Sharing with Other Members of Your Project

NERSC's Community file system is set up with group read and write permissions and is ideal for sharing with other members of your project. There is a directory for every active project at NERSC and all members of that project should have access to it by default.
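
As a sketch, assuming a hypothetical project named m9999, its directory lives under /global/cfs/cdirs and carries group read/write permissions; the output line shown is illustrative:

nersc$ ls -ld /global/cfs/cdirs/m9999
drwxrws--- 12 elvis m9999 4096 Jan 15 09:30 /global/cfs/cdirs/m9999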

Sharing with NERSC Users Outside of Your Project

You can share files and directories with NERSC users outside of your project by adjusting the Unix file permissions. We maintain an extensive write-up of Unix file permissions and how they work.
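
For example, a hedged sketch of exposing a single file to all NERSC users (the paths are placeholders); note that every directory on the path must be searchable (o+x) for the file to be reachable:

nersc$ chmod o+x $HOME                      # others may traverse (but not list) your home
nersc$ chmod o+x $HOME/shared               # ...and the directory that holds the file
nersc$ chmod o+r $HOME/shared/results.txt   # the file itself becomes world-readable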

NERSC provides two commands, give and take, which are useful for sharing small amounts of data between users.

To send a file or path to <receiving_username>:

nersc$ give -u <receiving_username> <file or directory>

To receive a file sent by <sending_username>:

nersc$ take -u <sending_username> <filename>

To take all files from <sending_username>:

nersc$ take -a -u <sending_username>

To see what files <sending_username> has sent to you:

nersc$ take -u <sending_username>

For a full list of options pass the --help flag.

Warning

Files that remain untaken 12 weeks after being given will be purged from the staging area.

Sharing Data outside of NERSC

You can easily and quickly share data over the web using our Science Gateways framework.

You can also share large volumes of data externally by setting up a Globus Sharing Endpoint.

Data Transfers

NERSC partners with ESNet to provide a high speed connection to the outside world. NERSC also provides several tools and systems optimized for data transfer.

External Data Transfer

Tip

NERSC recommends transferring data to and from NERSC using Globus

Globus is a web-based service that solves many of the challenges encountered when moving data between systems. Globus provides the most comprehensive, efficient, and easy-to-use service for most NERSC users.

However, there are other tools available to transfer data between NERSC and other sites:

  • scp: a standard Linux utility suitable for smaller files (<1 GB); see the sketch after this list
  • GridFTP: parallel transfer software for large files
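
As an illustration, pushing a file to NERSC with scp through a data transfer node (the username, file name, and project directory are placeholders):

laptop$ scp results.tar.gz elvis@dtn01.nersc.gov:/global/cfs/cdirs/m9999/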

Transferring Data Within NERSC

Tip

"Do you need to transfer at all?" If your data is on NERSC Global File Systems (/global/cfs, /global/projecta, /global/cscratch), data transfer may not be necessary because these file systems are mounted on almost all NERSC systems. However, if you are doing a lot of IO with these files, you will benefit from staging them on the most performant file system. Usually that's the local scratch file system or the Burst Buffer.

  • Use standard Unix commands such as cp, tar, or rsync to copy files within the same computational system. For large amounts of data, use Globus to take advantage of its automatic retry functionality (see the sketch below)
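
For instance, a sketch of staging results from scratch to the Community File System with rsync (the project directory is a placeholder; $SCRATCH points at your scratch space):

nersc$ rsync -avh $SCRATCH/run_output/ /global/cfs/cdirs/m9999/run_output/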

Data Transfer Nodes

The Data Transfer Nodes (DTNs) are servers dedicated to data transfer, based on the ESnet Science DMZ model. DTNs are tuned to transfer data efficiently, are optimized for bandwidth, and have direct access to most NERSC file systems. These transfer nodes are configured within Globus as managed endpoints available to all NERSC users.

NERSC FTP Upload Service

NERSC maintains an FTP upload service designed so that external collaborators can send data to NERSC staff and users.

Getting Help

NERSC places a very strong emphasis on enabling science and providing user-oriented systems and services.

Documentation

NERSC maintains extensive documentation.

NERSC welcomes your contributions

These pages are hosted from a git repository and contributions are welcome!

Fork this repo

Account support

Availability

Account support is available 8-5 Pacific Time on business days.

Consulting

NERSC's consultants are HPC experts and can answer just about all of your technical questions.

Availability

Consulting is available 8-5 Pacific Time on business days.

Operations

For critical system issues only.

  • 1-800-666-3772 (USA only) or 1-510-486-8600, Option 1