
Optimizing I/O performance on the Lustre file system

Lustre File Striping

Hopper uses Lustre for its $SCRATCH file systems. For many applications, a technique called file striping will increase IO performance. File striping primarily improves performance for codes doing serial IO from a single node, or parallel IO from multiple nodes writing to a single shared file as with MPI-IO, parallel HDF5, or parallel NetCDF.

The Lustre file system is made up of an underlying set of IO servers called Object Storage Servers (OSSs) and disks called Object Storage Targets (OSTs). A file is said to be striped when read and write operations access multiple OSTs concurrently. File striping is a way to increase IO performance, since writing to or reading from multiple OSTs simultaneously increases the available IO bandwidth.

NERSC Striping Shortcuts

  • The default striping is 2 on $SCRATCH and $SCRATCH2
  • This means that each file created with the default striping is split across 2 OSTs (disks).
  • Selecting the best striping can be complicated: striping a file over too few OSTs will not take advantage of the system's available bandwidth, while striping over too many will cause unnecessary overhead and lead to a loss in performance.
  • NERSC has provided striping command shortcuts based on output file size to simplify optimization.
  • Users who want more detail should read the sections below or contact the consultants at consult@nersc.gov.
Striping Shortcut Commands
Single File I/O: either one processor does all the I/O for a simulation in serial, or multiple processors write to a single shared file as with MPI-IO, HDF5, or PnetCDF.
File-per-Processor I/O: each processor writes to its own file, resulting in as many files as processors used (for a given output).

Size of File        Single File I/O                      File-per-Processor I/O
< 1GB               Do nothing; use default striping     stripe_fpp or keep default striping
~1GB - ~10GB        stripe_small                         stripe_fpp or keep default striping
~10GB - ~100GB      stripe_medium                        stripe_fpp or keep default striping
~100GB - 1TB+       stripe_large                         Ask consultants

Striping must be set on a directory before the files in it are created. Once a file has been created with a certain striping, the only way to change it is to copy the file into a directory with a different striping. For example, to apply the medium striping shortcut to a directory:

% stripe_medium [mydirectory]

Or put the striping command directly into the job script:

#PBS -q debug
#PBS -l mppwidth=8
#PBS -l walltime=00:10:00
#PBS -j eo
#PBS -V

cd $PBS_O_WORKDIR

# Set striping on the output directory before the application writes to it
stripe_medium myOutputDir

aprun -n 8 ./a.out


More Details on File Striping

To set striping for a file or directory, use the command:

% lfs setstripe

Each file and directory can have its own striping pattern, and a directory's striping setting can be overridden for a particular file by issuing the lfs setstripe command for that individual file within the directory. However, the striping settings for a file must be set before it is created: changing the settings of an existing file has no effect unless the file is recreated, and if the settings of an existing directory are changed, its files must be copied elsewhere and then copied back into the directory in order to inherit the new settings. The lfs setstripe syntax is:

% lfs setstripe --size [stripe-size] --index [OST-start-index] --count [stripe-count] filename

lfs setstripe arguments
Option            Description                                                       Default
stripe-size       Number of bytes written to one OST before cycling to the next.   1MB
                  Use multiples of 1MB; the default has been the most successful.
OST-start-index   Starting OST. The default is highly recommended: with -1, the    -1
                  system follows a round-robin procedure to optimize creation of
                  files by all users.
stripe-count      Number of OSTs (disks) a file exists on [1-48].                   2

For large shared files, set the stripe-count to something large, up to 156 on Hopper, to get maximum bandwidth. (stripe_large uses the options shown below.)

% lfs setstripe --count 80 myfile

For applications where each processor writes its own file, set the stripe-count to 1. (This is the setting used by stripe_fpp.)

% lfs setstripe --count 1 myfile

An application's best performance will likely fall somewhere between these two extremes.
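
For shared files that are created through MPI-IO, a stripe count and stripe size can also be requested programmatically at file-creation time through the reserved striping_factor and striping_unit info hints. The sketch below is only an illustration, not a NERSC-provided example: the file name and hint values are arbitrary, whether the hints are honored depends on the MPI-IO implementation, and (as with lfs setstripe) they have no effect on a file that already exists.

#include <mpi.h>

/* Sketch: requesting a Lustre striping pattern when an MPI-IO shared file
 * is created, using the reserved "striping_factor" (stripe count) and
 * "striping_unit" (stripe size in bytes) hints. The values are examples
 * only; the hints are only meaningful when the file is first created. */
int main(int argc, char **argv)
{
    MPI_Init(&argc, &argv);

    MPI_Info info;
    MPI_Info_create(&info);
    MPI_Info_set(info, "striping_factor", "80");     /* stripe count */
    MPI_Info_set(info, "striping_unit", "1048576");  /* 1MB stripe size */

    MPI_File fh;
    MPI_File_open(MPI_COMM_WORLD, "striped_output.dat",
                  MPI_MODE_CREATE | MPI_MODE_WRONLY, info, &fh);

    /* ... collective writes to the shared file ... */

    MPI_File_close(&fh);
    MPI_Info_free(&info);
    MPI_Finalize();
    return 0;
}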

The lfs getstripe command will give the striping pattern for a file or directory.

% lfs getstripe testdir


testdir
stripe_count: 11 stripe_size: 0 stripe_offset: -1


MPI-IO Collective Mode

Collective mode refers to a set of optimizations available in many implementations of MPI-IO that improve the performance of large-scale IO to shared files. To enable these optimizations, you must use the collective calls in the MPI-IO library that end in _all, for instance MPI_File_write_at_all(). Also, all MPI tasks in the given MPI communicator must participate in the collective call, even if they are not performing any IO operations. The MPI-IO library has a heuristic to determine whether to enable collective buffering, the primary optimization used in collective mode.
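
As a concrete illustration (not drawn from NERSC example codes), a minimal collective shared-file write in C might look like the sketch below; the file name, block size, and offset scheme are arbitrary choices for this example.

#include <mpi.h>
#include <stdlib.h>

#define BLOCK_DOUBLES 1048576   /* doubles written per rank (arbitrary) */

/* Sketch: every rank writes one contiguous block to a disjoint offset in a
 * single shared file through the collective call MPI_File_write_at_all.
 * All ranks in the communicator must make the collective call, even a rank
 * that had no data to write. */
int main(int argc, char **argv)
{
    MPI_Init(&argc, &argv);

    int rank;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    double *buf = malloc(BLOCK_DOUBLES * sizeof(double));
    for (int i = 0; i < BLOCK_DOUBLES; i++)
        buf[i] = (double)rank;

    MPI_File fh;
    MPI_File_open(MPI_COMM_WORLD, "shared_output.dat",
                  MPI_MODE_CREATE | MPI_MODE_WRONLY, MPI_INFO_NULL, &fh);

    /* Each rank's block lands at an offset computed from its rank. */
    MPI_Offset offset = (MPI_Offset)rank * BLOCK_DOUBLES * sizeof(double);

    /* The _all suffix marks the call as collective, allowing the MPI-IO
     * library to apply collective buffering. */
    MPI_File_write_at_all(fh, offset, buf, BLOCK_DOUBLES, MPI_DOUBLE,
                          MPI_STATUS_IGNORE);

    MPI_File_close(&fh);
    free(buf);
    MPI_Finalize();
    return 0;
}

Compiled with the MPI compiler wrapper (cc on the Cray systems) and launched with aprun, each rank contributes its block to the shared file through a single collective call.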

Collective buffering, also called two-phase IO, breaks the IO operation into two stages. For a collective read, the first stage uses a subset of MPI tasks (called aggregators) to communicate with the IO servers (OSTs in Lustre) and read a large chunk of data into a temporary buffer. In the second stage, the aggregators ship the data from the buffer to its destination among the remaining MPI tasks using point-to-point MPI calls. A collective write does the reverse, aggregating the data through MPI into buffers on the aggregator nodes, then writing from the aggregator nodes to the IO servers. The advantage of collective buffering is that fewer nodes are communicating with the IO servers, which reduces contention. In fact, Lustre prefers a one-to-one mapping of aggregator nodes to OSTs.

Since the release of mpt/3.3, Cray has included a Lustre-aware implementation of the MPI-IO collective buffering algorithm. This implementation is able to buffer data on the aggregator nodes into stripe-sized chunks so that all reads and writes to the Lustre file system are automatically stripe-aligned, without requiring any padding or manual alignment from the developer. Because of the way Lustre is designed, alignment is a key factor in achieving optimal performance.

Several environment variables can be used to control the behavior of collective buffering on Franklin. The MPICH_MPIIO_HINTS variable specifies hints to the MPI-IO library that can, for instance, override the built-in heuristic and force collective buffering on:

% setenv MPICH_MPIIO_HINTS "*:romio_cb_write=enable:romio_ds_write=disable"

Placing this command in your batch script before calling aprun will cause your program to use these hints. The * indicates that the hint applies to every file opened by MPI-IO; romio_cb_write controls collective buffering for writes, and romio_ds_write controls data sieving for writes, an older collective-mode optimization that is no longer used and can interfere with collective buffering. The valid values for these hints are enable, disable, and automatic (the default, which uses the built-in heuristic).
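
The same hints can also be supplied from inside a program through an MPI_Info object passed to MPI_File_open, instead of (or in addition to) the environment variable. A minimal sketch, with an arbitrary file name:

#include <mpi.h>

/* Sketch: setting the MPI-IO hints from the text above through an MPI_Info
 * object rather than the MPICH_MPIIO_HINTS environment variable. */
int main(int argc, char **argv)
{
    MPI_Init(&argc, &argv);

    MPI_Info info;
    MPI_Info_create(&info);
    MPI_Info_set(info, "romio_cb_write", "enable");   /* force collective buffering for writes */
    MPI_Info_set(info, "romio_ds_write", "disable");  /* turn off data sieving for writes */

    MPI_File fh;
    MPI_File_open(MPI_COMM_WORLD, "shared_output.dat",
                  MPI_MODE_CREATE | MPI_MODE_WRONLY, info, &fh);

    /* ... collective reads/writes as in the earlier sketch ... */

    MPI_File_close(&fh);
    MPI_Info_free(&info);
    MPI_Finalize();
    return 0;
}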

It is also possible to control the number of aggregator nodes using the cb_nodes hint, although the MPI-IO library will automatically set this to the stripe count of your file.

When set to 1, the MPICH_MPIIO_HINTS_DISPLAY variable causes your program to dump a summary of the current MPI-IO hints to stderr each time a file is opened. This is useful for debugging and as a sanity check against spelling errors in your hints.

The MPICH_MPIIO_XSTATS variable enables profiling of the collective buffering algorithm, including binning of read/write calls and timings for the two phases. Setting it to 1 provides summary data, while setting it to 2 or 3 provides more detail.

More detail on MPICH runtime environment variables, including a full list and description of MPI-IO hints, is available from the intro_mpi man page on Franklin.