NASA Center for Computational Sciences

Batch Queues on the SGI Altix 3000

Read the documentation below the table to familiarize yourself with the restrictions on some of the queues. All queues except debug and pproc run on e1, e2, and e3. The default number of CPUs for any job is 2, and the default wall clock time is 5 minutes.
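
A job script can override these defaults with PBS directives. The sketch below assumes PBS-style ncpus and walltime resource names; the executable and resource values are placeholders:

    #!/bin/csh
    #PBS -l ncpus=8              # request 8 CPUs instead of the default 2
    #PBS -l walltime=2:00:00     # request 2 hours instead of the default 5 minutes

    cd $PBS_O_WORKDIR            # start in the directory qsub was invoked from
    ./myprogram                  # placeholder executable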

Special Queues on the SGI Altix 3000 System

Queue           Processors per job   Wall clock limit   Max jobs per user
--------------  -------------------  -----------------  -----------------
debug           -                    1 hour             4
datamove        2                    2 hours            2
pproc           16                   -                  6
general_small   18                   1 to 12 hours      5
general         16 to 254            -                  8
background      -                    -                  -
general_hi      up to 510            -                  -
high_priority   up to 510            -                  -

Queue priorities range from 1 (lowest) to 7 (highest).

debug

  • Routed specifically to the system’s front end, palm
  • Time constraint of 1 hour maximum per job
  • No more than 4 jobs in this queue can be run by the same user at the same time.

datamove

  • This queue is for moving data (data archival and staging jobs; no MPI/OpenMP processing).
  • No more than 2 jobs per user may run at one time.
  • Job size is limited to 2 processors.
  • Only 10 processors in total are set aside for this queue.
  • Jobs in this queue run on the backend systems (e1, e2, or e3, not palm).
  • To use this queue, specify the queue name with "-q datamove" on the qsub command line or "#PBS -q datamove" in the job script itself, as in the sketch below.
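
A sketch of a complete datamove job script follows; the copy command and archive path are hypothetical, while the queue name and the 2-processor, 2-hour limits come from the restrictions above:

    #!/bin/csh
    #PBS -q datamove             # route the job to the datamove queue
    #PBS -l ncpus=2              # datamove jobs may use at most 2 processors
    #PBS -l walltime=2:00:00     # datamove jobs are capped at 2 hours

    cd $PBS_O_WORKDIR
    cp -r results /archive/myuser/results   # hypothetical archival copy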

pproc

  • This queue is to be used for pre/post-processing work (not production runs).
  • Jobs are limited to no more than 16 processors.
  • To use this queue, specify the queue name with "-q pproc" on the qsub command line or "#PBS -q pproc" in the job script itself (see the command-line example after this list).
  • No more than 6 jobs in this queue can be run by the same user at the same time.
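
The same submission can be made entirely from the command line; the script name and walltime here are placeholders:

    qsub -q pproc -l ncpus=16 -l walltime=1:00:00 postproc.csh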

general_small

  • No more than 5 jobs in this queue can be run by the same user at the same time.
  • Job size is limited to 18 processors.
  • The minimum wall clock time has been lowered to 1 hour, so the queue allows walltimes of 1 to 12 hours.

general

  • No more than 8 jobs in this queue can be run by the same user at the same time.
  • Allows jobs sized between 16 and 254 processors (see the sketch of a production job below).
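
A sketch of a general-queue production job, assuming SGI's MPT mpirun launches the MPI ranks (the executable and rank count are placeholders):

    #!/bin/csh
    #PBS -q general              # production queue for 16 to 254 processors
    #PBS -l ncpus=64             # processor count within the 16-254 range
    #PBS -l walltime=12:00:00

    cd $PBS_O_WORKDIR
    mpirun -np 64 ./model.exe    # launch 64 MPI ranks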

background

  • Designed to have the lowest priority on the system and is targeted at two groups:
    • NCCS staff
    • Users who have used up all their allocated hours on the system
  • This queue is turned on only if there is no work waiting in the high_priority, general_hi, or general queues.
  • Note that users may still run jobs in the background queue even if they have a current allocation; use of this queue will not count against that allocation amount.

general_hi

  • Designated for users who have a demonstrated need for either very large or very long jobs
  • Must be approved by NCCS staff, the NCCS Director, or NASA HQ
  • Total number of processors available (and maximum number of processors per job) in this queue is 510

high_priority

  • Reserved for use on jobs designated by NCCS staff, the NCCS Director, or NASA Headquarters as needing priority within the overall workload
  • Total number of processors available (and maximum number of processors per job) in this queue is 510

 

