
Batch Queues on Lens


Batch queues are used by the cluster’s batch scheduler to aid in the organization of jobs. There are (2) major types of queues on Lens: computation queues and analysis queues.

Computation queues are the standard queues for production-type work. All users have access to the computation queues by default.

Analysis queues are specifically for analysis or visualization of data generated on flagship OLCF systems (e.g. Titan). Users must request access to the analysis queues by contacting the OLCF User Assistance Center.

In addition to the (2) types of queues, Lens contains (2) types of nodes. Each of the (32) GPU nodes contains (4) quad-core 2.3 GHz AMD Opteron processors, (64) GB of memory, and an NVIDIA Tesla C1060 GPU, while each of the (45) high-memory nodes contains (4) quad-core 2.3 GHz AMD Opteron processors and (128) GB of memory, but no GPU.

The table below summarizes the queues available on Lens:

Queue Name | Queue Type  | Max. Walltime | Available Node Types   | Max. Running Jobs | Max. Total Jobs | Preemption Policy
comp       | computation | 06:00:00      | All (77) nodes         | (1)               | (2)             | Can be preempted by jobs in analysis queues
comp_gpu   | computation | 06:00:00      | GPU nodes only         | (1)               | (2)             | Can be preempted by jobs in analysis queues
comp_mem   | computation | 06:00:00      | High-memory nodes only | (1)               | (2)             | Can be preempted by jobs in analysis queues
vis        | analysis    | 24:00:00      | All (77) nodes         | N/A               | N/A             | Can preempt jobs in computation queues
vis_gpu    | analysis    | 24:00:00      | GPU nodes only         | N/A               | N/A             | Can preempt jobs in computation queues
vis_mem    | analysis    | 24:00:00      | High-memory nodes only | N/A               | N/A             | Can preempt jobs in computation queues
Note: Jobs in the computation queues can be preempted by jobs in the analysis queues. If a pending job in one of the analysis queues requires resources currently in use by a computation-queue job, the computation job will be killed, and its owner will receive an email stating that the job was killed due to preemption.

If your jobs require resources outside of these limits, please complete the relevant request form on the Special Requests page.
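
For reference, the sketch below shows what a batch script targeting the comp_gpu queue within these limits might look like. It is an illustration only: it assumes Lens accepts standard PBS/Torque directives and qsub submission, and the project ID (PRJ123), job name, and executable (my_app) are placeholders.

#!/bin/bash
#PBS -A PRJ123                # placeholder project/account ID
#PBS -q comp_gpu              # computation queue restricted to the GPU nodes
#PBS -l walltime=06:00:00     # at or below the 6-hour computation-queue limit
#PBS -l nodes=1:ppn=16        # one node; (4) quad-core processors = 16 cores per node
#PBS -N gpu_job               # placeholder job name
#PBS -j oe                    # join stdout and stderr into one file

cd $PBS_O_WORKDIR             # run from the directory the job was submitted from
mpirun -n 16 ./my_app         # placeholder executable; the MPI launch command may differ on Lens

Such a script would be submitted with qsub (e.g., qsub gpu_job.pbs).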

Users may have at most (1) running job in each of the computation queues, and at most (2) jobs (in any state) in each of them.
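
To check how many jobs you currently have in each queue before submitting more, the scheduler's standard status commands can be used; assuming a PBS/Torque environment on Lens, for example:

qstat -u $USER     # list your jobs and their current states
qstat -q           # summarize the available queues and their limits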

Any job can request any number of processors (up to the maximum number physically available for the queue being used).
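
As an illustration, assuming PBS/Torque resource syntax, a request for 64 processors spread across (4) nodes (16 cores per node, since each node contains (4) quad-core processors) could be written as:

#PBS -l nodes=4:ppn=16    # 4 nodes x 16 cores per node = 64 processors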