...

Compute nodes are organized into queues, to which you submit jobs. Some special queues exist for particular labs or administrative purposes and are not included here; nor are resources that sit in the generic queues but are locked to specific lab populations.

See The LSF Scheduler for more information on managing jobs.
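
Jobs are submitted to a queue with LSF's bsub command. A minimal sketch, assuming a batch script named my_job.sh (the core count and memory request are placeholders, and memory units depend on the cluster's LSF configuration):

  # Submit my_job.sh to the normal queue with 4 cores and a memory request.
  bsub -q normal -n 4 -R "rusage[mem=8192]" -J my_job -o my_job.%J.out -e my_job.%J.err ./my_job.sh

  # Inspect your running jobs and the available queues.
  bjobs
  bqueues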

Each queue is listed below with its purpose, inventory, and notes.

Queue: normal
Purpose: VNC & Batch Jobs
Inventory:
  • 480 CPU Cores, avg. 8GB RAM per core
    • 4 Nodes: 36 Cores
    • 10 Nodes: 24 Cores
    • 6 Nodes: 16 Cores
    • 1 Node: 12 Cores
  • 8 NVIDIA GeForce GTX 1080 Ti (1 Node)
  • 4 NVIDIA GeForce RTX 2080 (1 Node)
  • 4 NVIDIA GeForce GTX TITAN
  • 8 Tesla K40m
    (the above GPUs are split over 3 nodes)
Notes: The K40 and TITAN GPUs are recommended for VNC use only and may not be supported by current computational libraries.

Queue: interactive
Purpose: Interactive Jobs
Inventory: See above (the normal queue)
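
For interactive work, a session with a pseudo-terminal can be requested on the interactive queue. A minimal sketch, assuming bash as the shell:

  # Start an interactive shell on the interactive queue, with one core.
  bsub -q interactive -Is -n 1 bash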

Queue: cpu-compute-*
Purpose: Batch Jobs
Inventory:
  • 512 CPU Cores
    • 4 Nodes: 128 Cores, 4GB RAM/core
  • 100 Gb/s InfiniBand interconnect
Notes: Three queues are defined:
  • cpu-compute: 7 day job time limit
  • cpu-compute-long: 21 day job time limit
  • cpu-compute-debug: 4 hour time limit, allows interactive jobs
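
A minimal sketch of submissions to the cpu-compute queues. The script name run.sh, the core counts, and the wall-clock requests are placeholders; bsub's -W limit is given as HH:MM and must fit within the queue's time limit.

  # cpu-compute (7 day limit): request one full 128-core node for 5 days.
  bsub -q cpu-compute -n 128 -W 120:00 -o run.%J.out ./run.sh

  # cpu-compute-debug (4 hour limit): short interactive session for testing.
  bsub -q cpu-compute-debug -Is -n 4 -W 1:00 bash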

Queue: gpu-compute-*
Purpose: Batch Jobs
Inventory:
  • 8 NVIDIA A100 80GB PCIe (1 Node)
  • 16 NVIDIA A40 48GB (2 Nodes)
  • 768GB RAM per Node
  • 100 Gb/s InfiniBand interconnect
Notes: Three queues are defined:
  • gpu-compute: 7 day job time limit
  • gpu-compute-long: 21 day job time limit
  • gpu-compute-debug: 4 hour time limit, allows interactive jobs
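
GPU jobs on these queues are typically requested with bsub's -gpu option (available in recent LSF releases); the exact GPU-request syntax required here depends on the cluster's LSF configuration, and the script name train.sh is a placeholder.

  # gpu-compute (7 day limit): request one GPU and 8 cores for 2 days.
  bsub -q gpu-compute -gpu "num=1" -n 8 -W 48:00 -o train.%J.out ./train.sh

  # gpu-compute-debug (4 hour limit): short interactive GPU session.
  bsub -q gpu-compute-debug -gpu "num=1" -Is bash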

Queue: linuxlab
Purpose: Interactive & Batch Jobs
Inventory:
  • 224 CPU Cores, avg. 4GB RAM per core

...