...

The Engineering Research Compute Cluster (ENGR Cluster) is a medium-scale installation of heterogeneous compute nodes. It supports common computational tasks and specific Engineering software applications, and it includes specialized nodes dedicated to particular research groups.

Excluding faculty/lab-owned equipment, the structure of the ENGR cluster is broken down below, by queue.

Compute nodes are organized into queues, to which you submit jobs. Some special queues exist for particular labs or administrative purposes and are not listed here, nor are resources that sit in the generic queues but are locked to specific lab populations.

Queue: normal
Purpose: VNC & Batch Jobs
Inventory:
  • 480 CPU cores, avg. 8GB RAM per core
  • 8 NVIDIA GeForce GTX 1080 Ti
  • 4 NVIDIA GeForce RTX 2080
  • 4 NVIDIA GeForce GTX TITAN
  • 8 NVIDIA Tesla K40m
Notes: The K40 and TITAN GPUs are recommended for VNC use only, and may not be supported by current computational libraries.
Queue: interactive
Purpose: Interactive Jobs
Inventory: See above
Queue: cpu-compute-*
Purpose: Batch Jobs
Inventory:
  • 512 CPU cores
  • 100Gb/s InfiniBand interconnect
Notes: Three queues are defined:
  • cpu-compute: 7-day job time limit
  • cpu-compute-long: 21-day job time limit
  • cpu-compute-debug: 4-hour time limit; allows interactive jobs
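As an illustration, a minimal batch submission for the cpu-compute queue might look like the following. This is a sketch only: this page does not specify the cluster's scheduler or submission syntax, so the example assumes a Slurm-style scheduler with partition names matching the queue names above; the module and program names are hypothetical.

```shell
#!/bin/bash
# Hypothetical Slurm batch script -- the scheduler, option names, and module
# names below are assumptions, not confirmed by this page.
#SBATCH --partition=cpu-compute      # 7-day time limit queue (see above)
#SBATCH --time=3-00:00:00            # request 3 days, within the 7-day limit
#SBATCH --ntasks=32                  # 32 of the 512 available CPU cores
#SBATCH --mem-per-cpu=4G

module load mpi                      # hypothetical module name
srun ./my_solver input.dat           # hypothetical program and input
```

Short test runs fit better in cpu-compute-debug, which has a 4-hour limit and permits interactive jobs.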

Queue: gpu-compute-*
Purpose: Batch Jobs
Inventory:
  • 8 NVIDIA A100 80GB PCIe (1 system)
  • 16 NVIDIA A40 48GB (2 systems)
  • 768GB RAM per system
  • 100Gb/s InfiniBand interconnect
Notes: Three queues are defined:
  • gpu-compute: 7-day job time limit
  • gpu-compute-long: 21-day job time limit
  • gpu-compute-debug: 4-hour time limit; allows interactive jobs
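Because gpu-compute-debug allows interactive jobs, a short interactive GPU session could be requested roughly as follows. Again, this assumes a Slurm-style scheduler and GPU resource syntax, neither of which is confirmed by this page:

```shell
# Hypothetical Slurm interactive request -- syntax is an assumption.
# Asks for one GPU on the 4-hour debug queue and opens a shell on the node.
srun --partition=gpu-compute-debug --gres=gpu:1 --time=4:00:00 --pty bash
```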

Queue: linuxlab
Purpose: Interactive & Batch Jobs
Inventory:
  • 224 CPU cores, avg. 4GB RAM per core

Accessing the Research Compute Cluster

...