...
The Engineering Research Compute Cluster (ENGR Cluster) is a medium-scale installation of heterogeneous compute nodes. It supports general computational workloads and specific Engineering software applications, and includes specialized nodes dedicated to particular research groups.
Excluding faculty- and lab-owned equipment, the structure of the ENGR cluster is shown below, with resources broken down by queue.
Compute nodes are organized into queues, and you submit jobs to a queue. Special queues that exist for particular labs or administrative purposes are not listed here, nor are resources that sit in the generic queues but are restricted to specific lab populations.
See The LSF Scheduler for more information on managing jobs.
Queue | Purpose | Inventory | Notes |
---|---|---|---|
normal | VNC & Batch Jobs | | The K40 and TITAN GPUs are recommended for VNC use only and may not be supported by current computational libraries. |
interactive | Interactive Jobs | See above | |
cpu-compute-* | Batch Jobs | | Three queues are defined: cpu-compute (7 day job time limit), cpu-compute-long (21 day job time limit), cpu-compute-debug (4 hour time limit, allows interactive jobs) |
gpu-compute-* | Batch Jobs | | Three queues are defined: gpu-compute (7 day job time limit), gpu-compute-long (21 day job time limit), gpu-compute-debug (4 hour time limit, allows interactive jobs) |
linuxlab | Interactive & Batch Jobs | | |
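As a quick orientation, the sketch below shows how you might inspect these queues and open a short interactive session under LSF. `bqueues` and `bsub -Is` are standard LSF commands; the queue names follow the table above, while the core count and memory request are illustrative assumptions (memory units depend on the site's LSF configuration).

```bash
# List the batch queues from the table above and their current load.
bqueues normal interactive cpu-compute cpu-compute-long cpu-compute-debug

# Start a short interactive shell in the debug queue, which allows
# interactive jobs per the table. Core count and memory are illustrative.
bsub -q cpu-compute-debug -Is -n 2 -R "rusage[mem=4096]" /bin/bash
```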
Accessing the Research Compute Cluster
...
The above starts Jupyter in a specific directory. You must have a keytab established, as described above, for this to work on RIS storage locations.
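If Jupyter cannot read your RIS storage location, a quick check of your Kerberos credentials can help. The snippet below is a minimal sketch using standard Kerberos tools; the keytab path and realm are placeholders, and the actual keytab setup is the procedure described earlier on this page.

```bash
# Show current Kerberos tickets; if none exist, obtain one from your keytab.
# The keytab path and realm below are placeholders; use the values from the
# keytab setup described above.
klist || kinit -kt ~/$USER.keytab $USER@EXAMPLE.REALM
```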
...
VSCode starts a Visual Studio Code interface in your browser.
Batch Jobs
Batch jobs are the most efficient way to perform computations on the cluster. You submit a job script file, which then runs on a compute node that meets your resource requirements. The job runs unattended, without any monitoring on your part.
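The sketch below shows what such a job script can look like. `#BSUB` directives and `bsub` are standard LSF; the queue name comes from the table above, while the job name, resource values, wall-clock limit, and workload command are placeholder assumptions to adapt to your own job.

```bash
#!/bin/bash
# Minimal LSF batch script (illustrative values; adjust to your job).
#BSUB -q cpu-compute            # 7-day batch queue from the table above
#BSUB -J my_analysis            # job name (placeholder)
#BSUB -n 4                      # number of CPU cores
#BSUB -R "rusage[mem=8192]"     # memory request (units depend on site config)
#BSUB -W 24:00                  # wall-clock limit, hh:mm
#BSUB -o my_analysis.%J.out     # stdout file; %J expands to the job ID
#BSUB -e my_analysis.%J.err     # stderr file

python3 my_script.py            # your workload (placeholder)
```

Submit it with `bsub < my_analysis.bsub`; LSF reads the `#BSUB` directives from the script header when the script is passed on standard input.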
Software on the Compute Cluster
...