Ansys/Fluent

Using Ansys/Fluent in OnDemand

Open an OnDemand Cluster Desktop from https://compute.engr.wustl.edu

Open a terminal and execute the following command:

/project/research/ansys21/fluent21
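
For example, to start a 3D double-precision session on four local cores (the solver mode and core count here are only illustrative; adjust them for your own model):

/project/research/ansys21/fluent21 3ddp -t4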

Multi-node computing with the OnDemand GUI

Before starting, you should create an SSH key and run /project/compute/bin/update_hostkeys.sh to pre-accept all cluster host keys.
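
If you have not done this yet, a minimal sketch is below; the ed25519 key type is only an example, and the authorized_keys step assumes the cluster nodes share your home directory:

ssh-keygen -t ed25519
cat ~/.ssh/id_ed25519.pub >> ~/.ssh/authorized_keys
/project/compute/bin/update_hostkeys.sh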

Start a VNC job as normal. Once it is running, you can reserve the original Infiniband nodes with:

/project/research/ansys/reserve_fluent.sh X

where X is the number of cores you wish to reserve.
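
For example, to reserve 32 cores (the count is only an example):

/project/research/ansys/reserve_fluent.sh 32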

You may reserve nodes within the CPU Compute queue with:

/project/research/ansys/reserve_fluent_cpu.sh X
/project/research/ansys/reserve_fluent_cpu_long.sh X

for the 7-day or the long 21-day queue, respectively. Keep in mind that this reservation job will end independently of any other Ansys application using it as a target for jobs.
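
For example, a 64-core reservation in the long 21-day queue (again, an example core count):

/project/research/ansys/reserve_fluent_cpu_long.sh 64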

The script will output information you need to continue:

Starting a 4 node Fluent Reservation
Reserving a 4 IB CPU Fluent Job
...starting job:

Your job is:
1234567 seasuser   PEND  ib1        ssh.engr.wustl.edu    -        Fluent-MPI-Waiter Nov  1 11:11

Please look for a file in the root of your home directory :
fluenthosts.1234567
(where 1234567 is the job number you are given above)
and pass that to Fluent as:
fluent -cnf=/home/research/username/fluenthosts.1234567
  (where username is your own WUSTL Key)

You must remember to kill this job when you are done with Fluent!
Use the command:
bkill 1234567
The nodes you reserve will not be released until you do.

Note the job number and the "fluenthosts.1234567" filename; your number will be different for the job you start.
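
Before launching Fluent, you can confirm the reservation job is still running with the standard LSF bjobs command (using the placeholder job number from the example output):

bjobs 1234567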

Run Fluent in the VNC session as in this example, substituting the information above (replace X with the number of cores you reserved and username with your WUSTL Key) and modifying for your own needs:

/project/research/ansys21/fluent21 3ddp -tX -cnf=/home/research/username/fluenthosts.1234567 -pib.ofed -gui_machine=$HOSTNAME -i test.jou

For the normal queue via Ethernet, change

-pib.ofed

to

-peth.ofed
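
For example, the full launch line for the Ethernet case would look like this, with the same placeholders as above:

/project/research/ansys21/fluent21 3ddp -tX -cnf=/home/research/username/fluenthosts.1234567 -peth.ofed -gui_machine=$HOSTNAME -i test.jou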

The ANSYS GUI program can run in the queues dedicated to VNC jobs and can control jobs within the cpu-compute-* queues without issue.

Remember to kill the MPI reservation job when done!


Using Ansys/Fluent in Batch Mode


As with the GUI workflow, you should have created an SSH key and run /project/compute/bin/update_hostkeys.sh to pre-accept all cluster host keys.

A sample batch job for running Fluent is below, geared toward running an Infiniband-based MPI job in the cpu-compute queue. It expects to run from a current working directory of "/storage1/piname/Active/project/", which should be changed everywhere in the script to match your own requirements.

It asks for 64 total CPUs, tiled in groups of 8 across the available nodes (64 / 8 = 8 nodes), with 32 GB of RAM per node.
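
If you need a different layout, adjust the corresponding directives in the script; for example, a sketch requesting 32 cores tiled 16 per node (two nodes) would use:

#BSUB -n 32
#BSUB -R "span[ptile=16]"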

#BSUB -R '(!gpu)'
#BSUB -n 64
#BSUB -o /storage1/piname/Active/project/ansys.out
#BSUB -J fluentJob
#BSUB -R "rusage[mem=32]"
#BSUB -R "span[ptile=8]"
#BSUB -q cpu-compute

export LSF_ENABLED=1
cd $LS_SUBCWD

# Build the Fluent host file from LSF's allocation. LSB_MCPU_HOSTS is a
# space-separated list of "hostname core-count" pairs; each pair becomes one
# "hostname:cores" line in the file passed to Fluent via -cnf.
FL_SCHEDULER_HOST_FILE=lsf.${LSB_JOBID}.hosts
/bin/rm -rf ${FL_SCHEDULER_HOST_FILE}
if [ -n "$LSB_MCPU_HOSTS" ]; then
    HOST=""
    COUNT=0
    for i in $LSB_MCPU_HOSTS
    do
      if [ -z "$HOST" ]; then
         HOST="$i"                                   # first token of the pair: hostname
      else
         echo "$HOST:$i" >> $FL_SCHEDULER_HOST_FILE  # second token: cores on that host
         COUNT=`expr $COUNT + $i`                    # running total of allocated cores
         HOST=""
      fi
    done
fi

/project/research/ansys21/fluent21 2ddp -g -t64 -scheduler_tight_coupling -pib.ofed -i/storage1/piname/Active/project/testJournal.jou -pcheck -setenv=FLUENT_ARCH=lnamd64 -alnamd64 -env -setenv=LD_LIBRARY_PATH=/project/research/ansys21/gcc/lib2:/project/research/ansys21/gcc/lib2:/opt/ibm/lsfsuite/lsf/10.1/linux2.6-glibc2.3-x86_64/lib -setenv=FLUENT_ARCH=lnamd64 -cnf=${FL_SCHEDULER_HOST_FILE}

rm lsf.${LSB_JOBID}.hosts

---

The various BSUB parameters are explained earlier in this documentation.

The first part of the script captures the list of hosts the job has been assigned to, and is required.

The second part starts Fluent in 2ddp mode, specifying the number of tasks, scheduler options, Infiniband (OFED) MPI, the input journal file, and the parallel check option, and then setting various environment variables and architecture settings.

The last line cleans up the MPI hosts file.
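
To submit the job, save the script and pass it to bsub on standard input (the filename below is only an example):

bsub < fluent_batch.sh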