Apptainer Example: svFSIplus

svFSIplus is a C++ rework of svFSI, a multi-physics finite element solver designed for computational modeling of the cardiovascular system.


As with many research software projects, svFSI is a moving target for system administrators - there is a basic tension between providing stable environments and meeting the needs of cutting-edge software. Containers help bridge that gap by allowing users to run software with particular requirements on almost any system.

The McKelvey Engineering cluster supports Singularity/Apptainer, and below is how an svFSIplus container was built for that cluster.

All of this was done on build.engr.wustl.edu, a node provided specifically for McKelvey users to create Apptainer containers. These instructions assume you are logged in either there or on another machine where you have sudo access - a local Linux desktop or VM.

One of the easiest ways to find out how to build a particular piece of software these days is to check whether the project publishes a Dockerfile for creating Docker containers. Dockerfiles can be converted to Apptainer recipes, but here we instead go through the steps manually to produce our container. If we needed to rebuild this software or container often, converting the Dockerfile would be warranted.
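
If automatic conversion is wanted later, a third-party tool such as spython (Singularity Python) can translate a Dockerfile into an Apptainer definition file. A rough sketch, with placeholder file names, assuming spython is installed via pip:

pip install spython
spython recipe dockerfile > svfsiplus.def    # translate the Dockerfile into a definition file (file names here are placeholders)
sudo apptainer build svfsiplus-auto.sif svfsiplus.def    # then build a SIF from the generated recipe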

svFSIplus's Dockerfile is found here:

https://github.com/SimVascular/svFSIplus/blob/main/Docker/ubuntu22/dockerfile#L4

The Dockerfile is broken up into sections, one per prerequisite, in order of need. For this document, we're going to gloss over much of the Dockerfile structure, especially the parts we don't need for this particular exercise, and highlight the important parts by line number as shown on the GitHub page.

This document was written against the Dockerfile in that repository as of 7/26/24.

Line 5: FROM ubuntu:22.04 AS buildcmake

This tells us our base image. On build.engr.wustl.edu, we'll create an Apptainer sandbox with that same base:

sudo apptainer build --sandbox ubuntu22-svfsiplus docker://ubuntu:22.04

Once that finishes, we enter the sandbox in writable mode:

sudo apptainer shell --writable ubuntu22-svfsiplus

Now we're ready to start prepping the base of the container. First, we look through the Dockerfile for (since this is Ubuntu) lines containing "apt-get update" and "apt-get install". We find those starting on Line 17, and it so happens they are the same for each software section. We'll go ahead and run them in the container:

apt-get update
apt-get install build-essential wget git git-lfs python3 gfortran default-jdk default-jre libglu1-mesa-dev freeglut3-dev mesa-common-dev openssl libssl-dev zlib1g-dev libicu-dev python-is-python3

We've added one package above - "python-is-python3". Ubuntu doesn't set a default "/usr/bin/python" unless you tell it to, and future steps expect that to be in place. This package fixes that. "python-is-python2" also exists, if the default must be Python 2.
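
A quick, optional sanity check inside the container to confirm the package did what we want (illustrative):

readlink /usr/bin/python    # should point at python3
python --version            # should report a Python 3.x version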

We also add git-lfs, since svFSIplus will need it when we check out the repository later - that way we get the example/test files.
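
Depending on how git-lfs behaves in a fresh container, you may also need to activate it once before cloning so LFS-tracked files are fetched automatically; a hedged one-liner:

git lfs install    # one-time setup that hooks git-lfs into git inside the container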

Before we continue, we need to create a build area. For this exercise we're going to do that in the container itself. That is where we will download and compile the software. When we're done, we'll move it out of the container tree to save space.

mkdir /usr/local/sv

We are doing this in /usr/local because that location is on the regular search paths for binaries and libraries, so we don't have to worry about setting up a full environment when running the container.


cmake

We find the instructions for cmake starting on Line 29. There is a ${CMAKE_VERSION} variable there, as there will be for many of these prerequisites; it is defined on Line 11. Putting that together, we download and unpack the software:

cd /usr/local/sv
mkdir cmake
cd cmake
wget https://github.com/Kitware/CMake/releases/download/v3.29.0/cmake-3.29.0.tar.gz
tar zxvpf cmake-3.29.0.tar.gz
cd cmake-3.29.0

The "RUN" lines of a Dockerfile are commands to execute. The WORKDIR lines are how a Dockerfile changes directories. Throughout this example, the Dockerfile builds and installs everything under "/program". We're not doing that, since we aren't gluing the Dockerfile stages together at the end.
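
As a purely illustrative mapping (these are not lines from the actual Dockerfile), a WORKDIR/RUN pair and the by-hand equivalent we use in the sandbox look roughly like this:

# Dockerfile:  WORKDIR /program/cmake
cd /usr/local/sv/cmake            # we change directory ourselves, under /usr/local/sv instead of /program
# Dockerfile:  RUN ./configure && make
./configure --prefix=/usr/local   # we run the same commands by hand at the shell prompt
make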

We're going to modify the commands to build and install with the assumption that we are working underneath /usr/local/sv. We will spell out complete directory names where appropriate, for ease of understanding.

We're also not cleaning up after ourselves at each step, as we'll do that all at the end.

"make -j6" tells the system to use 6 threads to compile, which gets the build done a bit quicker. Don't use much more than that on build.engr.wustl.edu, so you don't overly slow down other users.
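
If you are instead building on your own machine or VM, you can check how many cores are available and size the -j value accordingly:

nproc    # prints the number of CPU cores visible to the shell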

To build cmake:

./configure --prefix=/usr/local
make -j6
make install

openmpi

Line 73

cd /usr/local/sv
mkdir openmpi
cd openmpi
wget https://download.open-mpi.org/release/open-mpi/v5.0/openmpi-5.0.2.tar.gz
tar zxvpf openmpi-5.0.2.tar.gz
cd openmpi-5.0.2
./configure --prefix=/usr/local
make -j6 all
make install

VTK

Line 119

cd /usr/local/sv
mkdir vtk
cd vtk
wget https://www.vtk.org/files/release/9.3/VTK-9.3.0.tar.gz
tar zxvpf VTK-9.3.0.tar.gz
mkdir build
cd build
cmake -DBUILD_SHARED_LIBS:BOOL=OFF \
  -DCMAKE_BUILD_TYPE:STRING=RELEASE \
  -DBUILD_EXAMPLES=OFF \
  -DBUILD_TESTING=OFF \
  -DVTK_USE_SYSTEM_EXPAT:BOOL=ON \
  -DVTK_USE_SYSTEM_ZLIB:BOOL=ON \
  -DVTK_LEGACY_REMOVE=ON \
  -DVTK_Group_Rendering=OFF \
  -DVTK_Group_StandAlone=OFF \
  -DVTK_RENDERING_BACKEND=None \
  -DVTK_WRAP_PYTHON=OFF \
  -DModule_vtkChartsCore=ON \
  -DModule_vtkCommonCore=ON \
  -DModule_vtkCommonDataModel=ON \
  -DModule_vtkCommonExecutionModel=ON \
  -DModule_vtkFiltersCore=ON \
  -DModule_vtkFiltersFlowPaths=ON \
  -DModule_vtkFiltersModeling=ON \
  -DModule_vtkIOLegacy=ON \
  -DModule_vtkIOXML=ON \
  -DVTK_GROUP_ENABLE_Views=NO \
  -DVTK_GROUP_ENABLE_Web=NO \
  -DVTK_GROUP_ENABLE_Imaging=NO \
  -DVTK_GROUP_ENABLE_Qt=DONT_WANT \
  -DVTK_GROUP_ENABLE_Rendering=DONT_WANT \
  -DCMAKE_INSTALL_PREFIX=/usr/local \
  /usr/local/sv/vtk/VTK-9.3.0
cmake --build . --parallel 4
make install

cmake differs from make in that it usually wants you to create a separate build directory to compile in, rather than building directly in the unpacked source tree.
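
The general out-of-source pattern, sketched here with placeholder paths rather than a specific prerequisite, is:

mkdir build
cd build
cmake /path/to/unpacked/source    # configure from a separate build directory
cmake --build . --parallel 4      # compile
make install                      # install to the configured prefix (with the default Makefile generator)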

Boost

Line 190

cd /usr/local/sv
mkdir boost
cd boost
wget https://boostorg.jfrog.io/artifactory/main/release/1.84.0/source/boost_1_84_0.tar.gz
tar zxvpf boost_1_84_0.tar.gz
cd boost_1_84_0
./bootstrap.sh --prefix=/usr/local/
./b2 install

Lapack

Line 233

cd /usr/local/sv
mkdir lapack
cd lapack
git clone https://github.com/Reference-LAPACK/lapack.git
mkdir build
cd build
cmake -DBUILD_SHARED_LIBS=ON -DCMAKE_INSTALL_LIBDIR=/usr/local/lib /usr/local/sv/lapack/lapack
cmake --build . -j6 --target install

BLAS is commented out in the Dockerfile, so we skip it. We're using the BLAS library from LAPACK.

HDF5

Line 330

cd /usr/local/sv
mkdir hdf5
cd hdf5
git clone https://github.com/HDFGroup/hdf5.git
mkdir build
cd build
cmake -C /usr/local/sv/hdf5/hdf5/config/cmake/cacheinit.cmake -G "Unix Makefiles" \
  -DHDF5_ENABLE_NONSTANDARD_FEATURE_FLOAT16:BOOL=OFF \
  -DHDF5_BUILD_JAVA:BOOL=OFF \
  -DHDF5_ENABLE_PARALLEL:BOOL=ON \
  -DALLOW_UNSUPPORTED:BOOL=ON \
  -DCMAKE_BUILD_TYPE=Release \
  -DCMAKE_INSTALL_PREFIX=/usr/local \
  ../hdf5
cmake --build .
make install

Hypre

Line 387

cd /usr/local/sv
mkdir hypre
cd hypre
git clone https://github.com/hypre-space/hypre.git
cd hypre/src
./configure --prefix=/usr/local
make install

Trilinos

Line 436

svFSIplus, compiled last, isn't actually set to use Trilinos by default. We're including it here for completeness, in case it's needed in the future.

This is also where we found the reference to needing to link python to python3, on line 433.

cd /usr/local/sv
mkdir trilinos
cd trilinos
git clone https://github.com/trilinos/Trilinos.git
mkdir build
cd build
cmake -DCMAKE_INSTALL_PREFIX=/usr/local \
  -DTPL_ENABLE_MPI=ON \
  -DTPL_ENABLE_Boost=ON \
  -DBoost_LIBRARY_DIRS=/usr/local/lib \
  -DBoost_INCLUDE_DIRS=/usr/local/include \
  -DTPL_ENABLE_BLAS=ON \
  -DBLAS_LIBRARY_DIRS=/usr/local/lib \
  -DTPL_ENABLE_HDF5=ON \
  -DHDF5_LIBRARY_DIRS=/usr/local/lib \
  -DHDF5_INCLUDE_DIRS=/usr/local/include \
  -DTPL_ENABLE_HYPRE=ON \
  -DHYPRE_LIBRARY_DIRS=/usr/local/lib \
  -DHYPRE_INCLUDE_DIRS=/usr/local/include \
  -DTPL_ENABLE_LAPACK=ON \
  -DLAPACK_LIBRARY_DIRS=/usr/local/lib \
  -DCMAKE_C_COMPILER=/usr/local/bin/mpicc \
  -DCMAKE_CXX_COMPILER=/usr/local/bin/mpicxx \
  -DCMAKE_Fortran_COMPILER=/usr/local/bin/mpif90 \
  -DTrilinos_ENABLE_Epetra=ON \
  -DTrilinos_ENABLE_AztecOO=ON \
  -DTrilinos_ENABLE_Ifpack=ON \
  -DTrilinos_ENABLE_EpetraEXT=ON \
  -DTrilinos_ENABLE_Amesos=ON \
  -DTrilinos_ENABLE_ML=ON \
  -DTrilinos_ENABLE_MueLU=ON \
  -DTrilinos_ENABLE_ROL=ON \
  -DTrilinos_ENABLE_Sacado=ON \
  -DTrilinos_ENABLE_Teuchos=ON \
  -DTrilinos_ENABLE_Zoltan=ON \
  -DTrilinos_ENABLE_Gtest=OFF \
  /usr/local/sv/trilinos/Trilinos
make -j6 install

PETSc

Line 522

PETSc is also not used by default.

cd /usr/local/sv
mkdir petsc
cd petsc
git clone -b release https://gitlab.com/petsc/petsc.git
cd petsc
./configure --prefix=/usr/local --with-debugging=0 --with-precision=double \
  --download-suitesparse --download-mumps --download-superlu --download-superlu_dist \
  --download-ml --download-eigen --download-hypre --with-mpi-dir=/usr/local \
  --with-blas-lib=/usr/local/lib/libblas.so --with-lapack-lib=/usr/local/lib/liblapack.so \
  --download-scalapack --download-metis --download-parmetis --with-strict-petscerrorcode \
  --with-mpi-compilers=1 COPTFLAGS='-g -O' FOPTFLAGS='-g -O' CXXOPTFLAGS='-g -O'
make PETSC_DIR=/usr/local/sv/petsc/petsc PETSC_ARCH=arch-linux-c-opt all
make PETSC_DIR=/usr/local/sv/petsc/petsc PETSC_ARCH=arch-linux-c-opt install
make PETSC_DIR=/usr/local PETSC_ARCH="" check

We skipped Google Test as well as Conda.

svFSIplus

This part is not in the Dockerfile - the Dockerfile constructs a base image that is ready to compile svFSIplus, and we're going to complete that step here. The instructions for svFSIplus are here:
https://simvascular.github.io/svFSIplus/index.html

cd /usr/local/sv
mkdir svFSIplus-package
cd svFSIplus-package
git clone https://github.com/SimVascular/svFSIplus
mkdir build
cd build/
cmake /usr/local/sv/svFSIplus-package/svFSIplus/
make -j4
cd svFSI-build
cp bin/* /usr/local/bin
cp lib/* /usr/local/lib

The default instructions put svFSI under /usr/local/SV. We don't want to have to worry about setting paths, so we manually copy the binaries to /usr/local/bin and the libraries to /usr/local/lib.
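
Since we copied shared libraries into /usr/local/lib by hand, it doesn't hurt to refresh the dynamic linker cache while still inside the container - a small optional step that is not in the original Dockerfile:

ldconfig    # rebuild the shared library cache so the new libraries in /usr/local/lib are found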

Testing

Before we exit the container, we can test. The full documentation shows how to run the complete test suite, which we'd do (with some modification) if we were running the container properly. Here we'll pick a small case to run manually to prove we compiled things OK.

cd /usr/local/sv/svFSIplus-package/svFSIplus
cd tests/cases/fluid/pipe_RCR_3d
svFSIplus svFSI.xml

---------------------------------------------------------------------
 Eq   N-i   T          dB    Ri/R1      Ri/R0      R/Ri        lsIt  dB    %t
---------------------------------------------------------------------
 NS   1-1   5.700e-01  [0    1.000e+00  1.000e+00  8.349e-12]  [158  -255  75]
 NS   1-2   1.714e+00  [-57  1.372e-03  1.372e-03  1.530e-11]  [253  -106  89]
 NS   1-3   2.085e+00  [-125 5.068e-07  5.068e-07  1.716e-05]  [117  -110  67]
 NS   1-4   2.208e+00  [-197 1.300e-10  1.300e-10  6.859e-02]  [7    -27   4]
 NS   1-5   2.342e+00  [-220 8.919e-12  8.919e-12  1.000e+00]  !0    0     0!
 NS   2-1   3.503e+00  [0    1.000e+00  2.856e+01  9.945e-13]  [283  -129  90]
 NS   2-2   3.974e+00  [-75  1.586e-04  4.529e-03  1.945e-09]  [143  -201  75]
 NS   2-3   4.363e+00  [-146 4.871e-08  1.391e-06  6.474e-06]  [123  -119  70]
 NS   2-4   4.489e+00  [-216 1.483e-11  4.234e-10  1.795e-02]  [11   -40   6]
 NS   2-5s  4.628e+00  [-251 2.663e-13  7.606e-12  1.000e+00]  !0    0     0!

Cleanup and Packaging

Before we go, we should clean up the APT cache and package lists, where the packages we installed in the first step were downloaded:

apt-get clean
rm -rf /var/lib/apt/lists/*

We can then exit the container by typing "exit". That should land you back at the same terminal, but outside the container. We left a large compilation tree inside the container under /usr/local/sv. We can either move it out of the container tree to keep it, or delete it outright. This example moves it, in case it needs to be referenced later.

mkdir svfsiplus-buildtree
mv ubuntu22-svfsiplus/usr/local/sv svfsiplus-buildtree
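
To see what the move saved, the sizes can be compared afterwards (illustrative):

du -sh svfsiplus-buildtree     # the build tree we moved out
du -sh ubuntu22-svfsiplus      # the sandbox after the move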

We then want to build the container into a SIF file.

sudo apptainer build svFSIplus.sif ubuntu22-svfsiplus

That leaves us with the svFSIplus.sif file, which we can copy someplace appropriate for our lab or ourselves. This one is only 773MB, so it could reside in our home directory (15GB quota) without taking up too much space.
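
Before copying it anywhere, you can confirm the file and check its metadata:

ls -lh svFSIplus.sif             # confirm the file exists and check its size
apptainer inspect svFSIplus.sif  # show the container's metadata and labels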

To use the container, we'd simply call the binary we left in there, referencing the container wherever it resides:

apptainer run /project/engineering/svfsiplus/svfsiplus.sif svFSIplus svFSI.xml

Or, since svFSIplus uses OpenMPI for multicore:

apptainer run /project/engineering/svfsiplus/svfsiplus.sif mpirun -n 32 svFSIplus svFSI.xml

MPI can be used in two different ways in a container. This method, calling mpirun inside the container, makes svFSIplus run across 32 cores, but only on one node. To have MPI work across nodes with Apptainer containers, mpirun must be executed outside the container. We do not cover that subject here.

That same executable line can be used in job script submissions in place of however you'd usually run your executable. That holds true for compiled binaries, containerized Python environments, and most everything else.

Sample Job File

#BSUB -o svfsiplus.%J
#BSUB -N
#BSUB -J svFSIplusJob
#BSUB -R '(!gpu)'
#BSUB -R "span[hosts=1]"
#BSUB -R "rusage[mem=128]"
#BSUB -q cpu-compute
#BSUB -n 32

apptainer run /project/engineering/svfsiplus/svfsiplus.sif mpirun -n $LSB_DJOB_NUMPROC svFSIplus svFSI.xml

The above file runs a job that:

  • Puts output in a file named svfsiplus.123456, where 123456 is the assigned job number (-o)

  • Sends an email when the job is done (-N)

  • Names the job svFSIplusJob in "bjobs" output (-J)

  • Selects a node with 128GB of available RAM, avoiding hosts with GPUs and keeping all the slots on a single host (-R)

  • Selects a node on the cpu-compute queue (-q)

  • Selects a node with 32 free cores (-n)

It uses the environment variable $LSB_DJOB_NUMPROC so that you only have to change the number of cpu slots requested in one place, at the top of the file in the #BSUB -n line.
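
Assuming the job file above is saved as, say, svfsiplus.bsub (the name is just an example), it is submitted to LSF like any other job script:

bsub < svfsiplus.bsub    # LSF reads the #BSUB directives from the file and queues the job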
