OpenFOAM

Versions available

OpenFOAM version   Compute nodes   Pre/post-processing nodes   Compilation instructions
2.1.1              no              no                          yes
2.1.X              no              no                          yes
2.2.2              yes             yes                         yes
2.3.0              yes             no                          no
2.4.0              yes             no                          yes
3.0.1              yes             no                          yes
4.1                yes             yes                         yes

It is recommended that you use the latest version. Good reasons to continue using an old version are

  • you are halfway through a long series of simulations and need stability more than correctness
  • you have validated the old version against experimental results and are now doing simulations using that validated version
  • you have developed code against that version — you should look at moving it to a later version when possible
  • a bug has been introduced in a later version (an example was the parallel version of mapFields in 2.3.x and 2.4.x: the serial version from 2.2.x is now used in OpenFOAM 3.0.x) — please report the bug to OpenFOAM
  • a later version is not backwards compatible with your cases — you should look at modifying your case files

Compute nodes have MPI and increased vectorisation (AVX); use them for

  • parallel solvers (e.g. icoFoam)
  • parallel utilities (e.g. snappyHexMesh)
  • serial utilities as part of a job (e.g. blockMesh)

Pre/post-processing nodes have large amounts of memory; use them for

  • pre- and post-processing that requires large amounts of memory (e.g. reconstructPar)

Using OpenFOAM

Some environment variables need to be set using the OpenFOAM setup script, and then you can use OpenFOAM as normal. The FOAM_INST_DIR and WM_PROJECT_SITE variables are unset so as to use the default values in the OpenFOAM setup script.

2.2.2

module swap PrgEnv-cray PrgEnv-gnu
unset FOAM_INST_DIR WM_PROJECT_SITE
source /work/y07/y07/cse/OpenFOAM/OpenFOAM-2.2.2/etc/bashrc

2.2.2 for the pre/post-processing nodes

module swap PrgEnv-cray PrgEnv-gnu
unset FOAM_INST_DIR WM_PROJECT_SITE
source /work/y07/y07/cse/OpenFOAM/SerialNodes/OpenFOAM-2.2.2/etc/bashrc

Note that the pre/post-processing node version of OpenFOAM does not have parallel functionality enabled; it is designed for pre- and post-processing of data with OpenFOAM applications.

2.3.0

module swap PrgEnv-cray PrgEnv-gnu
unset FOAM_INST_DIR WM_PROJECT_SITE
source /work/y07/y07/cse/OpenFOAM/OpenFOAM-2.3.0/etc/bashrc

2.4.0

module load openfoam/2.4.0
unset FOAM_INST_DIR WM_PROJECT_SITE
source $OPENFOAM_DIR/OpenFOAM-2.4.0/etc/bashrc

The Gnu programming environment is loaded in the openfoam/2.4.0 module. Please read the help information

module help openfoam/2.4.0

You can scroll through this with

module help openfoam/2.4.0 2>&1 | less

3.0.1

module swap PrgEnv-cray PrgEnv-gnu
unset FOAM_INST_DIR WM_PROJECT_SITE
source /work/y07/y07/cse/OpenFOAM/OpenFOAM-3.0.1/etc/bashrc

4.1

module swap PrgEnv-cray PrgEnv-gnu
source /work/y07/y07/cse/OpenFOAM/OpenFOAM-4.1/etc/bashrc

4.1 for the pre/post-processing nodes

module swap PrgEnv-cray PrgEnv-gnu
source /work/y07/y07/cse/OpenFOAM/SerialNodes/OpenFOAM-4.1/etc/bashrc

Note that the pre/post-processing node version of OpenFOAM does not have parallel functionality enabled; it is designed for pre- and post-processing of data with OpenFOAM applications.

Usage notes

Programming environment

All the versions of OpenFOAM that are installed use the Gnu compiler, so the programming environment needs to be PrgEnv-gnu. Any applications or libraries that you build must also be built with the PrgEnv-gnu programming environment.

aprun

The ARCHER command to start an MPI program on the compute nodes is aprun, so run your parallel OpenFOAM simulations using, for example

aprun -n 2400 icoFoam -parallel &> icofoam.log

Use aprun also for serial utilities that are run as part of a job on the compute nodes, for example

aprun -n 1 decomposePar &> decompose.log

/work

All files need to be on /work to be accessible from the compute nodes:

  • case directories and files (FOAM_RUN)
  • user applications (FOAM_USER_APPBIN)
  • user libraries (FOAM_USER_LIBBIN)

/work is not backed up:

  • Use a version control system (subversion, git) and backed-up repository for your source files (applications and libraries).
  • When working on a case, mirror your dictionaries and constant data to the RDF using rsync.
  • You may want to mirror the case results as well. OpenFOAM produces large numbers of files for the case results, so tar these to create one large tar file and copy that to the RDF (large files are better for the RDF backup process: the RDF backup slows down when large numbers of files have been newly copied to the RDF).
  • When you have finished working on a case and want to archive it, tar the case directory to create one large tar file and copy that to the RDF or to your home institution. If you mirrored intermediate case results, you can usually remove those from the RDF since the results will be in the case directory on /work. Once the backup from RDF to tape has completed (this can take 2 days: check with the ARCHER Helpdesk if there are any delays) you can delete the case directory on /work.
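The archiving step above can be sketched as follows. The case name demoCase and the destination path are placeholders (a throwaway directory is created here so the commands run anywhere); substitute your own case directory on /work and your own RDF directory.

```shell
# Stand-in for a real case directory (placeholder only)
mkdir -p demoCase/0 demoCase/constant demoCase/system

# Create one large tar file from the whole case directory
tar -czf demoCase.tar.gz demoCase
ls -l demoCase.tar.gz

# Then copy the single tar file to the RDF, for example:
# rsync -av demoCase.tar.gz /path/to/your/rdf/directory/
```

Copying the single tar file is friendlier to the RDF backup process than copying the many small files inside the case directory.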

OpenFOAM and Lustre

Each process in an OpenFOAM parallel simulation writes one file for each output field at each output time:

number of files = number of output fields x number of output times x number of processes

which can quickly lead to large numbers of small files. Some users of OpenFOAM on ARCHER have produced millions of files in the course of a project.
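As an illustration with made-up numbers, a simulation writing 10 output fields at 100 output times on 240 processes produces:

```shell
fields=10   # output fields (illustrative value)
times=100   # output times (illustrative value)
procs=240   # processes (illustrative value)
echo $((fields * times * procs))   # prints 240000
```

a quarter of a million files from a single run.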

/work is a Lustre file system. Lustre is optimised for reading and writing small numbers of large files (Configuring the Lustre /work file system):

  • opening and closing large numbers of files can be slow
  • large numbers of processes reading or writing files can contend for access to the file system

Currently (even in OpenFOAM 3.0.1) there is no general OpenFOAM output method that combines the output into one file (or one file per output time), so here are some suggestions to improve the read/write performance. You should measure performance before and after any change to see if there has been any improvement.

(There is a contributed HDF5 library in the OpenFOAM wiki which may be useful for particular applications. There are some limitations: the library currently is for OpenFOAM 2.2.x and does not write out boundary conditions, polyhedra, or restart information. The HDF5 library has not been installed or tested on ARCHER.)

  • Set the stripe count of your OpenFOAM user directory to 1 (this does not require any change in your OpenFOAM configuration). For example
    lfs setstripe -c 1 /work/z01/z01/mjf/OpenFOAM/mjf-2.4.0
    
    The stripe count is inherited by all files and directories that you create in or copy to that directory. If you create case directories in that directory they will have stripe count 1, and so will all the OpenFOAM output files in the case. Note that files that are moved using mv keep the stripe count they originally had.
  • If you find that reading and writing files takes a significant fraction of your job time, you can change the input and/or output settings in controlDict. Some of these suggestions may not be possible, for example you may need to output fields with high time resolution to analyse a process.
    • Increase writeInterval
    • Use binary format for the fields: writeFormat binary
    • For steady-state solutions, overwrite the output at each output time: purgeWrite 1
    • Don't read dictionaries at every time-step (only one process reads the dictionaries, so this should have a small effect): runTimeModifiable no
  • OpenFOAM is dynamically linked and has dynamically loaded libraries (libs and functionObjectLibs) and run-time code compilation (codeStream). Each process opens these shared objects (.so) and reads (via mmap) parts of an object as they are needed, for example when a function is called. Some of these shared objects are on /work so there can be many accesses to many small files, which may be slow. If you find that starting up an OpenFOAM simulation takes a large fraction of the run time, the DLFM package may help, please contact the ARCHER Helpdesk for assistance.
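Taken together, the controlDict suggestions above might look like the following fragment (the writeInterval value is illustrative, and not every setting will suit every case):

```
// system/controlDict (fragment, illustrative values)
writeControl      timeStep;
writeInterval     500;     // write less frequently
writeFormat       binary;  // binary instead of ascii
purgeWrite        1;       // steady state: keep only the most recent output time
runTimeModifiable no;      // do not re-read dictionaries every time step
```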

ParaView

There is a centrally-installed version of ParaView (module load paraview), so OpenFOAM has not been built with ParaView. This means that the file readers and user interface panel provided by OpenFOAM are not available — use ParaView's built-in versions. In ParaView's built-in versions you cannot view patch names, but you can load decomposed cases directly.
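To load a case with ParaView's built-in OpenFOAM reader, the usual approach is to create an empty file whose name ends in .foam in the case directory and open that file from ParaView (the base name, here myCase, is arbitrary):

```shell
# Run in the case directory: create the empty dummy file that
# ParaView's built-in OpenFOAM reader recognises
touch myCase.foam
ls myCase.foam   # prints myCase.foam
```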

There are four ways to split the ParaView visualisation between ARCHER and your desktop, each with a different balance between convenience and performance.

  • Transfer the results to your desktop and run ParaView there: rendering fast (if you have a good graphics card), user interface fast.
  • Run pvserver in parallel on the compute nodes with a ParaView client on your desktop (using ParaView on the compute nodes can be difficult to arrange for a particular time): rendering fast (uses kAUs), user interface fast.
  • Run pvserver on a post-processing node with a ParaView client on your desktop: rendering slow, user interface fast.
  • Run ParaView on a post-processing node with an X connection to your desktop: rendering slow, user interface slow.

Compiling OpenFOAM

Compiling OpenFOAM takes 9 hours: do you really need to compile the whole of OpenFOAM? If you are modifying an application or library, you can

  • set up the installed version of OpenFOAM as normal (Using OpenFOAM)
  • copy the application or library source directory to your user OpenFOAM directory $WM_PROJECT_USER_DIR (this must be on /work)
  • make your modifications
  • compile the new application or library

See Section 3.2 of the OpenFOAM User Guide for details.

If you do need to compile the whole of OpenFOAM, below are links to pages that outline how we built OpenFOAM 2.2.2 on ARCHER, how we suggest you build OpenFOAM 2.1.X versions on ARCHER, how to build OpenFOAM 2.4.0 on ARCHER, and how we built OpenFOAM 3.0.1 on ARCHER:

Sample job submission script for OpenFOAM

A sample job submission script for OpenFOAM 2.2.2 is

#!/bin/bash --login

#PBS -N job_name
#PBS -l select=number_of_nodes
#PBS -l walltime=23:00:00
#PBS -A your_budget_code
#PBS -q queue_name
#PBS -m abe

module swap PrgEnv-cray PrgEnv-gnu
unset FOAM_INST_DIR WM_PROJECT_SITE
source /work/y07/y07/cse/OpenFOAM/OpenFOAM-2.2.2/etc/bashrc

aprun -n 24 solvername -parallel

where you would replace solvername with the specific OpenFOAM executable you want to use. You can see which OpenFOAM executables have been installed with

ls $FOAM_APPBIN

Scripts for other versions of OpenFOAM are similar, using the version specific environment setup shown in Using OpenFOAM. Version 2.4.0 has another example in $OPENFOAM_DIR/example-OpenFOAM-2.4.0.bash (available after loading the openfoam/2.4.0 module).