MPI

SalvusCompute requires MPI. Our distributions ship with the required MPI binaries and shared libraries, and we recommend using these on small single-node workstations. local and ssh site types should use the MPI offered by the Mondaic Downloader. SalvusFlow is aware of the folder structure the Downloader creates, so no further configuration is necessary.
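
For example, a minimal local site configuration could look like the following sketch. The site name, rank counts, and paths are placeholders; the keys follow the usual SalvusFlow site schema. Note that no MPI-related settings appear, as the bundled MPI is picked up automatically:

[sites.my_workstation]
    site_type = "local"
    # Number of MPI ranks to use by default and at most (placeholder values).
    default_ranks = 2
    max_ranks = 4
    salvus_binary = "/path/to/salvus"
    run_directory = "/path/to/run"
    tmp_directory = "/path/to/tmp"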

Large HPC clusters, on the other hand, tend to have their own custom MPI distributions. Our packages dynamically link against any MPI implementation that follows the MPI ABI Compatibility Initiative, of which most MPI vendors are a part.

The only widely used MPI implementation that cannot be used with Salvus is OpenMPI, which is not part of the ABI Compatibility Initiative and is therefore not binary compatible with Salvus. Most clusters should offer an alternative.

For this to work, two manual steps might be required:

  1. Load an ABI-compatible MPI module in the site's SalvusFlow config, e.g. modules_to_load = ["xxx-mpich-abi"] (see the combined example below).
  2. The loaded module should already set the correct $LD_LIBRARY_PATH. If it does not, set it manually, again in the SalvusFlow site config:
[[sites.some_site.environment_variable]]
    name = "LD_LIBRARY_PATH"
    value = "/path/to/some/lib/dir"

The value has to be the library directory containing the libmpi.so.12 shared library.
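
Putting both steps together, a cluster site section might look like the following sketch. The site name, module name, rank counts, and paths are placeholders, and the exact placement of modules_to_load can differ between site types, so treat this as an illustration rather than a verbatim template:

[sites.my_cluster]
    site_type = "slurm"
    default_ranks = 24
    max_ranks = 240
    salvus_binary = "/path/to/salvus"
    run_directory = "/path/to/run"
    tmp_directory = "/path/to/tmp"

    [sites.my_cluster.site_specific]
        # Step 1: load an ABI-compatible MPI module instead of OpenMPI
        # (the module name is cluster-specific).
        modules_to_load = ["xxx-mpich-abi"]

    # Step 2: only needed if the module does not already set
    # LD_LIBRARY_PATH. The path must be the directory that
    # contains libmpi.so.12.
    [[sites.my_cluster.environment_variable]]
        name = "LD_LIBRARY_PATH"
        value = "/path/to/some/lib/dir"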

Most clusters offer Intel's MPI distribution, which generally works well with Salvus.
