I have been trying to run NGSolve in parallel on an HPC cluster, and I'd like to know whether it's possible to use the Launchpad download of NGSolve with MPI. I have been trying to compile it from source, both on the cluster and in containers, but have run into errors in both cases that I have been unable to fix.
The setup on HPC clusters varies a lot, so we do not offer prebuilt binaries for such environments.
Often, the default compiler on clusters is too old. Which OS and compiler were you using?
For further hints, I need your cmake configuration command and the complete command-line output.
The best attempt I've had so far is building NGSolve in a Singularity container with an Ubuntu environment. In it, I've installed the packages listed on the “Build on Linux” page, as well as openmpi-bin, libopenmpi-dev, and numpy/scipy. My cmake command is essentially the documented configure with MPI switched on.
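A minimal sketch of that shape (install prefix and source path are placeholders, and `USE_MPI` is the documented switch for MPI support; this is not my verbatim command):

```bash
# Minimal sketch of an MPI-enabled NGSolve configure inside the container.
# Paths are placeholders; USE_MPI=ON enables the MPI build of NGSolve.
git clone --recurse-submodules https://github.com/NGSolve/ngsolve.git ngsolve-src
mkdir -p ngsolve-build && cd ngsolve-build
cmake -DCMAKE_INSTALL_PREFIX=/opt/ngsolve \
      -DUSE_MPI=ON \
      ../ngsolve-src
make -j4 && make install
```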
The error message that I’ve received is quite long and I don’t know what parts are relevant, so I’ll attach a text file of the whole message. However, I think the important line is:
```
error: inlining failed in call to always_inline '__m256d _mm256_fmadd_pd(__m256d, __m256d, __m256d)': target specific option mismatch
```
I've searched for this error message myself and have seen people suggest adding flags such as `-msse4.1`, `-march=native`, `-march=nehalem`, and `-mavx` to `CMAKE_CXX_FLAGS`. I've tried these and still got the same error.
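One thing I noticed while digging: `_mm256_fmadd_pd` is an FMA intrinsic, and neither `-mavx` nor `-march=nehalem` enables FMA, which may be why those flags didn't help. A sketch of a configure that does enable it (assuming the build and compute nodes are Haswell or newer; this is a guess on my part, not a confirmed fix):

```bash
# Sketch: enable the FMA instruction set that _mm256_fmadd_pd requires.
# -march=haswell implies AVX2+FMA; alternatively pass -mavx2 -mfma explicitly.
# Assumes the target CPUs are Haswell or newer.
cmake -DCMAKE_INSTALL_PREFIX=/opt/ngsolve \
      -DUSE_MPI=ON \
      -DCMAKE_CXX_FLAGS="-march=haswell" \
      ../ngsolve-src
```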
Sorry to bother you again. Everything in the container now builds, and I've moved the image over to the HPC. I can successfully run the MPI tutorials provided with the source, but when I try to run my own program, I get segmentation faults. I'm launching it the same way as the tutorials; a placeholder version of the invocation is shown below.
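(Script name, rank count, and image name here are made up, not my exact command:)

```bash
# Placeholder invocation: the host's mpirun launching Python inside the
# Singularity image. Script name, rank count, and image name are hypothetical.
mpirun -np 4 singularity exec ngsolve.sif python3 my_program.py
```

Specifically, I get: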