Building NGSolve with HYPRE support

Hello!

I want to experiment with some different preconditioners. The “hypre” and “hypre_ams” preconditioners are mentioned somewhere in the documentation, and I found in a fairly recent GitHub commit how to configure cmake with hypre (cmake -DUSE_HYPRE=ON ../ngsolve-src).

After pulling the most recent NGSolve version from GitHub (version NGSolve-6.2.2001-6-g8bbe2629), this command runs fine and downloads hypre, but the subsequent make fails with the following error:

[code]
[ 62%] Building CXX object comp/CMakeFiles/ngcomp.dir/bddc.cpp.o
ngsolve/ngsolve-src/comp/bddc.cpp:317:28: error: reference to 'MPI_Op' is ambiguous
  AllReduceDofData (weight, MPI_SUM, fes->GetParallelDofs());
                            ^
/usr/local/include/mpi.h:1130:40: note: expanded from macro 'MPI_SUM'
#define MPI_SUM OMPI_PREDEFINED_GLOBAL(MPI_Op, ompi_mpi_op_sum)
                                       ^
/usr/local/include/mpi.h:406:27: note: candidate found by name lookup is 'MPI_Op'
typedef struct ompi_op_t *MPI_Op;
                          ^
/Applications/Netgen.app/Contents/Resources/include/core/mpi_wrapper.hpp:265:15: note: candidate found by name lookup is 'ngcore::MPI_Op'
  typedef int MPI_Op;
              ^
ngsolve/ngsolve-src/comp/bddc.cpp:317:28: error: static_cast from 'void *' to 'ngcore::MPI_Op' (aka 'int') is not allowed
  AllReduceDofData (weight, MPI_SUM, fes->GetParallelDofs());
                            ^~~~~~~
/usr/local/include/mpi.h:1130:17: note: expanded from macro 'MPI_SUM'
#define MPI_SUM OMPI_PREDEFINED_GLOBAL(MPI_Op, ompi_mpi_op_sum)
                ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
/usr/local/include/mpi.h:381:47: note: expanded from macro 'OMPI_PREDEFINED_GLOBAL'
#define OMPI_PREDEFINED_GLOBAL(type, global) (static_cast<type> (static_cast<void *> (&(global))))
                                              ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
2 errors generated.
[/code]
Any idea how to get around this compile error?

It looks like you did not turn on MPI (-DUSE_MPI=ON). This is needed by hypre.

We should throw an error when MPI is turned off and hypre is turned on.
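
For reference, the configure step should then look something like this (source path as in the first post; just a sketch, your build and install prefixes may differ):

[code]
cmake -DUSE_MPI=ON -DUSE_HYPRE=ON ../ngsolve-src
make && make install
[/code]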

Hey! Yes, perfect, that worked :slight_smile:

I installed NGSolve from source with HYPRE and MPI support on two machines, Ubuntu 18 and MacOS 10.14.
Whenever I run anything, I get the following error:

[code]
$ netgen navierstokes.py
NETGEN-6.2-dev
Developed by Joachim Schoeberl at
2010-xxxx Vienna University of Technology
2006-2010 RWTH Aachen University
1996-2006 Johannes Kepler University Linz
Including MPI version 3.1
Problem in Tk_Init: result = no display name and no $DISPLAY environment variable
optfile ./ng.opt does not exist - using default values
togl-version : 2
no OpenGL
loading ngsolve library
NGSolve-6.2.2001-11-g0929bd80
Using Lapack
Including sparse direct solver UMFPACK
Running parallel using 1 thread(s)
(should) load python file 'navierstokes.py'
loading ngsolve library
NGSolve-6.2.2001-11-g0929bd80
Using Lapack
Including sparse direct solver UMFPACK
Running parallel using 1 thread(s)
Caught SIGSEGV: segmentation fault
[/code]

I realize this is probably very difficult to debug, even for you, so I have a more general question: is there any way to get a stack trace at this point? Running it through valgrind produces another rather mysterious error:[code]
$ valgrind !!
valgrind netgen navierstokes.py
==16773== Memcheck, a memory error detector
==16773== Copyright (C) 2002-2017, and GNU GPL’d, by Julian Seward et al.
==16773== Using Valgrind-3.13.0 and LibVEX; rerun with -h for copyright info
==16773== Command: netgen navierstokes.py
==16773==
vex amd64->IR: unhandled instruction bytes: 0x62 0xF1 0x7D 0x28 0xEF 0xC0 0x83 0xFE 0x8 0xB8
vex amd64->IR: REX=0 REX.W=0 REX.R=0 REX.X=0 REX.B=0
vex amd64->IR: VEX=0 VEX.L=0 VEX.nVVVV=0x0 ESC=NONE
vex amd64->IR: PFX.66=0 PFX.F2=0 PFX.F3=0
==16773== valgrind: Unrecognised instruction at address 0x5f81400.
==16773== at 0x5F81400: __mutex_base (std_mutex.h:68)
==16773== by 0x5F81400: mutex (std_mutex.h:94)
==16773== by 0x5F81400: netgen::BlockAllocator::BlockAllocator(unsigned int, unsigned int) (optmem.cpp:20)
==16773== by 0x5D68679: __static_initialization_and_destruction_0 (localh.cpp:27)
==16773== by 0x5D68679: _GLOBAL__sub_I_localh.cpp (localh.cpp:800)
==16773== by 0x4010732: call_init (dl-init.c:72)
==16773== by 0x4010732: _dl_init (dl-init.c:119)
==16773== by 0x40010C9: ??? (in /lib/x86_64-linux-gnu/ld-2.27.so)
==16773== by 0x1: ???
==16773== by 0x1FFF00047E: ???
==16773== by 0x1FFF000485: ???
==16773== Your program just tried to execute an instruction that Valgrind
==16773== did not recognise. There are two possible reasons for this.
==16773== 1. Your program has a bug and erroneously jumped to a non-code
==16773== location. If you are running Memcheck and you just saw a
==16773== warning about a bad jump, it’s probably your program’s fault.
==16773== 2. The instruction is legitimate but Valgrind doesn’t handle it,
==16773== i.e. it’s Valgrind’s fault. If you think this is the case or
==16773== you are not sure, please let us know and we’ll try to fix it.
==16773== Either way, Valgrind will now raise a SIGILL signal which will
==16773== probably kill your program.
Caught SIGILL: illegal instruction

==16773==
==16773== HEAP SUMMARY:
==16773== in use at exit: 2,916 bytes in 62 blocks
==16773== total heap usage: 87 allocs, 25 frees, 881,786 bytes allocated
==16773==
==16773== LEAK SUMMARY:
==16773== definitely lost: 0 bytes in 0 blocks
==16773== indirectly lost: 0 bytes in 0 blocks
==16773== possibly lost: 160 bytes in 2 blocks
==16773== still reachable: 2,756 bytes in 60 blocks
==16773== suppressed: 0 bytes in 0 blocks
==16773== Rerun with --leak-check=full to see details of leaked memory
==16773==
==16773== For counts of detected and suppressed errors, rerun with: -v
==16773== ERROR SUMMARY: 0 errors from 0 contexts (suppressed: 0 from 0)[/code]

In the meantime, I will try to check out the NGSolve repo at the commit where HYPRE support was first introduced and see if that fixes any of my problems.

Thank you for your continued support :slight_smile:

This valgrind error happens when valgrind does not recognize an instruction. That happens occasionally on newer hardware and might not be related to the segfault you are getting. Could you try gdb instead?

Also, could you try “ngspy navierstokes.py”? It looks like there is some error with the GUI.

Best,
Lukas

Hey Lukas, yeah, I figured, but I am more familiar with valgrind than gdb. Thanks for the suggestion; I will try to learn how to use it.

ngspy navierstokes.py runs :slight_smile:

Now the interesting stuff begins: when I take the preconditioner example from the documentation (2.1.1 Preconditioners in NGSolve — NGS-Py 6.2.2302 documentation) and run it with some built-in preconditioner like “local” or “h1amg”, it runs fine. When I run it with “hypre”, I get the following error:

$ ngspy precond_test.py
 Generate Mesh from spline geometry
 Boundary mesh done, np = 8
 CalcLocalH: 8 Points 0 Elements 0 Surface Elements
 Meshing domain 1 / 1
 load internal triangle rules
 Surface meshing done
 Edgeswapping, topological
 Smoothing
 Split improve
 Combine improve
 Smoothing
 Edgeswapping, metric
 Smoothing
 Split improve
 Combine improve
 Smoothing
 Edgeswapping, metric
 Smoothing
 Split improve
 Combine improve
 Smoothing
 Update mesh topology
 Update clusters
assemble VOL element 6/6
assemble VOL element 6/6
Setup Hypre preconditioner
Traceback (most recent call last):
  File "precond_test.py", line 61, in <module>
    print(SolveProblem(levels=5, precond="hypre"))
  File "precond_test.py", line 41, in SolveProblem
    a.Assemble()
netgen.libngpy._meshing.NgException: std::bad_cast
 in Assemble BilinearForm 'biform_from_py'

Unfortunately, the exception seems to be caught by Python or something before the process exits, because running it through gdb produces no stack trace. Any clues?

in the shell, run:
“gdb python3”
in gdb, run:
“set breakpoint pending on”
“break RangeException”
“run navierstokes.py”
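
If the exception turns out not to be one of our own exception types (here it ends up being a std::bad_cast), a generic catchpoint on C++ throws does the same job; a rough gdb session would be:

[code]
$ gdb python3
(gdb) set breakpoint pending on
(gdb) catch throw          # stop whenever a C++ exception is thrown
(gdb) run navierstokes.py
(gdb) bt                   # once it stops, print the backtrace
[/code]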

I cannot reproduce this error with the newest Netgen/NGSolve version.

Are you, by any chance, running on a computer with AVX512? And which compiler are you using? We recently ran into issues with gcc 9.2 and AVX512, but now this combination should throw an error.

Hey Lukas,

I don’t think this Ubuntu machine has AVX512, and I think it is running gcc 7.4.0.

So you are able to run the attached file? That’s super strange, haha.

I’m assuming you meant to type precond_test.py here; in any case, I managed to set a catchpoint on the bad_cast. The stack trace is:

Setup Hypre preconditioner

Thread 1 "python3" hit Catchpoint 2 (exception thrown), 0x00007fffee3a4ced in __cxa_throw () from /usr/lib/x86_64-linux-gnu/libstdc++.so.6
(gdb) bt
#0  0x00007fffee3a4ced in __cxa_throw () from /usr/lib/x86_64-linux-gnu/libstdc++.so.6
#1  0x00007fffee3a3a52 in __cxa_bad_cast () from /usr/lib/x86_64-linux-gnu/libstdc++.so.6
#2  0x00007fffe4b07e46 in ngcomp::HyprePreconditioner::Setup (this=this@entry=0x1cdce60, matrix=...) at /data/ngsolve/ngsolve-src/comp/hypre_precond.cpp:67
#3  0x00007fffe4b08b75 in ngcomp::HyprePreconditioner::FinalizeLevel (this=0x1cdce60, mat=0x1cddb08) at /data/ngsolve/ngsolve-src/comp/hypre_precond.cpp:56
#4  0x00007fffe482f70f in ngcomp::S_BilinearForm<double>::DoAssemble (this=0x1cdb5a0, clh=...) at /data/ngsolve/ngsolve-src/comp/bilinearform.cpp:2386
#5  0x00007fffe47df412 in ngcomp::BilinearForm::Assemble (this=0x1cdb5a0, lh=...) at /data/ngsolve/ngsolve-src/comp/bilinearform.cpp:688
#6  0x00007fffe47e06c4 in ngcomp::BilinearForm::ReAssemble (this=<optimized out>, lh=..., reallocate=<optimized out>, reallocate@entry=false) at /data/ngsolve/ngsolve-src/comp/bilinearform.cpp:840
#7  0x00007fffe4cdb4f7 in <lambda(BF&, bool)>::operator() (reallocate=false, self=..., __closure=<optimized out>) at /data/ngsolve/ngsolve-src/comp/python_comp.cpp:2090
#8  pybind11::detail::argument_loader<ngcomp::BilinearForm&, bool>::call_impl<void, ExportNgcomp(pybind11::module&)::<lambda(BF&, bool)>&, 0, 1, pybind11::gil_scoped_release> (f=..., this=0x7fffffffd5f0) at /data/ngsolve/ngsolve-install/include/pybind11/cast.h:1962
#9  pybind11::detail::argument_loader<ngcomp::BilinearForm&, bool>::call<void, pybind11::gil_scoped_release, ExportNgcomp(pybind11::module&)::<lambda(BF&, bool)>&> (f=..., this=0x7fffffffd5f0) at /data/ngsolve/ngsolve-install/include/pybind11/cast.h:1944
#10 pybind11::cpp_function::<lambda(pybind11::detail::function_call&)>::operator() (__closure=0x0, call=...) at /data/ngsolve/ngsolve-install/include/pybind11/pybind11.h:159
#11 pybind11::cpp_function::<lambda(pybind11::detail::function_call&)>::_FUN(pybind11::detail::function_call &) () at /data/ngsolve/ngsolve-install/include/pybind11/pybind11.h:137
#12 0x00007fffe4a87d47 in pybind11::cpp_function::dispatcher (self=<optimized out>, args_in=(<ngsolve.comp.BilinearForm at remote 0x7fffec328960>,), kwargs_in=0x0) at /data/ngsolve/ngsolve-install/include/pybind11/pybind11.h:624
#13 0x00000000005674fc in _PyCFunction_FastCallDict () at ../Objects/methodobject.c:231
#14 0x000000000050abb3 in call_function.lto_priv () at ../Python/ceval.c:4875
#15 0x000000000050c5b9 in _PyEval_EvalFrameDefault () at ../Python/ceval.c:3335
#16 0x0000000000508245 in PyEval_EvalFrameEx (throwflag=0,
    f=Frame 0x149e628, for file precond_test.py, line 42, in SolveProblem (h=<float at remote 0x7ffff7f703a8>, p=1, levels=5, condense=False, precond='hypre', mesh=<ngsolve.comp.Mesh at remote 0x7fffdad99bf8>, fes=<ngsolve.comp.H1 at remote 0x7fffe6222f10>, u=<ngsolve.comp.ProxyFunction at remote 0x7fffda21c678>, v=<ngsolve.comp.ProxyFunction at remote 0x7fffdadb7990>, a=<ngsolve.comp.BilinearForm at remote 0x7fffec328960>, f=<ngsolve.comp.LinearForm at remote 0x7ffff57dd848>, gfu=<ngsolve.comp.GridFunction at remote 0x7fffda21c7d8>, c=<ngsolve.comp.Preconditioner at remote 0x7ffff57dd9d0>, steps=[], l=0)) at ../Python/ceval.c:754
#17 _PyEval_EvalCodeWithName.lto_priv.1836 () at ../Python/ceval.c:4166
#18 0x000000000050a080 in fast_function.lto_priv () at ../Python/ceval.c:4992
#19 0x000000000050aa7d in call_function.lto_priv () at ../Python/ceval.c:4872
#20 0x000000000050d390 in _PyEval_EvalFrameDefault () at ../Python/ceval.c:3351
#21 0x0000000000508245 in PyEval_EvalFrameEx (throwflag=0, f=Frame 0xaeeb08, for file precond_test.py, line 64, in <module> ()) at ../Python/ceval.c:754
#22 _PyEval_EvalCodeWithName.lto_priv.1836 () at ../Python/ceval.c:4166
#23 0x000000000050b403 in PyEval_EvalCodeEx (closure=0x0, kwdefs=0x0, defcount=0, defs=0x0, kwcount=0, kws=0x0, argcount=0, args=0x0, locals=<optimized out>, globals=<optimized out>, _co=<optimized out>) at ../Python/ceval.c:4187
#24 PyEval_EvalCode (co=<optimized out>, globals=<optimized out>, locals=<optimized out>) at ../Python/ceval.c:731
#25 0x0000000000635222 in run_mod () at ../Python/pythonrun.c:1025
#26 0x00000000006352d7 in PyRun_FileExFlags () at ../Python/pythonrun.c:978
#27 0x0000000000638a8f in PyRun_SimpleFileExFlags () at ../Python/pythonrun.c:419
#28 0x0000000000638c65 in PyRun_AnyFileExFlags () at ../Python/pythonrun.c:81
#29 0x0000000000639631 in run_file (p_cf=0x7fffffffe0dc, filename=<optimized out>, fp=<optimized out>) at ../Modules/main.c:340
#30 Py_Main () at ../Modules/main.c:810
#31 0x00000000004b0f40 in main (argc=2, argv=0x7fffffffe2d8) at ../Programs/python.c:69

So it looks like the BaseMatrix &matrix is not actually a ParallelMatrix?

Attachment: precond_test.py
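
In case the attachment does not come through: the script is essentially the SolveProblem example from the documentation, roughly like this (paraphrased, details may differ slightly from the actual attachment):

[code]
from ngsolve import *
from netgen.geom2d import unit_square

def SolveProblem(h=0.5, p=1, levels=1, condense=False, precond="local"):
    mesh = Mesh(unit_square.GenerateMesh(maxh=h))
    fes = H1(mesh, order=p, dirichlet="bottom|left")
    u, v = fes.TnT()
    a = BilinearForm(fes, condense=condense)
    a += grad(u)*grad(v)*dx
    f = LinearForm(fes)
    f += v*dx
    gfu = GridFunction(fes)
    c = Preconditioner(a, precond)   # "local" and "h1amg" work, "hypre" does not
    steps = []
    for l in range(levels):
        if l > 0:
            mesh.Refine()
        fes.Update()
        gfu.Update()
        a.Assemble()                 # the bad_cast is thrown in here for "hypre"
        f.Assemble()
        inv = CGSolver(a.mat, c.mat, maxsteps=1000)
        gfu.vec.data = inv * f.vec
        steps.append((fes.ndof, inv.GetSteps()))
    return steps

print(SolveProblem(levels=5, precond="hypre"))
[/code]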

Ah, my bad, I thought your last error was still from navierstokes.py!

In this case, the problem seems to be that the hypre preconditioner ONLY works with MPI (or, more precisely, the NGSolve-hypre interface is written such that it does not work in the non-MPI case). It does not even throw an exception and exit gracefully but just crashes on the invalid cast …

Also, refining distributed meshes only works uniformly, and only before spaces are defined on the mesh.

As an additional remark, hypre only really works well for order 1 discretizations. That is typical for AMG solvers. An option would be to combine it with an additive block Jacobi for the high-order part.
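
Roughly, the additive structure I mean looks like this in a sequential setting; this is only a sketch, where a direct solve on the vertex (order-1) unknowns stands in for the spot where hypre's AMG would go, and the element-wise blocking is just one possible choice:

[code]
from ngsolve import *
from netgen.geom2d import unit_square

mesh = Mesh(unit_square.GenerateMesh(maxh=0.2))
fes = H1(mesh, order=3, dirichlet="bottom|left")
u, v = fes.TnT()
a = BilinearForm(fes)
a += grad(u)*grad(v)*dx
a.Assemble()

freedofs = fes.FreeDofs()

# "coarse" part: the vertex (order-1) unknowns (this is where the AMG belongs)
vertexdofs = BitArray(fes.ndof)
vertexdofs[:] = False
for vtx in mesh.vertices:
    for d in fes.GetDofNrs(vtx):
        vertexdofs[d] = True
vertexdofs &= freedofs
coarse = a.mat.Inverse(vertexdofs, inverse="sparsecholesky")

# high-order part: element-wise block Jacobi
blocks = [[d for d in el.dofs if d >= 0 and freedofs[d]] for el in fes.Elements()]
blockjac = a.mat.CreateBlockSmoother(blocks)

pre = coarse + blockjac   # additive combination of the two parts
[/code]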

Best, Lukas

Attachment: precond_test_2020-02-05.py

Heya,

OK, the fact that distributed meshes can only be refined uniformly does hamper my intended application. However, maybe with only one process the mesh is not really distributed? In any case, I’d like to try getting HYPRE to work, even if I can’t do adaptive refinement.

I found out how to run NGSolve with MPI (mpiexec -np N ngspy precond_test.py), but whatever N I choose, I get an error that looks like the one below. Am I not running MPI correctly? Do you have any idea?

Best,
Jan

$ mpiexec -np 1 ngspy precond_test.py
 Generate Mesh from spline geometry
 Boundary mesh done, np = 8
 CalcLocalH: 8 Points 0 Elements 0 Surface Elements
 Meshing domain 1 / 1
 load internal triangle rules
 Surface meshing done
 Edgeswapping, topological
 Smoothing
 Split improve
 Combine improve
 Smoothing
 Edgeswapping, metric
 Smoothing
 Split improve
 Combine improve
 Smoothing
 Edgeswapping, metric
 Smoothing
 Split improve
 Combine improve
 Smoothing
 Update mesh topology
 Update clusters
assemble VOL element 6/6
assemble VOL element 6/6
Setup Hypre preconditioner
Traceback (most recent call last):
  File "precond_test.py", line 65, in <module>
    print(SolveProblem(levels=5, precond="hypre"))
  File "precond_test.py", line 45, in SolveProblem
    a.Assemble()
netgen.libngpy._meshing.NgException: std::bad_cast
 in Assemble BilinearForm 'biform_from_py'

-------------------------------------------------------
Primary job  terminated normally, but 1 process returned
a non-zero exit code.. Per user-direction, the job has been aborted.
-------------------------------------------------------
--------------------------------------------------------------------------
mpiexec detected that one or more processes exited with non-zero status, thus causing
the job to be terminated. The first process to do so was:

  Process name: [[29848,1],0]
  Exit code:    1
--------------------------------------------------------------------------

The hypre interface only works with -np 2 or more. It is just programmed that way.

When you run with -np 2 or more, you need to generate the mesh on the master and then distribute it (see the file attached to my last comment).
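
The usual pattern looks roughly like this (a sketch; the attached file does the same thing, and depending on the NGSolve version the communicator may be obtained differently):

[code]
from ngsolve import *
from netgen.geom2d import unit_square
import netgen.meshing

comm = mpi_world   # MPI communicator exposed by NGSolve

if comm.rank == 0:
    # the master generates the mesh and sends the pieces to the other ranks
    ngmesh = unit_square.GenerateMesh(maxh=0.1)
    ngmesh.Distribute(comm)
else:
    # the other ranks receive their part of the mesh
    ngmesh = netgen.meshing.Mesh.Receive(comm)

mesh = Mesh(ngmesh)
[/code]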

Best,
Lukas

As a workaround, you could try to use hypre through the PETSc interface, which should also work with only one rank. The overhead should not be substantially worse.

Best,
Lukas

Hey Lukas, ah, I didn’t catch your earlier attached file. Thank you very much; I now have something that works more or less. I will play around with the PETSc interface.

For now, all my questions have been answered!!

Hiya,

I shelved the MPI project for a while, but am back experimenting again. I cloned the latest NGSolve sources, compiled with -DUSE_MPI=ON -DUSE_HYPRE=ON, and ran Lukas’ attached precond_test_2020-02-05.py as mpirun -np 4 ngspy precond_test_2020-02-05.py. I get a segfault, but this time it does give me a backtrace:

#1       at std::__1::basic_stringbuf<char, std::__1::char_traits<char>, std::__1::allocator<char> >::str() const (in libngcore.dylib) (/Library/Developer/CommandLineTools/usr/include/c++/v1/new:252)
#2       at 0x0000a3ac (in libsystem_platform.dylib)
#3
#4       at HYPRE_IJMatrixGetValues (in libngcomp.dylib) + 125
#5       at HYPRE_IJMatrixAddToValues2 (in libngcomp.dylib) + 131
#6       at ngcomp::HyprePreconditioner::Mult(ngla::BaseVector const&, ngla::BaseVector&) const (in libngcomp.dylib) (/Applications/Netgen.app/Contents/Resources/include/core/profiler.hpp:157)
#7       at ngcomp::HyprePreconditioner::Mult(ngla::BaseVector const&, ngla::BaseVector&) const (in libngcomp.dylib) (/Applications/Netgen.app/Contents/Resources/include/core/mpi_wrapper.hpp:72)
#8       at ngcomp::S_BilinearForm<std::__1::complex<double> >::AddMatrixTP(std::__1::complex<double>, ngla::BaseVector const&, ngla::BaseVector&, ngcore::LocalHeap&) const (in libngcomp.dylib) (/Library/Developer/CommandLineTools/usr/include/c++/v1/memory:2153)
#9       at ngcomp::BilinearForm::Assemble(ngcore::LocalHeap&) (in libngcomp.dylib) (/Library/Developer/CommandLineTools/usr/include/c++/v1/__locale:234)
#10
#11      at std::__1::vector<bool, std::__1::allocator<bool> >::resize(unsigned long, bool) (in libngcomp.dylib) (/Library/Developer/CommandLineTools/usr/include/c++/v1/__bit_reference:0)
#12      at _PyObject_CallFunctionVa (in Python) + 0
#13      at _PyFunction_FastCallKeywords (in Python) + 267
#14      at call_function (in Python) + 1051
#15      at convertitem (in Python) + 8476
#16      at compiler_enter_scope (in Python) + 242
#17      at PyObject_Call (in Python) + 114
#18      at _PyEval_EvalCodeWithName (in Python) + 675
#19      at convertitem (in Python) + 9379
#20      at compiler_enter_scope (in Python) + 242
#21      at PyEval_EvalFrame (in Python) + 11
#22      at PyRun_StringFlags (in Python) + 100
#23      at PyRun_InteractiveOneObject (in Python) + 11
#24      at ScandirIterator_exit (in Python) + 0
#25      at Py_GetArgcArgv (in Python) + 19
#26      at dyld3::OverflowSafeArray<dyld3::closure::Image::BindPattern, 4294967295ul>::growTo(unsigned long) (in libdyld.dylib) + 56

Any ideas?

I cannot reproduce this issue on any machine I tried it on.

Can you tell me what OS and compiler (+version) you are using?

Best,
Lukas

$ g++ --version
Configured with: --prefix=/Applications/Xcode.app/Contents/Developer/usr --with-gxx-include-dir=/Applications/Xcode.app/Contents/Developer/Platforms/MacOSX.platform/Developer/SDKs/MacOSX10.15.sdk/usr/include/c++/4.2.1
Apple clang version 11.0.0 (clang-1100.0.33.17)
Target: x86_64-apple-darwin18.5.0
Thread model: posix
InstalledDir: /Applications/Xcode.app/Contents/Developer/Toolchains/XcodeDefault.xctoolchain/usr/bin

$ sw_vers -productVersion
10.14.4

What is kind of strange, though, is that ngspy (and the entirety of the Netgen.app) seems to be linked with Python 3.7,

$ ngspy
Python 3.7.4 (v3.7.4:e09359112e, Jul  8 2019, 14:54:52)

$ python3
Python 3.8.5 (v3.8.5:580fbb018f, Jul 20 2020, 12:11:27)

even though I followed the instructions on the website to install Python 3.8 (the two versions now live side by side, though). Could this be a source of problems?

BTW, when I run mpirun -np 4 ngspy mpi_poisson.py with “hypre”, I get the same problem. When I use the “bddc” preconditioner (and masterinverse), it runs OK. When I substitute ngspy with python3.7, things also run fine, but with python3 (i.e. python3.8), I again get a segfault, though at a much earlier stage. The stack trace is:

$ mpirun -np 4 python3.8 mpi_poisson.py
Caught SIGSEGV: segmentation fault
Collecting backtrace...
Caught SIGSEGV: segmentation fault
Collecting backtrace...
Caught SIGSEGV: segmentation fault
Collecting backtrace...
Caught SIGSEGV: segmentation fault
Collecting backtrace...
#1       at std::__1::basic_stringbuf<char, std::__1::char_traits<char>, std::__1::allocator<char> >::str() const (in libngcore.dylib) (/Library/Developer/CommandLineTools/usr/include/c++/v1/new:252)
#2       at 0x0000a3ac (in libsystem_platform.dylib)
#3       at we_askshell.cmd (in libsystem_c.dylib) + 974
#4       at 0x00005ede (in libngpy.so)
#5       at PyInit_libngpy (in libngpy.so) (<ngsolve dir>/ngsolve-src/external_dependencies/netgen/ng/netgenpy.cpp:33)
#6       at _PyWideStringList_Copy (in Python) + 47
#7       at _imp__fix_co_filename (in Python) + 64
#8       at cfunction_vectorcall_FASTCALL_KEYWORDS (in Python) + 112
#9       at PyVectorcall_Call (in Python) + 260
#10      at PyErr_SetFromErrnoWithFilenameObjects (in Python) + 100
#11      at PyCodec_ReplaceErrors (in Python) + 559
#12      at _PyMethodDef_RawFastCallDict (in Python) + 331
#13      at call_function (in Python) + 1087
#14      at context_tp_new (in Python) + 19
#15      at _PyFunction_Vectorcall (in Python) + 19
#16      at call_function (in Python) + 1087
#17      at context_tp_richcompare (in Python) + 93
#18      at _PyFunction_Vectorcall (in Python) + 19
.....
#88      at Py_BytesMain (in Python) + 62

In principle, python3.7 also works. However, compiling NGSolve with python3.7 and then running it with 3.8 can absolutely lead to problems.
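
If you want to be sure which interpreter the build picks up, you can point cmake at it explicitly when configuring; the variable name below is the usual Python-detection convention and is an assumption on my side, so adjust it if your cmake complains:

[code]
cmake -DPYTHON_EXECUTABLE=$(which python3.8) -DUSE_MPI=ON -DUSE_HYPRE=ON ../ngsolve-src
[/code]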

I will try it with clang and get back to you.

Best,
Lukas

Thanks a lot :slight_smile: If you can’t reproduce it with clang either, I can move to a Linux machine.

I tried it with clang-11 now, still no problem here.

I am now thinking that it might be connected to the Fortran compiler (MUMPS is a Fortran library). What are you using there?

Best,
Lukas