Inexact Block preconditioning

Hi community, first post here.

I am trying to develop an optimal parallel solver for Stokes, using the block formulation from the documentation as a basis. I have modified the original code to use the following block preconditioner (in the stabilized formulation, P1xP1):

Qjacobi = Preconditioner(mp, "local")
Amg = Preconditioner(a, "bddc")
a.Assemble()
mp.Assemble()
K = BlockMatrix( [ [a.mat, b.mat.T], [b.mat, c.mat] ] )
C = BlockMatrix( [ [Amg.mat, None], [None, Qjacobi.mat] ] )

This works, and iteration counts are indeed optimal. Still, when I increase the number of processors (from 2 to 16), execution times don’t change at all. I suspect that bddc does not support parallel MPI execution, so I tested HYPRE, which works fine on the Poisson problem. My issue is that if I try to do the same here with

Amg = Preconditioner(a, "hypre")

then the code freezes and never gets beyond printing the HYPRE parameters. Any idea of what could be happening would be greatly appreciated.

Thanks!

Hi, can you provide a minimal example?

Sure, here it is:

from ngsolve import *
from netgen.occ import *
import netgen.meshing   # needed for netgen.meshing.Mesh.Receive on ranks > 0
from mpi4py import MPI

comm = MPI.COMM_WORLD
# Geometry
shape = Rectangle(2,0.41).Circle(0.2,0.2,0.05).Reverse().Face()
shape.edges.name="wall"
shape.edges.Min(X).name="inlet"
shape.edges.Max(X).name="outlet"

geo = OCCGeometry(shape, dim=2)
if comm.rank == 0:
    mesh = Mesh(geo.GenerateMesh(maxh=0.05).Distribute(comm))
else:
    mesh = Mesh(netgen.meshing.Mesh.Receive(comm))

V = VectorH1(mesh, order=2, dirichlet="wall|inlet|cyl")
Q = H1(mesh, order=1)
X = V * Q
u,v = V.TnT()
p,q = Q.TnT()
a = BilinearForm(InnerProduct(Grad(u),Grad(v))*dx, symmetric=True)
b = BilinearForm(div(u)*q*dx).Assemble()
h = specialcf.mesh_size
c = BilinearForm(-0.1*h*h*grad(p)*grad(q)*dx, symmetric=True).Assemble()

mp = BilinearForm(p*q*dx, symmetric=True)
f = LinearForm(V).Assemble()
g = LinearForm(Q).Assemble()
gfu = GridFunction(V, name="u")
gfp = GridFunction(Q, name="p")
uin = CoefficientFunction( (1.5*4*y*(0.41-y)/(0.41*0.41), 0) )
gfu.Set(uin, definedon=mesh.Boundaries("inlet"))


# preconditioner blocks: BDDC for the velocity block, local Jacobi for the pressure mass matrix
Qjacobi = Preconditioner(mp, "local")
Amg = Preconditioner(a, "bddc")
a.Assemble() 
mp.Assemble()
f.Assemble()
g.Assemble()
K = BlockMatrix( [ [a.mat, b.mat.T], [b.mat, c.mat] ] )
C = BlockMatrix( [ [Amg.mat, None], [None, Qjacobi.mat] ] )

rhs = BlockVector ( [f.vec, g.vec] )
sol = BlockVector( [gfu.vec, gfp.vec] )

solvers.MinRes(mat=K, pre=C, rhs=rhs, sol=sol, printrates=comm.rank==0, initialize=False, maxsteps=500)

This code does not work if I change “bddc” to “hypre”. I tried using hypre with a Poisson example and it runs fine (only in parallel, n>1), but using it block-wise here fails as described above.

Thanks for looking into this!

Ah, I think this hypre binding only works for the scalar H1 space, not for vector-valued spaces. I think a better way here is to wrap the matrix to PETSc and use the preconditioners there (hypre is also available through PETSc).

A short intro to coupling with PETSc is here:
https://docu.ngsolve.org/v6.2.2103/i-tutorials/unit-5a.3-petsc/PETSc_interface.html#PETSc-preconditioner-for-NGSolve

I think what @uzerbinati has been working on might be interesting for you:

https://ngspetsc.readthedocs.io/en/latest/notebooks/Meshes.html

It should also be possible to use PETSc preconditioners inside a BlockMatrix, like you want to do now with hypre.
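
Something along these lines might work (an untested sketch: I am assuming ngsPETSc is installed, that importing its pc module is what registers the “PETScPC” preconditioner with NGSolve, and that PETSc options such as pc_type can be passed as flags):

from ngsPETSc import pc   # assumption: registers the "PETScPC" preconditioner with NGSolve

# velocity block: hypre (BoomerAMG) through PETSc instead of "bddc"
Amg = Preconditioner(a, "PETScPC", pc_type="hypre")
# pressure block: keep the local Jacobi on the mass matrix
Qjacobi = Preconditioner(mp, "local")

a.Assemble()
mp.Assemble()

# the rest of the block setup stays as in your example
K = BlockMatrix( [ [a.mat, b.mat.T], [b.mat, c.mat] ] )
C = BlockMatrix( [ [Amg.mat, None], [None, Qjacobi.mat] ] )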

Best
Christopher

I guess test_pc_auxiliary_mcs in https://github.com/NGSolve/ngsPETSc/blob/main/tests/test_pc.py might be what you are looking for; there I’m using PETSc GAMG, but you can easily use Hypre instead 🙂

This looks good! Apparently this PETScPC preconditioner is not loaded by default, and there are no instructions anywhere on how to load ngsPETSc. Are these docs still working: ngsPETSc — ngsPETSc 0.0.1 documentation?

I think these are the most recent installation instructions: https://github.com/NGSolve/ngsPETSc/blob/main/docs/src/ngsPETSc.rst

but I think they are exactly the same as in your link.

Basically, from the NGSolve side it is just a Python wrapper. Most of the instructions are about how to build PETSc and NGSolve with MPI (which you obviously already have).

The installation instructions should still be working, and are the ones you linked.
We test them bimonthly (yesterday was the last test) in our CI (here is the Docker image we produce: Docker).
For the documentation on how to use a PETSc PC, you can find all the details here: PETSc KSP and PETSc PC — ngsPETSc 0.0.1 documentation
We still need to document how to create auxiliary space preconditioners with Hypre, BDDC, MUMPS, etc. (the examples are here: https://github.com/NGSolve/ngsPETSc/blob/main/tests/test_pc.py).
Lastly, the newest feature, which is not yet documented, is that we also wrap all of PETSc’s non-linear solvers (here are the examples: https://github.com/NGSolve/ngsPETSc/blob/main/tests/test_snes.py).
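
In case it helps, here is a rough sketch of the basic usage on a scalar Poisson problem (untested as written; I am assuming that importing the pc module is what registers “PETScPC” and that PETSc options are forwarded as flags):

from ngsolve import *
from ngsPETSc import pc   # assumption: this import registers "PETScPC" with NGSolve

mesh = Mesh(unit_square.GenerateMesh(maxh=0.1))
fes = H1(mesh, order=1, dirichlet=".*")
u, v = fes.TnT()

a = BilinearForm(grad(u)*grad(v)*dx)
f = LinearForm(v*dx)

# PETSc preconditioner; pc_type can be swapped for "hypre", etc. if PETSc was built with it
pre = Preconditioner(a, "PETScPC", pc_type="gamg")

a.Assemble()
f.Assemble()

gfu = GridFunction(fes)
solvers.CG(mat=a.mat, pre=pre.mat, rhs=f.vec, sol=gfu.vec, printrates=True)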

Thank you both for the pointers. I think this mostly solves my issues, so this thread can be closed (can I do it?).

The ideal way would be to wrap PETSc FieldSplit, but that would require a finer dof mapping through a DM. Creating a PETSc preconditioner for each sub-block should suffice.

Best,
NB