I am trying to develop an optimal parallel solver for Stokes, using the block formulation from the documentation as a basis. I have modified the original code to use the block preconditioner in the script below (in the stabilized formulation, P1xP1).
This works, and iteration counts are indeed optimal. Still, when I increase the number of processors (from 2 to 16), execution times don't change at all. I suspect that bddc does not support parallel MPI execution, so I tested HYPRE, which works fine on the Poisson problem. My issue is that if I try to do the same here with
Amg = Preconditioner(a, "hypre")
then the code freezes and never gets beyond printing the HYPRE parameters. Any idea of what could be happening would be great.
from ngsolve import *
from netgen.occ import *
import netgen.meshing
from mpi4py import MPI

comm = MPI.COMM_WORLD

# Geometry: channel with a circular obstacle
shape = Rectangle(2,0.41).Circle(0.2,0.2,0.05).Reverse().Face()
shape.edges.name = "wall"
shape.edges.Min(X).name = "inlet"
shape.edges.Max(X).name = "outlet"
geo = OCCGeometry(shape, dim=2)

# rank 0 generates the mesh and distributes it, all other ranks receive their part
if comm.rank == 0:
    mesh = Mesh(geo.GenerateMesh(maxh=0.05).Distribute(comm))
else:
    mesh = Mesh(netgen.meshing.Mesh.Receive(comm))

V = VectorH1(mesh, order=2, dirichlet="wall|inlet|cyl")
Q = H1(mesh, order=1)
u,v = V.TnT()
p,q = Q.TnT()

a = BilinearForm(InnerProduct(Grad(u),Grad(v))*dx, symmetric=True)        # velocity block
b = BilinearForm(div(u)*q*dx).Assemble()                                  # divergence constraint
h = specialcf.mesh_size
c = BilinearForm(-0.1*h*h*grad(p)*grad(q)*dx, symmetric=True).Assemble()  # pressure stabilization
mp = BilinearForm(p*q*dx, symmetric=True)                                 # pressure mass matrix

f = LinearForm(V).Assemble()   # zero right-hand sides
g = LinearForm(Q).Assemble()

gfu = GridFunction(V, name="u")
gfp = GridFunction(Q, name="p")
uin = CoefficientFunction( (1.5*4*y*(0.41-y)/(0.41*0.41), 0) )
gfu.Set(uin, definedon=mesh.Boundaries("inlet"))

# register the preconditioners before assembling the corresponding forms
Qjacobi = Preconditioner(mp, "local")
Amg = Preconditioner(a, "bddc")
a.Assemble()
mp.Assemble()

K = BlockMatrix( [ [a.mat, b.mat.T], [b.mat, c.mat] ] )
C = BlockMatrix( [ [Amg.mat, None], [None, Qjacobi.mat] ] )
rhs = BlockVector( [f.vec, g.vec] )
sol = BlockVector( [gfu.vec, gfp.vec] )
solvers.MinRes(mat=K, pre=C, rhs=rhs, sol=sol, printrates=comm.rank==0, initialize=False, maxsteps=500)
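For reference, I launch this with a standard MPI run, e.g. mpirun -np 16 python3 stokes.py, where stokes.py is the script above.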
This code does not work if I change "bddc" to "hypre". I tried hypre on a Poisson example and it runs fine (only in parallel, np > 1), but using it block-wise here fails as described above.
Ah, I think this hypre binding only works for the scalar H1 space, not the vector-valued one. I think a better way here is to wrap the matrix to PETSc and use the preconditioners available there (hypre is also available through PETSc).
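Concretely, in your script it should amount to two changes (just a sketch, assuming a current ngsPETSc; as far as I know, importing ngsPETSc.pc registers the "PETScPC" preconditioner with NGSolve, and the pc_type option is forwarded to PETSc, so "hypre" needs a PETSc build with hypre enabled):

from ngsPETSc import pc   # at the top of the script; registers "PETScPC" with NGSolve

# replace Amg = Preconditioner(a, "bddc") by
Amg = Preconditioner(a, "PETScPC", pc_type="hypre")

The rest (BlockMatrix, MinRes) should work unchanged, since Amg.mat is still an ordinary NGSolve BaseMatrix.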
This looks good! Apparently this PETScPC preconditioner is not loaded by default, and there are no instructions anywhere on how to load ngsPETSc. Are these docs still working: ngsPETSc — ngsPETSc 0.0.1 documentation?
But I think they are exactly the same as in your link.
Basically, from the NGSolve side it is just a Python wrapper. Most of the instructions are about how to build PETSc and NGSolve with MPI (which you obviously already have).
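If you want to test the setup independently of the Stokes script, a quick Poisson smoke test would be something like this (again a sketch under the same assumptions about the ngsPETSc API; the pip package name ngsPETSc is also an assumption on my side):

# pip install ngsPETSc   (needs an MPI-enabled PETSc/petsc4py underneath)
from ngsolve import *
from netgen.geom2d import unit_square
from ngsPETSc import pc   # registers the "PETScPC" preconditioner

mesh = Mesh(unit_square.GenerateMesh(maxh=0.1))
fes = H1(mesh, order=1, dirichlet=".*")
u, v = fes.TnT()
a = BilinearForm(grad(u)*grad(v)*dx, symmetric=True)
pre = Preconditioner(a, "PETScPC", pc_type="hypre")   # hypre through PETSc
a.Assemble()
f = LinearForm(v*dx).Assemble()
gfu = GridFunction(fes)
solvers.CG(mat=a.mat, pre=pre.mat, rhs=f.vec, sol=gfu.vec, printrates=True)

If that converges with mesh-independent iteration counts, the same "PETScPC" wrapper should drop into the velocity block of your Stokes code.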