Compressed fes vs. Full fes - Time consumption for computation


I’m applying a model order reduction (MOR) method to reduce the number of elements that need to be evaluated for the non-linear term.
In the reference solver I use a.Apply(uFull,r) and a.AssembleLinearization(uFull).
After selecting the needed elements (and weighting them), I generate a compressed fes with only a few dofs: fesMini = Compress(fes, active_dofs=dofs)
In the reduced solver I use aMini.Apply(uMini,rMini) and aMini.AssembleLinearization(uMini).

The problem is that the Mini computations seem to take the same amount of time as (or even a bit more than) the Full computations.

Is there any way to get rid of the unneeded dofs completely?

Thanks in advance!


Ok, this is how it is implemented in NGSolve at the moment:

Even if you use Compress and only a few elements are “seen” by the remaining “active” dofs, the assembly will act as usual:
[li]it runs over all elements, takes the local finite element, [/li]
[li]computes the element matrix/vector and [/li]
[li]then writes these contributions into the corresponding location of the global matrix/vector.[/li]
Only in the last step does the compression of Compress have an effect, since only at this step the corresponding global dofs may be absent. In other words: Compress only changes the local-to-global map, not the local finite element. Hence, Compress does not prevent the assembly routine from computing element contributions, even if they do not contribute later on.

If you want to avoid computing local element matrices/vectors on certain (here: most) elements, you need to communicate this to the integrators. To do that, you can pass the argument definedonelements=ba to your integrators or DifferentialSymbols, where ba is a BitArray marking the active elements (not dofs). This should give your computation the speed-up you are looking for.

An alternative approach, which would make sense if you only have a small number of element contributions to compute, is to compute the element matrices/vectors directly from Python for the selected elements.


Thanks for the fast and clarifying answer.
I’m not sure if I understood the approach with definedonelements correctly. Where exactly do I have to add this BitArray?
As I understand it, I would add it to the SymbolicLFI and SymbolicBFI, but still use the compressed space to easily obtain GridFunctions etc. on the subset of dofs?

The alternative approach with python is not really feasible as the number of elements is still too high to do it “manually”.

Yes, if you are still using SymbolicBFI, for example, you can do:

 a += SymbolicBFI(c*grad(u)*grad(v), definedonelements=ba) 

And also yes: you can still use Compress to get a matrix only for the subset of dofs.


Just implemented everything. Works great! The MOR method now delivers the performance boost I was looking for. :)