I have a mesh with over a thousand domains and need to create an H1 space for each. Each space only uses a small sub-mesh, but I run out of memory during allocation, especially with MPI.
What part of allocating a finite element space consumes so much memory? Any suggestions for handling many such localized spaces more efficiently?
In this example most of the dofs are actually used in the space. Ah, so your problem is the size of the space itself, not the objects (BilinearForm, GridFunctions, …) built from it? Then Compress doesn't help you, yes. What are you using the spaces for, and why do you need so many of them?
If you want to solve problems on different subparts, you can just build one space and solve by restricting the free dofs to each subdomain:
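A minimal sketch of that idea, assuming an existing mesh with material names like "domain_1", "domain_2" and a Dirichlet boundary called "outer" (all placeholder names): one global H1 space is assembled once, and each subdomain solve only restricts the free-dof BitArray.

```python
from ngsolve import *

# mesh: an existing ngsolve Mesh; "domain_1", "domain_2", "outer" are placeholder names
fes = H1(mesh, order=2, dirichlet="outer")
u, v = fes.TnT()
a = BilinearForm(grad(u)*grad(v)*dx).Assemble()
f = LinearForm(v*dx).Assemble()
gfu = GridFunction(fes)

for mat in ("domain_1", "domain_2"):                 # loop over your subdomain names
    # dofs supported on this subdomain, minus the global Dirichlet dofs
    sub_free = fes.GetDofs(mesh.Materials(mat)) & fes.FreeDofs()
    inv = a.mat.Inverse(sub_free, inverse="sparsecholesky")
    gfu.vec.data = inv * f.vec                       # solve restricted to this subdomain
```

This way only one space and one assembled matrix live in memory, no matter how many subdomains you solve on.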
I used the spaces to implement the interface resistivity with a Robin-like term on the interface:
a += alpha * (ui-uo) * (vi-vo) * ds("interface")
And I have more than 2000 interfaces ("interface_1_2", "interface_3_4" and so on).
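For one pair of subdomains, the setup around that term looks roughly like this (a sketch only, mirroring the posted term; the material names "dom1"/"dom2", the boundary name "interface_1_2", the coefficient alpha and the polynomial order are placeholders, and the mesh is assumed to exist with those regions):

```python
from ngsolve import *

# mesh: existing ngsolve Mesh with materials "dom1", "dom2" and an internal
# boundary named "interface_1_2" (all names are placeholders)
fes_i = H1(mesh, order=2, definedon="dom1")
fes_o = H1(mesh, order=2, definedon="dom2")
fes = fes_i * fes_o                          # one product space per subdomain pair
(ui, uo), (vi, vo) = fes.TnT()

alpha = 1.0                                  # assumed interface-resistivity coefficient
a = BilinearForm(fes)
a += grad(ui)*grad(vi)*dx("dom1") + grad(uo)*grad(vo)*dx("dom2")
a += alpha * (ui - uo) * (vi - vo) * ds("interface_1_2")
a.Assemble()
```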
Later, I also want to compute the solution difference at the interface. For example:
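One way this post-processing could look, reusing the placeholder names from the sketch above and a GridFunction gfu that already holds the solution of the coupled problem:

```python
from math import sqrt

# gfu: GridFunction(fes) already containing the solution
jump_cf = gfu.components[0] - gfu.components[1]      # u_i - u_o as a coefficient function
jump_L2 = sqrt(Integrate(jump_cf * jump_cf, mesh,
                         definedon=mesh.Boundaries("interface_1_2")))
print("L2 norm of the interface jump:", jump_L2)
```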
If you have thousands of domains, you may try a graph colouring: then you can use one H1 space per colour, defined on the union of the subdomains of that colour, e.g. along the lines of the sketch below.
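A rough sketch of how such a colouring could be set up, assuming the boundary names follow the "interface_i_j" pattern mentioned above and the materials are called "domain_<i>" (the material naming is a placeholder, and an existing mesh is assumed):

```python
import re
from ngsolve import H1, Compress

# build subdomain adjacency from the interface names "interface_i_j"
adjacency = {}                                    # subdomain index -> neighbouring indices
for bnd in mesh.GetBoundaries():
    m = re.fullmatch(r"interface_(\d+)_(\d+)", bnd)
    if m:
        i, j = int(m.group(1)), int(m.group(2))
        adjacency.setdefault(i, set()).add(j)
        adjacency.setdefault(j, set()).add(i)

# greedy colouring: neighbouring subdomains get different colours
colour = {}
for dom in sorted(adjacency):
    used = {colour[n] for n in adjacency[dom] if n in colour}
    colour[dom] = next(c for c in range(len(adjacency) + 1) if c not in used)

# one H1 space per colour, defined on the union of its subdomains
# ("domain_<i>" is a placeholder; Compress drops the unused dofs outside the union)
spaces = {}
for c in set(colour.values()):
    mats = "|".join(f"domain_{d}" for d in sorted(colour) if colour[d] == c)
    spaces[c] = Compress(H1(mesh, order=2, definedon=mesh.Materials(mats)))
```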