FES of products of basis elements

Hey! I was wondering if there is a canonical way to create a finite element space whose basis elements are all products of basis elements of another fes. I.e. if one fes has basis {p_i}_{i=1}^n, then I need a fes whose basis elements are {p_i p_j}_{i,j=1}^n, with all products that are identically zero (which would be most of them) eliminated efficiently. Is there a straightforward way to implement this?

(Use case: covariances and diagonal operators, which generally lead to products of basis elements.)
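
To illustrate what I mean (just a toy sketch, not a proposed implementation): for 1D P1 hat functions on a uniform mesh, only neighbouring pairs give a nonzero product, so of the n^2 candidate products only O(n) survive, and each surviving product is a piecewise quadratic supported on at most two elements.

```python
import numpy as np

n = 20                                   # number of hat functions (interior nodes)
nodes = np.linspace(0.0, 1.0, n + 2)     # uniform mesh on [0, 1]

def hat(i, x):
    """P1 hat function centred at nodes[i+1], i = 0..n-1."""
    left, mid, right = nodes[i], nodes[i + 1], nodes[i + 2]
    return np.clip(np.minimum((x - left) / (mid - left),
                              (right - x) / (right - mid)), 0.0, None)

x = np.linspace(0.0, 1.0, 2001)
nonzero_pairs = [(i, j) for i in range(n) for j in range(n)
                 if np.any(hat(i, x) * hat(j, x) > 0)]

print(f"{len(nonzero_pairs)} of {n*n} products are nonzero")   # 3n - 2, not n^2
```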

defining a space like that: NO

But maybe there is another way: which operations do you need?

I see. Is there a fundamental reason why defining such a space is out of the question? Since p_i p_j = 0 for most i, j, and the product basis is still a polynomial basis of at most twice the original order, with supports no larger than those in the original space, I wouldn’t think it’s too terrible an object, apart from having to remove the new linear dependencies.

The key operation I need is computing the diagonal of a PDE solution operator. Generally speaking, if PDE solution operators are seen as integral operators with integral kernel k(x,y) (which would be a very large object to recover, i.e. the dense version of the inverse matrix), I only need to obtain k(x,x), which for most sufficiently smoothing PDEs will exist and be nice. In fact, by Mercer’s theorem, it can be written as a weighted sum of squares of eigenfunctions, which could be represented nicely in the product basis, without interpolation.

(Generally speaking, this is not going to be equivalent to the diagonal of the inverse matrix due to non-orthogonality of the FEM, with the exception of 0th-order spaces, where everything resolves nicely anyway.)
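
To spell out the last point in my notation: Mercer gives k(x,y) = sum_i λ_i φ_i(x) φ_i(y), with λ_i, φ_i the eigenpairs of the solution operator, so k(x,x) = sum_i λ_i φ_i(x)^2 is a weighted sum of squares of fes functions, i.e. exactly an element of the product space above. A small self-contained sketch of the discrete analogue (1D Poisson with P1 elements assembled by hand; all names and numbers below are purely illustrative):

```python
import numpy as np
from scipy.linalg import eigh

# 1D Poisson on (0,1), homogeneous Dirichlet, uniform P1 elements
n = 50                                                               # interior nodes
h = 1.0 / (n + 1)
K = (1.0 / h) * (2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1))   # stiffness matrix
M = (h / 6.0) * (4 * np.eye(n) + np.eye(n, k=1) + np.eye(n, k=-1))   # mass matrix

# generalized eigenproblem K Phi_i = mu_i M Phi_i, eigenvectors M-orthonormal;
# the discrete solution operator has eigenvalues 1/mu_i
mu, Phi = eigh(K, M)

def hats(xs):
    """Evaluate all interior P1 hat functions at the points xs."""
    centers = (np.arange(n) + 1) * h
    return np.clip(1.0 - np.abs(xs[:, None] - centers[None, :]) / h, 0.0, None)

xs = np.linspace(0.0, 1.0, 400)
phi = hats(xs) @ Phi                      # discrete eigenfunctions phi_i(x), one per column
k_diag = (phi**2 / mu).sum(axis=1)        # k(x,x) = sum_i (1/mu_i) phi_i(x)^2

# k_diag is a sum of squares of P1 functions: piecewise quadratic, so it lives
# in the "product space", not in the original P1 space.
print(k_diag[::80])
```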

Sorry, I don’t see how this should work. What about Hutchinson’s method?

from Google AI:

Hutchinson’s method, or its variants, can be used to estimate the diagonal of a matrix inverse by element-wise multiplication of random vectors with the matrix-vector product of the matrix inverse and those vectors, followed by element-wise averaging. This allows estimating the diagonal elements without explicitly forming the entire inverse matrix, which is useful for the large, sparse matrices common in applications like machine learning and computational science.

The Concept

  1. Goal: To estimate the vector diag(A⁻¹).
  2. The Method: It involves taking a sequence of random vectors, vⱼ.
  3. Calculation: For each vector vⱼ, you compute wⱼ = A⁻¹vⱼ.
  4. Estimator: The diagonal elements of A⁻¹ are estimated by averaging the Hadamard (element-wise) products vⱼ ⊙ wⱼ over the samples, normalized element-wise by the average of vⱼ ⊙ vⱼ (see the sketch below).
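
A minimal sketch of that estimator (my own illustration, not part of the AI summary; it assumes a sparse SPD matrix, Rademacher probe vectors, and a reused sparse LU via SciPy, and the function name is made up):

```python
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

def estimate_diag_inverse(A, n_samples=200, rng=None):
    """Hutchinson-type estimate of diag(A^{-1}) (Bekas/Kokiopoulou/Saad variant)."""
    rng = np.random.default_rng(rng)
    n = A.shape[0]
    solve = spla.factorized(sp.csc_matrix(A))   # one sparse LU, reused for all solves
    num = np.zeros(n)
    den = np.zeros(n)
    for _ in range(n_samples):
        v = rng.choice([-1.0, 1.0], size=n)     # Rademacher probe vector
        w = solve(v)                            # w = A^{-1} v
        num += v * w
        den += v * v                            # all ones here, kept for generality
    return num / den                            # element-wise normalisation

# tiny usage example: shifted 1D Laplacian
n = 200
A = sp.diags([-1.0, 2.1, -1.0], [-1, 0, 1], shape=(n, n), format="csc")
d_est = estimate_diag_inverse(A, n_samples=1000, rng=0)
d_exact = np.diag(np.linalg.inv(A.toarray()))
print(np.max(np.abs(d_est - d_exact) / np.abs(d_exact)))   # relative error of the estimate
```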

Thank you for the suggestion. However, the emphasis is less on estimation and more on having a natural setting for such diag(.) objects to live in for the purpose of inverse problems. This relates to covariance operators in the sense that if f is a function-valued Gaussian random variable living e.g. in H^1 (i.e. a fes), with

f~N(0,M_q)

where M_q is the operator of multiplication by a function q, acting as the covariance operator of f, then the “product fes” that I mentioned above would be the natural space for q to live in and the natural setting for regularised inversion. The diag operator would appear in the backpropagation and would then map back into the “product fes”.
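
To make that concrete, here is a rough sketch of where the products appear (NGSolve-style syntax; the particular spaces and the choice of q are purely illustrative, not a proposed design): the covariance operator M_q shows up as the weighted mass matrix with entries ∫ q p_i p_j dx, so q is always paired against products of basis functions of the fes.

```python
from ngsolve import *
from netgen.geom2d import unit_square

mesh = Mesh(unit_square.GenerateMesh(maxh=0.2))
fes = H1(mesh, order=1)                  # space the random field f lives in
u, v = fes.TnT()

# stand-in for the "product fes": here just a higher-order L2 space for q
qspace = L2(mesh, order=2)
q = GridFunction(qspace)
q.Set(1 + x*y)                           # some positive, illustrative q

# covariance operator as a weighted mass matrix:
# (M_q)_{ij} = \int q p_i p_j dx, i.e. q is tested against products p_i p_j
Mq = BilinearForm(q*u*v*dx).Assemble()
```

The point being that the natural dual pairing of q is with the products p_i p_j, which is why a genuine product space would be the natural home for q and for diag(.) of operators acting on the fes.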

As mentioned, if the fes is a 0th-order L2 fes, then its product fes is itself, and in that setting I can already do everything I want. But it would be very nice to have a natural way to do this for smoother spaces, without interpolation/approximation.