Description
The Discrete Empirical Interpolation Method (DEIM) is an important tool in model reduction, significantly reducing the cost of evaluating a high-dimensional function when the function value is known to be nearly contained in a low-dimensional subspace. On a matrix level, DEIM requires one to select a well-conditioned square submatrix $U(I,:)$ of an orthonormal basis $U \in \mathbb R^{n\times k}$. Commonly used approaches to selecting the row index set $I$ proceed in a greedy manner. For example, Q-DEIM by Drmac and Gugercin first selects the row of maximum norm, removes this selected row by orthogonal projection, updates $U$, and then selects the next row of maximum norm, and so on. While such greedy strategies often work well, there are cases for which $\|U(I,:)^{-1}\|_2$ grows exponentially with $k$. Randomization provides a simple cure: instead of choosing the row of maximum norm, one selects the row index randomly with probability proportional to the squared row norm. The selected row is removed by orthogonal projection and, in turn, the probability distribution for selecting the next row is adapted. The index set $I$ resulting from this adaptive randomized selection satisfies $\|U(I,:)^{-1}\|_2 \le \sqrt{1+k(n-k)}$ in expectation, which matches the bound obtained from solving the NP-hard problem of finding the submatrix of maximum volume. Derandomizing our adaptive randomized DEIM strategy yields a deterministic algorithm recently proposed by Osinsky, which is slightly more expensive but satisfies the same bound deterministically. The talk will also consider various applications and extensions, including symplectic model reduction and Lagrangian graph bases.
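The adaptive randomized selection described above can be sketched in a few lines of NumPy. This is a minimal illustration, not the speaker's implementation: the function name is made up, and the sketch assumes sampling proportional to squared row norms (which, for an orthonormal basis, sum to $k$ and shrink by one after each projection step), followed by deflation of the selected row via orthogonal projection.

```python
import numpy as np

def adaptive_randomized_row_selection(U, rng=None):
    """Sketch of adaptive randomized DEIM index selection (assumed form).

    Given U with orthonormal columns (n x k), returns k row indices I,
    sampled one at a time with probability proportional to the current
    squared row norms; each selected row is then removed by projecting
    all rows onto the orthogonal complement of the selected row.
    """
    rng = np.random.default_rng(rng)
    V = U.copy()
    n, k = V.shape
    indices = []
    for _ in range(k):
        p = np.sum(V**2, axis=1)   # current squared row norms
        p /= p.sum()               # normalize into a distribution
        i = rng.choice(n, p=p)
        indices.append(i)
        v = V[i, :].copy()
        # Deflate: project every row onto the orthogonal complement of v.
        # Row i becomes zero, so it cannot be selected again.
        V -= np.outer(V @ v, v) / (v @ v)
    return np.array(indices)
```

Because the selected row is zeroed out by the projection, the $k$ returned indices are distinct, and $U(I,:)$ is invertible with probability one; the bound $\sqrt{1+k(n-k)}$ on $\|U(I,:)^{-1}\|_2$ holds in expectation over the sampling, not for every individual draw.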