Conveners
Invited talks: Krylov subspaces
- Marcel Schweitzer (University of Wuppertal)
Invited talks: Network science
- Thomas Mach (Universität Potsdam)
Invited talks: Exponential integration
- Stéphane Gaudreault (Environment and Climate Change Canada)
Invited talks: f(A)scinating applications
- Patrick Kürschner (HTWK Leipzig)
Invited talks: Graphs and networks
- Marcel Schweitzer (University of Wuppertal)
When efficient linear solvers for shifted systems are unavailable, polynomial Krylov subspace methods are often the only viable choice for computing $f(A)b$, the action of a matrix function on a vector. For less well-conditioned problems, the number of required Arnoldi steps may then become so large that storing the Arnoldi vectors exceeds the available memory and that orthogonalization costs...
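As a point of reference for the abstract above, here is a minimal sketch of the standard Arnoldi approximation $f(A)b \approx \|b\|\, V_m f(H_m) e_1$ in NumPy/SciPy. The function name, the choice $f = \exp$, and the 1D Laplacian test matrix are illustrative assumptions, not taken from the talk.

```python
import numpy as np
from scipy.linalg import expm

def arnoldi_fAb(A, b, m, f=expm):
    """Approximate f(A) b from the m-dimensional Krylov subspace K_m(A, b).

    Arnoldi builds an orthonormal basis V_m and a Hessenberg matrix H_m;
    the approximation is ||b|| * V_m f(H_m) e_1.
    """
    n = b.size
    V = np.zeros((n, m + 1))
    H = np.zeros((m + 1, m))
    beta = np.linalg.norm(b)
    V[:, 0] = b / beta
    for j in range(m):
        w = A @ V[:, j]
        for i in range(j + 1):              # modified Gram-Schmidt orthogonalization
            H[i, j] = V[:, i] @ w
            w -= H[i, j] * V[:, i]
        H[j + 1, j] = np.linalg.norm(w)
        if H[j + 1, j] < 1e-12:             # happy breakdown: result exact in K_{j+1}
            m = j + 1
            break
        V[:, j + 1] = w / H[j + 1, j]
    e1 = np.zeros(m); e1[0] = 1.0
    return beta * V[:, :m] @ (f(H[:m, :m]) @ e1)

# Illustrative test: exp(A) b for a 1D discrete Laplacian
n = 200
A = -2 * np.eye(n) + np.eye(n, k=1) + np.eye(n, k=-1)
b = np.ones(n)
print(np.linalg.norm(arnoldi_fAb(A, b, 30) - expm(A) @ b))
```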
Though they seemingly belong to two different worlds, matrix functions and network science have some degree of overlap thanks to a very simple fact: powers of the adjacency matrix count walks in the underlying network. This concept in turn allows for the definition of centrality measures in terms of entries (or sums thereof) of functions of the adjacency matrix.
In this talk, after...
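As a small, self-contained illustration of that fact (not part of the talk): the diagonal of $e^A = \sum_k A^k/k!$ weights closed walks at each node, giving subgraph centrality, and its row sums give total communicability. The toy graph below is an assumption for the example.

```python
import numpy as np
from scipy.linalg import expm

# Adjacency matrix of a small undirected graph: path 0-1-2-3 plus the edge 1-3.
A = np.array([[0, 1, 0, 0],
              [1, 0, 1, 1],
              [0, 1, 0, 1],
              [0, 1, 1, 0]], dtype=float)

expA = expm(A)

# (A^k)_{ii} counts closed walks of length k at node i, so diag(exp(A)) weights
# shorter closed walks more heavily: subgraph centrality.
subgraph_centrality = np.diag(expA)

# Row sums of exp(A) count weighted walks from node i to anywhere: total communicability.
total_communicability = expA @ np.ones(A.shape[0])

print(subgraph_centrality)       # node 1 (degree 3, on the triangle) scores highest
print(total_communicability)
```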
Exponential integration has taken on a more prominent role in scientific computing over the past decades. Exponential schemes offer computational savings for many problems involving large stiff systems of differential equations. Careful design of a practical exponential scheme is crucial, however, to ensure that the resulting method is efficient for a particular equation. In particular, to...
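For readers unfamiliar with the basic idea, here is a minimal sketch (not the talk's schemes) of one exponential Euler step for a semilinear system $u' = Au + g(u)$, using the standard augmented-matrix identity to obtain $\varphi_1(hA)v$ from a single matrix exponential. The heat-equation test problem and step size are assumptions for illustration.

```python
import numpy as np
from scipy.linalg import expm

def phi1_action(M, v):
    """phi_1(M) v via the augmented-matrix identity:
    expm([[M, v], [0, 0]]) = [[expm(M), phi_1(M) v], [0, 1]]."""
    n = v.size
    W = np.zeros((n + 1, n + 1))
    W[:n, :n] = M
    W[:n, n] = v
    return expm(W)[:n, n]

def exponential_euler_step(u, h, A, g):
    """One exponential Euler step for u' = A u + g(u):
    u_{n+1} = exp(hA) u_n + h * phi_1(hA) g(u_n)."""
    return expm(h * A) @ u + h * phi1_action(h * A, g(u))

# Illustrative stiff problem: 1D heat equation with a cubic reaction term.
n = 50
A = (n + 1) ** 2 * (-2 * np.eye(n) + np.eye(n, k=1) + np.eye(n, k=-1))
g = lambda u: u - u ** 3
u = np.sin(np.pi * np.linspace(0, 1, n + 2)[1:-1])
u = exponential_euler_step(u, 1e-2, A, g)
```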
In this talk, we consider two efficient ways to approximate actions of $\varphi$-functions for matrices $A$ with a $d$-dimensional Kronecker sum structure, that is $A=A_d\oplus\cdots\oplus A_1$. The first one is based on the approximation of the integral representation of the $\varphi$-functions by Gaussian quadrature formulas combined with a scaling and squaring technique. The resulting...
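The code below is not the quadrature-based scheme of the abstract; it is only a small reminder, for $d=2$ and the exponential ($\varphi_0$), of why the Kronecker sum structure pays off: $\exp(A_2 \oplus A_1)\,\mathrm{vec}(B) = \mathrm{vec}\big(\exp(A_1)\, B \exp(A_2)^T\big)$, so the action never requires forming the large matrix.

```python
import numpy as np
from scipy.linalg import expm

rng = np.random.default_rng(1)
n1, n2 = 6, 5
A1 = rng.standard_normal((n1, n1))
A2 = rng.standard_normal((n2, n2))
B = rng.standard_normal((n1, n2))

# Kronecker sum A = A2 (+) A1 = kron(A2, I) + kron(I, A1), acting on vec(B)
A = np.kron(A2, np.eye(n1)) + np.kron(np.eye(n2), A1)
big = expm(A) @ B.flatten(order="F")            # column-major vec matches the kron convention

# Structure-exploiting evaluation: only the small factors are exponentiated
small = (expm(A1) @ B @ expm(A2).T).flatten(order="F")

print(np.linalg.norm(big - small))              # agreement up to rounding error
```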
Tensors are multidimensional arrays that can play a key role in the representation of big data. Decompositions of higher-order tensors have applications in biochemistry, signal processing, data mining, neuroscience, and elsewhere. Building on earlier decompositions (such as canonical/parallel factor (CANDECOMP/PARAFAC), Tucker, and their variants), recent research efforts have been devoted to the...
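As a concrete example of the Tucker format mentioned above (an illustration of my choosing, not taken from the talk), here is a bare-bones truncated higher-order SVD in plain NumPy: one SVD per mode unfolding, followed by extraction of the core tensor.

```python
import numpy as np

def unfold(T, mode):
    """Mode-n unfolding: the chosen mode becomes the rows."""
    return np.moveaxis(T, mode, 0).reshape(T.shape[mode], -1)

def mode_product(T, M, mode):
    """Mode-n product of tensor T with matrix M along the given mode."""
    return np.moveaxis(np.tensordot(M, T, axes=(1, mode)), 0, mode)

def hosvd(T, ranks):
    """Truncated HOSVD: T ~ core x_1 U[0] x_2 U[1] ... with given multilinear ranks."""
    U = [np.linalg.svd(unfold(T, k), full_matrices=False)[0][:, :r]
         for k, r in enumerate(ranks)]
    core = T
    for k, u in enumerate(U):
        core = mode_product(core, u.T, k)
    return core, U

def reconstruct(core, U):
    T = core
    for k, u in enumerate(U):
        T = mode_product(T, u, k)
    return T

# Build a tensor of exact multilinear rank (3, 3, 3) and recover it
rng = np.random.default_rng(2)
G = rng.standard_normal((3, 3, 3))
Us = [np.linalg.qr(rng.standard_normal((n, 3)))[0] for n in (8, 9, 10)]
T = reconstruct(G, Us)
core, U = hosvd(T, (3, 3, 3))
print(np.linalg.norm(T - reconstruct(core, U)) / np.linalg.norm(T))   # ~ machine precision
```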
It is well known that the Lanczos tridiagonal matrix can be transformed into an equivalent finite-difference scheme, with the coefficients obtained from the Stieltjes continued fraction. We show the usefulness of such a representation on two seemingly unrelated problems.
The first one is the "rigorous" computation of the exponential matrix moments $b^*\exp(-tA)b$. Here we use the finite-difference...
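For context (and with the caveat that this is only a sketch, not the finite-difference/Stieltjes machinery of the talk): after $m$ Lanczos steps the tridiagonal matrix $T_m$ already gives the Gauss-quadrature approximation $b^*\exp(-tA)b \approx \|b\|^2\, e_1^T \exp(-t\,T_m)\, e_1$. The test matrix below is an assumption.

```python
import numpy as np
from scipy.linalg import expm

def lanczos_tridiag(A, b, m):
    """m steps of the Lanczos process on symmetric A; returns the tridiagonal T_m."""
    n = b.size
    alpha, beta = np.zeros(m), np.zeros(m - 1)
    q_prev, q = np.zeros(n), b / np.linalg.norm(b)
    for j in range(m):
        w = A @ q
        alpha[j] = q @ w
        w = w - alpha[j] * q - (beta[j - 1] * q_prev if j > 0 else 0.0)
        if j < m - 1:
            beta[j] = np.linalg.norm(w)
            q_prev, q = q, w / beta[j]
    return np.diag(alpha) + np.diag(beta, 1) + np.diag(beta, -1)

rng = np.random.default_rng(3)
n, t, m = 400, 0.5, 25
B = rng.standard_normal((n, n))
A = B @ B.T / n + np.eye(n)                      # symmetric positive definite test matrix
b = rng.standard_normal(n)

T = lanczos_tridiag(A, b, m)
e1 = np.zeros(m); e1[0] = 1.0
approx = (b @ b) * (e1 @ expm(-t * T) @ e1)      # ||b||^2 e_1^T exp(-t T_m) e_1
exact = b @ expm(-t * A) @ b
print(abs(approx - exact) / abs(exact))
```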
Numerical methods for evaluating a function $f$ at an $n \times n$ matrix $A$ can be based on a variety of different approaches, but for a large class of algorithms the matrix $f(A)$ is approximated by using only three operations:
- $Z \gets c_X X + c_Y Y$ (linear combination of matrices),
- $Z \gets X \cdot Y$ (matrix multiplication),
- $Z \gets X^{-1} Y$ (solution of a linear...
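As an illustration of how far these three operations reach (an example of my choosing, not from the abstract): the Denman-Beavers iteration computes the principal matrix square root using only linear combinations and linear solves.

```python
import numpy as np

def sqrtm_denman_beavers(A, iters=20):
    """Denman-Beavers iteration for the principal square root of A:
    X_{k+1} = (X_k + Y_k^{-1})/2,  Y_{k+1} = (Y_k + X_k^{-1})/2,
    with X_0 = A, Y_0 = I; then X_k -> A^{1/2} and Y_k -> A^{-1/2}.
    Only the primitives Z <- c_X X + c_Y Y and Z <- X^{-1} Y are used."""
    n = A.shape[0]
    I = np.eye(n)
    X, Y = A.copy(), I.copy()
    for _ in range(iters):
        Xinv = np.linalg.solve(X, I)           # Z <- X^{-1} I
        Yinv = np.linalg.solve(Y, I)
        X, Y = 0.5 * (X + Yinv), 0.5 * (Y + Xinv)
    return X

rng = np.random.default_rng(4)
B = rng.standard_normal((5, 5))
A = B @ B.T + 5 * np.eye(5)                    # SPD, so the principal square root exists
S = sqrtm_denman_beavers(A)
print(np.linalg.norm(S @ S - A) / np.linalg.norm(A))
```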
The problem of approximating the von Neumann entropy of a symmetric positive semidefinite matrix $A$, defined as $\mathrm{tr}(f(A))$ where $f(x) = - x \log x$, is considered. After discussing some useful properties of this matrix function, approximation methods based on randomized trace estimation and probing techniques used in conjunction with polynomial and rational Krylov methods will be...
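A bare-bones version of the randomized trace estimation step, assuming Hutchinson's estimator with Rademacher probes; for clarity the action $f(A)z$ is computed exactly through an eigendecomposition, whereas in the setting of the abstract it would be approximated by polynomial or rational Krylov methods, possibly combined with probing.

```python
import numpy as np

def entropy_hutchinson(A, num_probes=200, rng=None):
    """Estimate tr(f(A)), f(x) = -x log x, via Hutchinson's estimator:
    tr(f(A)) ~ (1/N) sum_k z_k^T f(A) z_k with Rademacher probes z_k."""
    rng = rng or np.random.default_rng()
    n = A.shape[0]
    lam, Q = np.linalg.eigh(A)                     # A symmetric positive semidefinite
    flam = np.where(lam > 0, -lam * np.log(np.clip(lam, 1e-300, None)), 0.0)
    est = 0.0
    for _ in range(num_probes):
        z = rng.choice([-1.0, 1.0], size=n)        # Rademacher probe vector
        est += z @ (Q @ (flam * (Q.T @ z)))        # z^T f(A) z, with f(A) z applied exactly
    return est / num_probes

# Density-matrix-like test: symmetric PSD with unit trace
rng = np.random.default_rng(5)
B = rng.standard_normal((100, 100))
A = B @ B.T
A /= np.trace(A)
lam = np.linalg.eigvalsh(A)
exact = -np.sum(lam * np.log(lam))                 # eigenvalues are positive here
print(exact, entropy_hutchinson(A, rng=rng))
```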