Last week Nick Higham, Edvin Deadman, and I ran a minisymposium on matrix functions at the SIAM Applied Linear Algebra 2015 conference (link). This post gives a brief summary of each talk, links to published work, and (once they appear) links to the slides with synchronised audio.
Edit: Links to the talks are now available.
Attendance at the sessions was very good, with some high-quality questions coming from the audience.
The symposium had two sessions.
- Marcel Schweitzer – Error Estimation in Krylov Subspace Methods for Matrix Functions
- Michele Benzi – Functions of Matrices with Kronecker Sum Structure
- Bruno Iannazzo – First-Order Riemannian Optimization Techniques for the Karcher Mean
- Sivan Toledo – A High Performance Algorithm for the Matrix Sign Function
- Peter Kandolf – The Leja Method: Backward Error Analysis and Implementation
- Massimiliano Fasi – An Algorithm for the Lambert W Function on Matrices
- Antti Koskela – An Exponential Integrator for Polynomially Perturbed Linear ODEs
- Edvin Deadman – Estimating the condition number of f(A)b
Peter Kandolf describing the famous “hump” in the matrix exponential.
Marcel Schweitzer – Error Estimation in Krylov Subspace Methods for Matrix Functions
The first talk of our session was about error estimation when using the Lanczos method to approximate f(A)b. Marcel explained the derivation of a new integral representation of the error, which can be computed cheaply. This assumes that the matrix function is a Stieltjes function, which is still a large class of functions including the logarithm and the inverse square root.
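For reference, a Stieltjes function is one that can be written as an integral against a nonnegative measure; the two examples mentioned above fit this mould (these are the standard textbook representations, not formulas from the talk):

```latex
f(z) = \int_0^\infty \frac{\mathrm{d}\mu(t)}{t+z},
\qquad\text{e.g.}\qquad
z^{-1/2} = \int_0^\infty \frac{1}{\pi\sqrt{t}\,(t+z)} \,\mathrm{d}t,
\qquad
\frac{\log(1+z)}{z} = \int_1^\infty \frac{1}{t(t+z)} \,\mathrm{d}t.
```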
Michele Benzi – Functions of Matrices with Kronecker Sum Structure
The next talk by Michele Benzi focused on functions of large, sparse matrices with Kronecker sum structure, which arise in a number of applications. There were two main areas explored in the talk. Firstly, new bounds were given on the decay behaviour of such matrix functions. The decay (in magnitude) of the elements of the matrix exponential away from the diagonal is a well-known phenomenon which has been extended nicely for matrices of this form. Secondly, some ideas on how to form the matrix function times a vector using Krylov subspace methods were discussed.
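The Kronecker sum structure is special because the exponential factorizes: exp(A ⊕ B) = exp(A) ⊗ exp(B), so the exponential of the large matrix never needs to be formed explicitly. A minimal sketch verifying this identity numerically (random matrices here are just for illustration):

```python
import numpy as np
from scipy.linalg import expm

# Build the Kronecker sum A ⊕ B = A ⊗ I + I ⊗ B
# (e.g. a 2D Laplacian is the Kronecker sum of two 1D Laplacians).
rng = np.random.default_rng(0)
A = rng.standard_normal((4, 4))
B = rng.standard_normal((4, 4))
I = np.eye(4)
K = np.kron(A, I) + np.kron(I, B)

# A ⊗ I and I ⊗ B commute, so exp(A ⊕ B) = exp(A) ⊗ exp(B).
err = np.linalg.norm(expm(K) - np.kron(expm(A), expm(B)))
print(err)
```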
Bruno Iannazzo – First-Order Riemannian Optimization Techniques for the Karcher Mean
The Karcher mean of matrices is a generalization of the geometric mean. It applies to positive definite matrices and has a nice geometric characterization, similar to the geometric mean. The talk then explored optimization algorithms that exploit the geometric structure of the problem to compute the Karcher mean very efficiently.
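For two positive definite matrices the Karcher mean reduces to the classical geometric mean A # B = A^(1/2)(A^(-1/2) B A^(-1/2))^(1/2) A^(1/2), which satisfies the matrix analogue of g = √(ab). A small sketch of this two-matrix case (the Riemannian optimization algorithms from the talk handle three or more matrices, which this does not):

```python
import numpy as np
from scipy.linalg import sqrtm, inv

def geometric_mean(A, B):
    """Geometric mean of two symmetric positive definite matrices."""
    Ah = sqrtm(A)           # A^(1/2)
    Ahi = inv(Ah)           # A^(-1/2)
    return Ah @ sqrtm(Ahi @ B @ Ahi) @ Ah

A = np.array([[2.0, 1.0], [1.0, 3.0]])
B = np.array([[4.0, 0.0], [0.0, 1.0]])
G = geometric_mean(A, B)

# G satisfies G A^{-1} G = B, the matrix analogue of g^2 = ab.
print(np.linalg.norm(G @ inv(A) @ G - B))
```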
Sivan Toledo – A High Performance Algorithm for the Matrix Sign Function
The final talk of the first session discussed the implementation of a high performance routine for computing the matrix sign function, based upon Nick Higham’s modified Schur-Parlett method. The resulting routine is much more efficient than existing implementations.
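For readers unfamiliar with the matrix sign function: it maps eigenvalues in the right half-plane to 1 and those in the left half-plane to -1. A simple way to compute it (the classical Newton iteration X ← (X + X⁻¹)/2, not the Schur-based method from the talk) is sketched below:

```python
import numpy as np

def matrix_sign(A, tol=1e-12, maxit=50):
    """Matrix sign via the Newton iteration X <- (X + inv(X))/2.
    Requires A to have no eigenvalues on the imaginary axis."""
    X = A.astype(float).copy()
    for _ in range(maxit):
        Xnew = 0.5 * (X + np.linalg.inv(X))
        if np.linalg.norm(Xnew - X, 1) <= tol * np.linalg.norm(Xnew, 1):
            return Xnew
        X = Xnew
    return X

A = np.array([[2.0, 1.0], [0.0, -3.0]])  # eigenvalues 2 and -3
S = matrix_sign(A)

# sign(A) is involutory: sign(A)^2 = I.
print(np.linalg.norm(S @ S - np.eye(2)))
```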
Nice fractal patterns in Massimiliano Fasi’s talk.
Peter Kandolf – The Leja Method: Backward Error Analysis and Implementation
The second session began with a new backward error analysis for computing exp(A)b with the Leja method. The ideas were reminiscent of the work by Al-Mohy and Higham, with some novel twists. The new method is highly efficient and is well worth looking into.
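The Leja method itself is not in SciPy, but the related Al-Mohy–Higham approach mentioned above is available as `scipy.sparse.linalg.expm_multiply`, which computes exp(A)b without ever forming exp(A). A quick sanity check against the dense computation (random data purely for illustration):

```python
import numpy as np
from scipy.linalg import expm
from scipy.sparse.linalg import expm_multiply

rng = np.random.default_rng(1)
A = rng.standard_normal((50, 50))
b = rng.standard_normal(50)

# exp(A)b without forming the (dense) matrix exponential.
y = expm_multiply(A, b)

# Compare with the explicit dense computation.
rel_err = np.linalg.norm(y - expm(A) @ b) / np.linalg.norm(y)
print(rel_err)
```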
Massimiliano Fasi – An Algorithm for the Lambert W Function on Matrices
The next talk focused on the rather unusual Lambert W function. Because of its branch cuts and related subtleties, the function is very nontrivial to compute in practice. The talk described an algorithm based on a heavily modified Schur-Parlett method, in which a Newton iteration is used on each diagonal block.
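The Lambert W function is defined by W(z) e^(W(z)) = z, and the matrix version satisfies the analogous equation. For a well-behaved matrix (distinct eigenvalues away from the branch point -1/e), SciPy's generic Schur-Parlett routine `funm` can evaluate the principal branch; this is only an illustration of the defining equation, not the specialised algorithm from the talk:

```python
import numpy as np
from scipy.linalg import expm, funm
from scipy.special import lambertw

A = np.array([[2.0, 1.0], [0.0, 3.0]])  # distinct eigenvalues 2 and 3

# Generic Schur-Parlett evaluation of the principal branch of W.
W, errest = funm(A, lambertw, disp=False)

# Check the defining equation W(A) exp(W(A)) = A.
print(np.linalg.norm(W @ expm(W) - A))
```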
Antti Koskela – An Exponential Integrator for Polynomially Perturbed Linear ODEs
The penultimate talk of our session was on exponential integrators. In particular, Antti described a way to solve linear ODEs with small polynomial perturbations extremely efficiently using Krylov subspace methods. The approach is especially effective when the ODE is solved multiple times with different perturbations.
Edvin Deadman – Estimating the condition number of f(A)b
Finally, Edvin rounded off our session by speaking about how we might estimate the condition number of f(A)b for large, sparse matrices. This is currently a big problem when using exponential integrators and in other applications where f(A)b is required: we currently have no way to determine whether the problem is ill-conditioned. The derivation of an algorithm for estimating this condition number was discussed, which would benefit hugely from further research into computing the Fréchet derivative times a vector more efficiently.
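The Fréchet derivative mentioned above measures the first-order sensitivity of f(A) to a perturbation E. A crude way to get L_f(A, E)b, far from the efficient scheme discussed in the talk, is a finite difference; for f = exp and E = I there is a handy exact answer to check against, since exp(A + hI) = e^h exp(A) implies L_exp(A, I)b = exp(A)b:

```python
import numpy as np
from scipy.linalg import expm

rng = np.random.default_rng(2)
A = rng.standard_normal((10, 10))
b = rng.standard_normal(10)
E = np.eye(10)
h = 1e-7

# Finite-difference estimate: L_f(A, E) b ≈ (f(A + hE)b - f(A)b) / h.
Lb = (expm(A + h * E) @ b - expm(A) @ b) / h

# For E = I, exp(A + hI) = e^h exp(A), so L_exp(A, I) b = exp(A) b exactly.
rel_err = np.linalg.norm(Lb - expm(A) @ b) / np.linalg.norm(Lb)
print(rel_err)
```

The balancing act visible here (truncation error grows with h, cancellation error grows as h shrinks) is one reason dedicated algorithms for the Fréchet derivative are worth the research effort.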