Every two years the IMA organizes a conference on the interface between numerical linear algebra and optimization. For me, this was the perfect place to organize my first minisymposium, entitled ‘Modern Directions for Matrix Analysis and Applications’, with Natasa Strabic. We managed to get some great speakers talking about their ideas for future research. I’ve summarised some of their main ideas here, and you can find my presentation on SlideShare.
Left to Right: Me, Amal Khabou, Ben Jeuris, Federico Poloni, Natasa Strabic, Roel Van Beeumen. Photo: Mario Berljafa.
Natasa Strabic (Manchester)
Repairing the Indefiniteness of a Correlation Matrix with a Fixed Block
In many applications involving statistical modelling, a correlation matrix is built from a dataset but, because the data are incomplete or inconsistent, the resulting ‘correlation matrix’ is often not positive semidefinite. This indefiniteness renders many statistical techniques invalid, as they require a genuine (positive semidefinite) correlation matrix.
Whilst there are already a number of algorithms to compute the nearest correlation matrix (see Nick Higham’s blog post), they change every element of the matrix. Sometimes there is a positive semidefinite submatrix which one would like to keep fixed whilst changing the remaining elements in an optimal way, and Natasa described methods for doing exactly this.
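To make the problem concrete, here is a small NumPy sketch of my own (not Natasa’s algorithm): a symmetric matrix with unit diagonal, assembled from inconsistent pairwise estimates, turns out to be indefinite, and a naive eigenvalue-clipping repair fixes it but perturbs every entry, including any block we might have wanted to leave alone.

```python
import numpy as np

# Looks like a correlation matrix (symmetric, unit diagonal) but the
# pairwise entries are mutually inconsistent, so it is indefinite.
C = np.array([[1.0,  0.9,  0.7],
              [0.9,  1.0, -0.9],
              [0.7, -0.9,  1.0]])
# np.linalg.eigvalsh(C).min() is negative -> not positive semidefinite.

# Naive repair: clip negative eigenvalues to zero, then rescale the
# diagonal back to ones.  Note that this changes *every* entry of C.
w, V = np.linalg.eigh(C)
X = V @ np.diag(np.maximum(w, 0)) @ V.T
d = np.sqrt(np.diag(X))
C_psd = X / np.outer(d, d)   # positive semidefinite, unit diagonal
```

The point of the fixed-block methods is precisely to avoid this: keep a trusted positive semidefinite block untouched and repair only the rest.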
Here is Natasa’s home page.
Federico Poloni (Pisa)
Permuted Graph Bases for Structured Subspaces and Pencils
Federico explained the benefits of using permuted graph bases to represent a subspace. Whilst slightly less well-conditioned than an orthogonal representation, these bases preserve structure in the matrix, leading to structured algorithms for a range of optimal control problems, including the solution of algebraic Riccati equations.
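As a rough illustration of the underlying idea (my own sketch, not Federico’s algorithms): a graph basis represents the column space of a tall matrix B in the form [I; X] after normalizing against an invertible block of rows, and the ‘permuted’ variant chooses which rows go on top (here greedily, by partial pivoting) so that the representation stays reasonably conditioned.

```python
import numpy as np

def graph_basis(B, n):
    """Represent span(B) as a permuted [I; X] basis: pick n rows of B
    (greedily, by partial pivoting) to form an invertible block T, set
    X = rest @ inv(T), and undo the row permutation.  Since the result
    equals B @ inv(T), it spans the same subspace as B."""
    m = B.shape[0]
    W = B.copy()
    rows = []
    for j in range(n):
        # pivot: unused row with the largest entry in the reduced column j
        i = max((r for r in range(m) if r not in rows),
                key=lambda r: abs(W[r, j]))
        rows.append(i)
        W = W - np.outer(W[:, j], W[i, :]) / W[i, j]  # eliminate column j
    rest = [r for r in range(m) if r not in rows]
    X = B[rest, :] @ np.linalg.inv(B[rows, :])
    perm = rows + rest
    G = np.vstack([np.eye(n), X])[np.argsort(perm), :]
    return G, X
```

Keeping an explicit identity block is what makes it possible to preserve, for example, Lagrangian structure that an orthogonal basis would destroy.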
Roel Van Beeumen (KU Leuven)
Compact Rational Krylov Methods for Solving Nonlinear Eigenvalue Problems
Roel presented a framework of ‘Compact Rational Krylov’ methods for nonlinear eigenvalue problems. By approximating the nonlinear function with a rational matrix function, he showed that the resulting linearization can be stored extremely compactly by exploiting its structure via a compact Arnoldi decomposition. Combined with a restarting procedure, the resulting algorithm has very low memory costs, allowing much larger problems to be tackled with this type of method.
Amal Khabou (Manchester)
Fast Generation of Random Orthogonal Matrices
Amal talked about new methods for generating random orthogonal matrices quickly. Such matrices are needed to generate random test problems, to generate datasets with a given mean and variance, and, through ‘Random Orthogonal Matrix Masking’ techniques, for privacy: preserving statistical features of the data whilst hiding the identity of participants.
Two possibilities were discussed: first, algorithms that produce matrices which are exactly Haar distributed (the natural uniform distribution on orthogonal matrices), and second, a family of algorithms which are much faster but return only approximately Haar distributed matrices.
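For the exactly Haar distributed case, a standard method (which I sketch here for illustration; the talk may have covered different algorithms) is to take the QR factorization of a Gaussian random matrix and apply a sign correction so that the factorization is unique:

```python
import numpy as np

def haar_orthogonal(n, seed=None):
    """Exactly Haar-distributed random orthogonal matrix, via QR of a
    Gaussian matrix with the sign fix of Mezzadri (2007): flipping column
    signs so that diag(R) > 0 makes the QR factorization unique, which is
    what guarantees the Haar distribution."""
    rng = np.random.default_rng(seed)
    Z = rng.standard_normal((n, n))
    Q, R = np.linalg.qr(Z)
    return Q * np.sign(np.diag(R))  # scale each column by the sign of R's diagonal
```

This costs a full QR factorization, O(n^3) flops, which is exactly what the faster, approximately Haar methods try to beat.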
Ben Jeuris (KU Leuven)
Optimization on Structured Matrix Manifolds: Toeplitz and TBBT Matrices
Ben gave a very interesting talk about generalizing the barycentric mean to matrix manifolds. In many applications we need to ‘average’ a set of measurements stored as matrices, and these matrices often possess some structure. Ben proposed using distance measures for which the barycentric mean has the same structure as the input matrices, focusing on Toeplitz structure.
He also explained how a number of optimization techniques, such as steepest descent and conjugate gradients (CG), can be generalized to the manifold setting.
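The simplest instance of a barycentric mean on a matrix manifold is the geometric mean of two symmetric positive definite (SPD) matrices; the sketch below (my own illustration, not Ben’s structured algorithms) shows why naive entrywise averaging is not the only option:

```python
import numpy as np

def spd_sqrt(A):
    """Principal square root of a symmetric positive definite matrix."""
    w, V = np.linalg.eigh(A)
    return (V * np.sqrt(w)) @ V.T

def geometric_mean(A, B):
    """Geometric mean A # B = A^{1/2} (A^{-1/2} B A^{-1/2})^{1/2} A^{1/2}.
    This is the two-matrix case of the barycentric (Karcher) mean on the
    SPD manifold, and the unique SPD solution X of X A^{-1} X = B."""
    As = spd_sqrt(A)
    Asi = np.linalg.inv(As)
    return As @ spd_sqrt(Asi @ B @ Asi) @ As
```

For more than two matrices the Karcher mean has no closed form and must be computed by manifold optimization, which is where the generalized steepest descent and CG methods come in; adding Toeplitz structure constrains the optimization further.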
Other Interesting Talks
There were lots of other interesting talks throughout the three days, too many to list them all here. Some of my favourites, which are worth looking up, were:
- Michele Benzi (Emory) – Updating and Downdating Techniques for Networks
- Joerg Liesen (TU Berlin) – Theory and Applications of Nondiscrete Induction
- Nancy Nichols (Reading) – Conditioning of Optimal State Estimation Problems