Gene Golub’s Summer Schools
The late Gene Golub made an enormous contribution to numerical analysis throughout his career. After becoming a professor at Stanford he published Matrix Computations with Charles Van Loan, one of the great textbooks in the field (the 4th edition was recently released). Gene also served as SIAM president and was the founding editor of both the SISC and SIMAX journals.
When Gene died he left behind a significant sum of money, which SIAM has used to fund summer schools for PhD students on topics relating to numerical linear algebra.
Matrix Functions and Matrix Equations
This year the summer school was held in Shanghai and focused on matrix functions and matrix equations, the former of which is the subject of my PhD research. The next few paragraphs provide a brief summary of each course.
Group photo of everyone at G2S3.
The first lecturer this year was Nicholas Higham who spoke about theoretical and computational aspects of matrix functions. This course was a great introduction to the area: from the basic definitions of a matrix function through to advanced topics such as derivatives, condition numbers and methods for evaluating functions of sparse matrices.
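To give a flavour of the sort of computation the course covered, here is a minimal sketch in Python using SciPy's built-in matrix function routines (my own illustration, not code from the lectures): `expm` implements a scaling-and-squaring algorithm for the matrix exponential, while `funm` evaluates a general function of a matrix via a Schur decomposition.

```python
import numpy as np
from scipy.linalg import expm, funm

# A small symmetric matrix, so f(A) = V f(D) V^T is easy to verify
# from the eigendecomposition A = V D V^T.
A = np.array([[2.0, 1.0],
              [1.0, 2.0]])

# Matrix exponential via scaling and squaring with Pade approximation.
expA = expm(A)

# funm evaluates a general matrix function (here the square root)
# via the Schur-Parlett approach; sqrtA @ sqrtA recovers A.
sqrtA = funm(A, np.sqrt)
```

Note that a matrix function is not applied elementwise: `funm(A, np.sqrt)` returns a matrix whose square is `A`, which is quite different from `np.sqrt(A)`.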
The second speaker was Marlis Hochbruck, who gave a series of lectures on exponential integrators. This course progressed from basic Runge-Kutta type methods to exponential Rosenbrock methods, with a heavy emphasis on working with operators instead of matrices where possible.
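The simplest method of this family is the exponential Euler scheme for a semilinear problem u' = Au + g(u), which treats the stiff linear part exactly through the function phi_1(z) = (e^z - 1)/z. The sketch below is my own toy example (the matrix A and nonlinearity g are made up for illustration), using a dense computation of phi_1 that assumes hA is nonsingular:

```python
import numpy as np
from scipy.linalg import expm

# A stiff linear part and a mild nonlinearity (illustrative choices).
A = np.array([[-100.0, 1.0],
              [0.0,   -0.1]])

def g(u):
    return np.array([0.0, np.sin(u[0])])

def phi1(M):
    # phi_1(M) = M^{-1} (e^M - I), valid when M is nonsingular.
    return np.linalg.solve(M, expm(M) - np.eye(len(M)))

def exponential_euler(u0, h, steps):
    # u_{n+1} = u_n + h * phi1(hA) * (A u_n + g(u_n)),
    # equivalent to u_{n+1} = e^{hA} u_n + h * phi1(hA) g(u_n).
    u = u0.copy()
    E = h * phi1(h * A)  # precompute once; A is fixed
    for _ in range(steps):
        u = u + E @ (A @ u + g(u))
    return u
```

Because the linear part is integrated exactly, the step size is not constrained by the stiff eigenvalue -100, which is the whole appeal of this class of methods. For large problems one would of course approximate the action of phi_1 on a vector (e.g. by Krylov methods) rather than form it densely.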
Our next speaker was Ren-Cang Li, who lectured on eigenvalue problems. Specifically, we focused on steepest descent and conjugate gradient (CG) methods for Hermitian and linear response eigenvalue problems. He also emphasised the importance of min-max principles and preconditioning strategies.
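The basic idea behind the steepest descent approach is to minimise the Rayleigh quotient along the residual direction. Here is a minimal sketch of that idea for a real symmetric matrix (my own illustration, with an exact line search done by Rayleigh-Ritz on the two-dimensional space spanned by the iterate and the residual; this is not code from the course):

```python
import numpy as np

def rq_steepest_descent(A, x0, iters=200, tol=1e-10):
    # Steepest descent on the Rayleigh quotient rho(x) = x^T A x / x^T x
    # for the smallest eigenpair of a real symmetric matrix A.
    x = x0 / np.linalg.norm(x0)
    for _ in range(iters):
        rho = x @ A @ x
        r = A @ x - rho * x           # residual = gradient direction
        if np.linalg.norm(r) < tol:
            break
        # Exact line search: Rayleigh-Ritz on span{x, r}.
        V, _ = np.linalg.qr(np.column_stack([x, r]))
        H = V.T @ A @ V
        w, Y = np.linalg.eigh(H)      # Ritz values in ascending order
        x = V @ Y[:, 0]               # keep the smallest Ritz vector
        x /= np.linalg.norm(x)
    return x @ A @ x, x
```

Replacing the residual r by a preconditioned residual B r, or expanding the search space as in CG-type methods, gives the faster variants discussed in the lectures.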
Photograph of the Bund on the Huangpu River.
Then came Sherry Li with an introduction to parallel computing. We began with an overview of the different architectures found on today's HPC machines, noting that current NUMA (Non-Uniform Memory Access) architectures make communication avoidance increasingly important. We then covered sparse matrix factorizations and direct solvers while implementing some of these algorithms in OpenMP and MPI. I had never tried parallel computing before, so I found these lab sessions extremely helpful; the tutorials at LLNL on OpenMP and MPI were particularly useful.
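As a serial illustration of the kind of sparse direct solve the labs parallelised, SciPy exposes SuperLU (Sherry Li's own solver library) through `splu`, which computes a sparse LU factorisation with column reordering to limit fill-in. The 1-D Poisson matrix below is my own choice of test problem, not one from the labs:

```python
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import splu

# 1-D Poisson (tridiagonal) matrix: a classic sparse test problem.
n = 100
T = sp.diags([-1.0, 2.0, -1.0], [-1, 0, 1], shape=(n, n), format="csc")

# Sparse LU factorisation via SuperLU; the permutations chosen during
# factorisation trade fill-in against stability.
lu = splu(T)
b = np.ones(n)
x = lu.solve(b)
```

The distributed-memory analogue, SuperLU_DIST, is what one would reach for on the MPI side at scale.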
Finally, Peter Benner spoke about model reduction and matrix equations. The model reduction section focused mainly on linear time-invariant models, surveying a range of algorithms and their convergence properties. The matrix equations section complemented this by investigating Lyapunov and Sylvester equations, which arose frequently in the first part.
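To see the connection concretely: for a stable linear time-invariant system x' = Ax + Bu, the controllability Gramian used in balanced truncation is the solution of a Lyapunov equation. A small dense sketch using SciPy's solvers (my own example matrices):

```python
import numpy as np
from scipy.linalg import solve_continuous_lyapunov, solve_sylvester

# A stable matrix (eigenvalues in the open left half-plane), so the
# Lyapunov equation A X + X A^T = -B B^T has a unique solution X,
# the controllability Gramian of x' = A x + B u.
A = np.array([[-2.0, 1.0],
              [0.0, -3.0]])
B = np.array([[1.0],
              [1.0]])
X = solve_continuous_lyapunov(A, -B @ B.T)

# The Sylvester equation A Y + Y C = Q generalises this; it has a
# unique solution when A and -C share no eigenvalues.
C = np.array([[1.0, 0.0],
              [2.0, 1.0]])
Q = np.eye(2)
Y = solve_sylvester(A, C, Q)
```

These dense solvers use Schur-decomposition-based (Bartels-Stewart type) algorithms; for the large-scale equations arising in model reduction one turns instead to low-rank iterative methods of the kind discussed in the course.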
There was also an opportunity for the students to speak about their own work, which covered an incredible variety of topics: network analysis, hydrology, preconditioners, and much more. I spoke about my recent work on higher-order derivatives of matrix functions; the slides of a similar talk at SIAMAN13 are available here.