Last week Nick Higham, Edvin Deadman, and I ran a minisymposium on matrix functions at the SIAM Applied Linear Algebra 2015 conference (link). This post gives a brief summary of each talk, links to published work, and (once they appear) links to the slides with synchronised audio.
Edit: Links to the talks are now available.
Attendance at the sessions was very good, with some high-quality questions coming from the audience.
The symposium had two sessions.
- Marcel Schweitzer – Error Estimation in Krylov Subspace Methods for Matrix Functions
- Michele Benzi – Functions of Matrices with Kronecker Sum Structure
- Bruno Iannazzo – First-Order Riemannian Optimization Techniques for the Karcher Mean
- Sivan Toledo – A High Performance Algorithm for the Matrix Sign Function
- Peter Kandolf – The Leja Method: Backward Error Analysis and Implementation
- Massimiliano Fasi – An Algorithm for the Lambert W Function on Matrices
- Antti Koskela – An Exponential Integrator for Polynomially Perturbed Linear ODEs
- Edvin Deadman – Estimating the Condition Number of f(A)b
Peter Kandolf describing the famous “hump” in the matrix exponential.
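As a quick illustration of what the “hump” refers to (this example is mine, not from the talk): for a non-normal matrix A whose eigenvalues all have negative real part, the norm of exp(tA) can grow substantially before it eventually decays.

```python
import numpy as np
from scipy.linalg import expm

# A simple non-normal matrix with both eigenvalues equal to -1.
A = np.array([[-1.0, 10.0],
              [ 0.0, -1.0]])

ts = np.linspace(0.0, 5.0, 51)
norms = [np.linalg.norm(expm(t * A), 2) for t in ts]

# At t = 0 the norm is 1, and it decays to zero as t grows, but in
# between it rises well above 1 -- that transient growth is the hump.
print(norms[0], max(norms), norms[-1])
```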
Continue reading “SIAM ALA 15 – Minisymposium on Matrix Functions”
Recently I’ve been working with some of the statistics staff at the University of Manchester on sports analytics. Specifically, we’ve been looking for useful models in football data. People from this background normally use R to analyse data and fit models.
Normally I would use Python for this kind of task but, since there was already a considerable amount of code in R, it made sense for me to do some work in R. The people at Continuum Analytics (who make the brilliant Anaconda Python distribution) recently announced support for R using their package manager conda. However, it wasn’t easy to find instructions to get a fully working environment, so here is what I did.
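For the impatient, the core of the setup is just a couple of conda commands. This is a minimal sketch; the environment name is arbitrary, and the channel and package names (`r`, `r-essentials`) reflect Continuum’s offering at the time and may since have changed.

```shell
# Create an environment containing R and a bundle of common packages
# from Continuum's "r" channel.
conda create --name r_env -c r r-essentials

# Activate the environment (older conda syntax shown here).
source activate r_env

# Check that R is available.
R --version
```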
Continue reading “How to setup R using conda”
I’ve finally finished! After years of reading papers, designing algorithms, hacking at code, and writing papers, my PhD is complete.
One of the most daunting thoughts I had as a PhD student was the idea of the viva: two experts sit in a room and pick apart the fine details of your work. They ask deep, technical questions, not limited to your thesis content, for a few hours (I’ve heard horror stories of 8 hours!) before sending you out of the room while they discuss your fate. Fifteen minutes of palpitations later, you get your result and (whatever the outcome) head to the pub, either to celebrate or to drown your sorrows.
In reality, because I was well prepared, my viva was just a chat with some knowledgeable people who were very interested in my work. There were a few curveball questions, but nothing too serious, and the whole thing was done in an hour.
Here are some of my top tips for viva preparation.
The finished product!
Continue reading “How To Prepare For Your Viva”
For the past three years I’ve been doing my PhD in applied maths at Manchester. Now that I’m almost ready to submit my thesis I thought I’d write up some tips for those who are just beginning their PhD journey.
Continue reading “Ten Things I Learnt During My (PhD) Thesis”
There are lots of new features in SciPy 0.13 (release notes) but for me the most important are the updated matrix functions in scipy.linalg and the one norm estimator in scipy.sparse.linalg.
In some of my recent research (related to section 4 of this) I’ve needed to estimate the one norm of a large (n^4 x n^2) dense matrix without computing each element. All we can assume is the ability to compute matrix-vector products (via some rather complicated function), meaning we only know the entries of the matrix implicitly.
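This kind of implicit 1-norm estimation is exactly what `scipy.sparse.linalg.onenormest` does when given a `LinearOperator` built from matrix–vector products. A minimal sketch, using a small random matrix as a stand-in for the “rather complicated function” (the real use case never forms the matrix explicitly):

```python
import numpy as np
from scipy.sparse.linalg import LinearOperator, onenormest

rng = np.random.default_rng(0)
A = rng.standard_normal((50, 50))  # hypothetical stand-in for the implicit matrix

# Wrap the matrix-vector products in a LinearOperator. The estimator
# needs products with both A (matvec) and its transpose (rmatvec).
op = LinearOperator(
    shape=A.shape,
    matvec=lambda v: A @ v,
    rmatvec=lambda v: A.T @ v,
)

est = onenormest(op)           # estimate of the 1-norm, using only products
exact = np.linalg.norm(A, 1)   # exact 1-norm, for comparison only
print(est, exact)
```

The estimate is computed from a handful of matrix–vector products, so it never exceeds the true 1-norm; in practice (it implements the Higham–Tisseur block algorithm) it is usually exact or within a small factor of it.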
Continue reading “Using implicit matrices in Python”