I really need to get around to writing on here more often…
Anyway, the NVIDIA GPU Grant Scheme has re-opened after a little break over Christmas and is now giving away Titan V (Volta) GPUs to academics. I was lucky enough to bag one of these earlier this week, so I’m eagerly awaiting the delivery! All that’s required is a 2000-word explanation of why it would be helpful to your research, so it’s well worth your time.
The Titan V is a significant upgrade over NVIDIA’s previous offering: in addition to the usual CUDA cores, it has 640 “Tensor Cores”, dedicated units that perform matrix multiplications in half-precision arithmetic (though the input and output matrices can be single precision). There is a more detailed explanation of their use here. Performing these operations in half precision on dedicated hardware gives a dramatic speed increase, and matrix multiplication is at the heart of neural network training.
The downside to using half-precision arithmetic is the potential loss of accuracy in the result, depending on how rounding errors accumulate. Whilst this isn’t a huge problem for small matrix multiplications, using half precision for other matrix problems, such as inversion, will essentially multiply this error by the condition number of the matrix. At that point we can quickly run into problems! Nick Higham recently touched upon these topics in this blog post.
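To see why the precision of the accumulator matters, here is a toy NumPy sketch (pure software, not code that runs on the Tensor Cores themselves). It sums 10,000 half-precision copies of 0.1: accumulating in half precision stalls once the running total is so large that each increment rounds away to nothing, whereas a single-precision accumulator (the Tensor Core approach: half-precision inputs, wider accumulator) recovers a result close to the true value.

```python
import numpy as np

# 10,000 copies of 0.1 stored in half precision
# (each is ~0.09998 after rounding to float16).
x = np.full(10_000, 0.1, dtype=np.float16)

# Accumulate in half precision: once the total passes 256, the gap between
# adjacent float16 values is 0.25, so adding ~0.1 rounds to zero and the
# sum stops growing entirely.
s_half = np.float16(0)
for v in x:
    s_half = np.float16(s_half + v)

# Accumulate in single precision instead: the result is close to the
# true value of ~999.76.
s_single = np.float32(0)
for v in x:
    s_single += np.float32(v)

print(s_half, s_single)  # the half-precision total stalls at 256
```

The same effect is what makes a wide accumulator so valuable in matrix multiplication, where each output entry is a long sum of products.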
Over the last year, there has been significant interest in solving many small linear algebra problems simultaneously. Library vendors such as Intel (MKL) and NVIDIA, along with researchers at institutions including Manchester, Tennessee, and Sandia National Labs, have all been attempting to perform these calculations as efficiently as possible.
Over the weekend prior to the SIAM CSE17 meeting, many of those researchers (including myself) held a workshop to discuss strategies for batched BLAS (Basic Linear Algebra Subprograms) computations. Much of the discussion focused on standardising the function APIs and the memory layouts that users will interact with. The slides, and a number of research papers on the topic, are available at this page.
At the SIAM CSE17 meeting, our team at Manchester organised a minisymposium to discuss the highlights of our weekend with a wider audience. A brief summary of the four talks, along with a copy of their slides, is given below.
Continue reading “Batched BLAS Operations at SIAM CSE17” →
Recently I’ve been working with some of the statistics staff at the University of Manchester on sports analytics. Specifically, we’ve been looking for useful models in football data. People from this background normally use R to analyse data and fit models.
Normally I would use Python for this kind of task but, since there was already a considerable amount of code written in R, it made sense for me to work in R too. The people at Continuum Analytics (who make the brilliant Anaconda Python distribution) recently announced support for R in their package manager, conda. However, it wasn’t easy to find instructions for getting a fully working environment, so here is what I did.
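As a taster, here is a minimal sketch of the kind of setup involved (the channel and package names below reflect Anaconda’s R offering at the time and may change, so treat this as a starting point rather than gospel):

```shell
# Create a dedicated environment containing R plus a bundle of common
# packages (dplyr, ggplot2, and friends) from Anaconda's "r" channel.
conda create -n r_env -c r r-essentials

# Activate the environment, then launch R as usual.
source activate r_env
R
```

Keeping R in its own conda environment means it can be upgraded or removed without disturbing your Python installations.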
Continue reading “How to setup R using conda” →
I’ve finally finished! After years of reading papers, designing algorithms, hacking at code, and writing papers, my PhD is complete.
One of the most daunting thoughts I had as a PhD student was the idea of the viva: two experts sit in a room and pick apart the fine details of your work. They ask deep and technical questions, not limited merely to your thesis content, for a few hours (I’ve heard horror stories of 8 hours!) before sending you out of the room while they discuss your fate. Fifteen minutes of palpitations later you get your result and (whatever the outcome) head to the pub, either to celebrate or drown your sorrows as appropriate.
In reality, because I was well prepared, my viva was just a chat with some knowledgeable people who were very interested in my work. There were a few curveball questions, but nothing too serious, and the whole thing was done in an hour.
Here are some of my top tips for viva preparation.
The finished product!
Continue reading “How To Prepare For Your Viva” →
According to recent analysis by ComScore, the number of mobile users will surpass the number of desktop users this year. This means it is becoming vital that your website is smartphone-friendly.
I’ve recently redesigned my website to make it easy to use on desktops, tablets, and smartphones by using responsive web design (RWD): the website layout changes depending on your screen size. In this post I’m going to share a few of the tips I found helpful.
Responsive website for the Manchester University Maths Dept. Left: Desktop. Right: Mobile.
Continue reading “5 Tips For Starting Responsive Web Design” →
For the past three years I’ve been doing my PhD in applied maths at Manchester. Now that I’m almost ready to submit my thesis, I thought I’d write up some tips for those who are just beginning their PhD journey.
Continue reading “Ten Things I Learnt During My (PhD) Thesis” →
The Software Sustainability Institute, MathWorks, and the Software Carpentry group recently collaborated to run a course at Manchester University. The event was designed to teach best practices in software engineering to young researchers and focused mainly on three topics:
- the command line and shell scripting (mainly in Bash).
- version control, in particular Git.
- data manipulation, unit testing, and performance considerations in MATLAB.
In this post I’ll highlight what I took away from the course and give links to some useful information.
Continue reading “Software Carpentry – The Highlights” →
There are lots of new features in SciPy 0.13 (release notes) but for me the most important are the updated matrix functions in scipy.linalg and the one-norm estimator in scipy.sparse.linalg.
In some of my recent research (related to section 4 of this) I’ve needed to estimate the one-norm of a large (n^4 x n^2) dense matrix without computing each element. All we can assume is the ability to compute matrix-vector products (via some rather complicated function), meaning we know the entries of the matrix only implicitly.
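This is exactly the situation scipy.sparse.linalg.onenormest handles: it accepts a LinearOperator, so all you need to supply are products with the matrix and its transpose. A small sketch, using an explicit random matrix to stand in for the complicated implicit one:

```python
import numpy as np
from scipy.sparse.linalg import LinearOperator, onenormest

# Stand-in for the implicit matrix: in practice you would never form M
# explicitly, only provide the two product routines below.
rng = np.random.RandomState(42)
M = rng.randn(200, 200)

op = LinearOperator(
    shape=M.shape,
    matvec=lambda x: M @ x,     # products with the matrix...
    rmatvec=lambda x: M.T @ x,  # ...and with its transpose (the estimator needs both)
    dtype=M.dtype,
)

est = onenormest(op)            # one-norm estimate from matrix-vector products only
exact = np.linalg.norm(M, 1)    # exact one-norm, for comparison only
print(est, exact)
```

The estimate is a lower bound on the true one-norm and, in practice, is usually within a small factor of it (often exactly equal) while requiring only a handful of matrix-vector products.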
Continue reading “Using implicit matrices in Python” →