Overview of the NA-HPC Workshop at UCL, April 2014

The NA-HPC Network

The NA-HPC Network is one of the groups funded by an EPSRC Network Grant to support interaction and collaboration between numerical analysts, computer scientists, developers, and users of HPC systems in the UK.

Led by Nick Higham and David Silvester at Manchester, the network has organised a number of events over its three-year lifespan. This post contains my highlights of the recent meeting at UCL, details of which can be found here.

The Future of High Performance Computing

One major theme of the conference was identifying the key factors we need to address to make the best use of exascale machines in the coming years. Power consumption is a major concern: on average, current supercomputers need around 2,600 MW of power per exaflop of performance. Scaled up naively, an exaflop machine would therefore need more power than the Hoover Dam generates. By comparison, the current leader of the Top500 list needs just 526 MW per exaflop (source), with a peak performance of 34 petaflops.
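To put these figures in perspective, here is the back-of-the-envelope arithmetic (the 2,600 MW/exaflop and 526 MW/exaflop numbers are the ones quoted above; the rest is simple scaling):

```python
avg_mw_per_exaflop = 2600   # average efficiency of current supercomputers
best_mw_per_exaflop = 526   # efficiency of the current Top500 leader
peak_pflops = 34            # the leader's peak performance, in petaflops

# Actual power draw of the Top500 leader at 34 petaflops:
power_mw = best_mw_per_exaflop * (peak_pflops / 1000)
print(f"Top500 leader draws about {power_mw:.0f} MW")

# Naively scaling each to a full exaflop:
print(f"average machine at exascale: {avg_mw_per_exaflop} MW")
print(f"best machine at exascale:    {best_mw_per_exaflop} MW")
```

Even the most power-efficient machine today would need over 500 MW at exascale, which is why efficiency, not raw speed, dominates the discussion.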

One speaker at the conference argued that at exascale even algorithms with O(n²) complexity will be too expensive in power costs, and that we should aim to design new, cheaper algorithms.
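A rough illustration of why the exponent matters at this scale (the energy-per-operation figure below is a placeholder I have chosen for illustration, not a number from the talk):

```python
import math

JOULES_PER_FLOP = 1e-11   # hypothetical energy per operation (placeholder)

n = 10**9   # a billion unknowns, plausible at exascale
for name, ops in [("O(n^2)", n**2), ("O(n log n)", n * math.log2(n))]:
    energy_j = ops * JOULES_PER_FLOP
    print(f"{name:>10}: {ops:.2e} ops, ~{energy_j:.2e} J")
```

Whatever the true cost per operation, the gap between the two rows is a factor of n / log₂(n), around 30 million here, which is the speaker's point: at this scale the exponent is an energy bill.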

The other major challenge will be machine failures. Exascale machines will be so large that, even if the failure rate of individual components improves by a factor of 10, we can expect a component to fail roughly once per hour (source). The most common strategy for coping with this today is checkpoint restarting (wiki), but this is becoming too expensive.
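The checkpoint/restart idea fits in a few lines: periodically save the full state, and on failure roll back to the last checkpoint instead of starting from scratch. A minimal single-process sketch (real systems write checkpoints to a parallel file system, which is exactly the expensive part at exascale):

```python
import copy
import random

random.seed(0)

def step(state):
    """One iteration of some long-running computation."""
    state["i"] += 1
    state["x"] += 1.0 / state["i"]
    return state

def run(n_steps, checkpoint_every=10, failure_prob=0.05):
    state = {"i": 0, "x": 0.0}
    checkpoint = copy.deepcopy(state)
    restarts = 0
    while state["i"] < n_steps:
        if random.random() < failure_prob:      # simulated component failure
            state = copy.deepcopy(checkpoint)   # roll back; recent work is lost
            restarts += 1
            continue
        state = step(state)
        if state["i"] % checkpoint_every == 0:  # periodic checkpoint
            checkpoint = copy.deepcopy(state)
    return state, restarts

final, restarts = run(100)
print(f"finished at step {final['i']} after {restarts} rollbacks")
```

The wasted work is everything between the last checkpoint and each failure; as failures become hourly events, that waste (plus the cost of writing the checkpoints themselves) starts to dominate.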

Much smarter solutions may come from resilient programming, in both algorithm design and implementation. One example of this from the arXiv paper above is the use of asynchronous parallel algorithms, which are tolerant of latency between nodes.
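The idea behind asynchronous iterations is that each process keeps updating its own unknowns using whatever (possibly stale) values it last received from its neighbours, rather than waiting at a barrier every sweep. A deterministic single-process caricature, assuming a strictly diagonally dominant system so the Jacobi-style iteration converges despite the staleness:

```python
# Jacobi-style iteration for A x = b where each component update reads a
# snapshot of the other components that is refreshed only every few sweeps,
# mimicking stale data from slow neighbours.
A = [[4.0, 1.0, 0.0],
     [1.0, 4.0, 1.0],
     [0.0, 1.0, 4.0]]
b = [1.0, 2.0, 3.0]
n = len(b)

x = [0.0] * n
stale = list(x)             # last "communicated" neighbour values
for sweep in range(200):
    for i in range(n):
        s = sum(A[i][j] * stale[j] for j in range(n) if j != i)
        x[i] = (b[i] - s) / A[i][i]
    if sweep % 5 == 0:      # communication only every 5th sweep
        stale = list(x)

residual = max(abs(sum(A[i][j] * x[j] for j in range(n)) - b[i])
               for i in range(n))
print(f"max residual: {residual:.2e}")
```

The iteration still converges even though most sweeps use out-of-date data; it just takes more sweeps. That trade, extra iterations for no synchronisation, is what makes such methods attractive on machines where nodes stall or fail.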

Programming Languages for Parallelism

The second major focus of the event was on programming languages. In particular there were talks on Fortran, C++, Python, MATLAB, and Julia. For me the two most interesting projects mentioned were OCCA (C++) and FEniCS (Python).

OCCA is a C++ library that can compile sections of code to OpenMP, OpenCL, or CUDA simply by changing a compilation variable. This unifies the three backends, making programming much easier while delivering almost all of the performance gained by hand-tweaking your code for each one. I was really impressed by how this enables one to write code just once and try it under a range of parallel paradigms quickly and easily. You can read more about it here.

FEniCS is essentially a domain-specific language for differential equations, shielding the user from the complicated computational kernels at the core of their problem. It builds intuitive wrappers on top of a number of high-performance libraries and seems to be gaining a lot of traction among computational scientists. Their website is fenicsproject.org.
