The Large Hadron Collider smashes protons into each other at the highest energies humanity has ever engineered. Protons are a very convenient type of particle for our high-energy beams: they are plentiful, and they don’t lose (much) energy the way electrons do when accelerated around the LHC ring. But they are not fundamental particles: each proton is a tightly bound collection of smaller particles (quarks and gluons, collectively known as partons), and to make the most of LHC experiments we need to understand both what we do and don’t know about this internal structure and the effects it induces in our measurements. We encode this knowledge in so-called parton density functions, or PDFs.
The LHAPDF C++ library is the LHC’s standard system for supplying PDF data to both experiments and theory calculations. Several years ago it was rewritten from scratch to make it more flexible and maintainable, but in the upcoming high-luminosity era of the LHC, where vast numbers of events will need to be processed, aspects of this design need updating for both speed and precision.
In this project, you will work on profiling and re-designing aspects of LHAPDF to make it much faster, adding caching, vectorisation, and GPU compatibility. This work has the potential to save huge amounts of CPU time for LHC experiments, and to maximise the value that can be extracted from physics data in the next LHC runs.
The following tasks are envisaged:
A new LHAPDF prototype, designed to improve performance via caching and exploratory accelerator technologies.
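To make the caching idea concrete, here is a minimal, self-contained sketch of the kind of lookup that a PDF interpolator performs and where a cache could help. This is illustrative only: the grid values are synthetic, and the names (`ToyGrid`, `xf`, `_bracket`) are hypothetical, not LHAPDF’s actual API. PDF sets tabulate x·f(x, Q²) on knots in log x and log Q², so each evaluation first binary-searches for the bracketing knot interval and then interpolates; caching the knot search pays off when many evaluations hit the same (x, Q²) region, as happens in real event loops.

```python
import bisect
import functools
import math

class ToyGrid:
    """Toy 2D grid interpolator with a cached knot lookup.

    Illustrative of an LHAPDF-style xf(x, Q^2) evaluation; values and
    interpolation order (bilinear here, vs. cubic in real PDF grids)
    are simplified assumptions.
    """

    def __init__(self, logx_knots, logq2_knots, values):
        self.logx = logx_knots    # sorted knots in log(x)
        self.logq2 = logq2_knots  # sorted knots in log(Q^2)
        self.values = values      # values[i][j] at (logx[i], logq2[j])

    @functools.lru_cache(maxsize=1024)
    def _bracket(self, lx, lq2):
        # Binary-search the knot intervals; repeated evaluations at the
        # same (x, Q^2) point hit this cache instead of re-searching.
        i = min(bisect.bisect_right(self.logx, lx), len(self.logx) - 1) - 1
        j = min(bisect.bisect_right(self.logq2, lq2), len(self.logq2) - 1) - 1
        return max(i, 0), max(j, 0)

    def xf(self, x, q2):
        lx, lq2 = math.log(x), math.log(q2)
        i, j = self._bracket(lx, lq2)
        # Bilinear interpolation in (log x, log Q^2).
        tx = (lx - self.logx[i]) / (self.logx[i + 1] - self.logx[i])
        tq = (lq2 - self.logq2[j]) / (self.logq2[j + 1] - self.logq2[j])
        v = self.values
        return ((1 - tx) * (1 - tq) * v[i][j] + tx * (1 - tq) * v[i + 1][j]
                + (1 - tx) * tq * v[i][j + 1] + tx * tq * v[i + 1][j + 1])
```

In a real redesign the cached quantity might instead be the last-used subgrid or the interpolation coefficients themselves, and the inner loop would be vectorised over many (x, Q²) query points at once; this sketch only shows the memoisation pattern.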