Community White Paper (CWP) Workshop, San Diego, 23-26 January 2017

Realizing the physics programs of the planned and/or upgraded HEP experiments over the next 10 years will require the HEP community to address a number of challenges in the area of software and computing. It is expected that the computing models will need to evolve and that a significant “software upgrade” will also be needed. An initiative has been launched to identify and prioritize the software R&D investments that will be required to meet these challenges. The aim is to produce a Community White Paper (CWP) describing the community strategy, including a roadmap for software and computing R&D in HEP for the 2020s.

The HSF is engaging the HEP community to produce the CWP via a “community planning process”, which has been initiated in the context of the HL-LHC project but which also aims for broader participation that so far includes the neutrino programme, Belle II, and ILC. A workshop took place on 23-26 January at the San Diego Supercomputer Center (SDSC) to launch the process of delivering the CWP. There were 118 registered participants representing a wide range of labs and universities based in the USA and in Europe. Whilst most delegates work on the LHC programme, it was good to see healthy participation from colleagues working at FNAL-IF, ILC, IHEP-Beijing, and JLab. In addition, a number of computer scientists attended, including some from industrial partners such as NVIDIA, Intel, IBM, Google and Altair. The full agenda for the meeting can be found here.

Group photo of workshop participants on the steps of the San Diego Supercomputer Center

The first day of the meeting was devoted to plenary talks that summarised the challenges from the perspective of representatives of the different experimental programmes and several funding agencies. In his welcome address, Peter Elmer (Princeton) reminded delegates of the plans and timeline for upgrading the LHC machine and experiments with the aim of running at 5-7 times the nominal luminosity (HL-LHC). The HL-LHC will integrate 100 times the current data volume, with significantly increased event complexity (pileup) and detector complexity. The resulting computing needs will outpace the expected improvements in computer performance (Moore’s Law) by factors of between 3 and 30. Technology change will also make it challenging to exploit Moore’s Law without software evolution. In addition, there are software sustainability challenges associated with the further development and maintenance of software that is already 15-20 years old. The CWP will identify and prioritise the software research and development investments required:

  • to achieve improvements in software efficiency, scalability and performance and to make use of the advances in CPU, storage and network technologies
  • to enable new approaches to computing and software that could radically extend the physics reach of the detectors
  • to ensure the long term sustainability of the software through the lifetime of the HL-LHC

Mark Neubauer (Illinois) stressed that the primary motivation for each of the LHC upgrades is to maximise physics performance and emphasised the need to address the full mix of physics activities in the planning (higher luminosity, energy increase, etc.). He also reiterated the need to develop synergies between LHC experiments as much as possible. Ian Bird (CERN) provided the latest estimates of resource needs for the coming years, concluding that Run 2 and Run 3 can probably be managed with an evolutionary approach, but that HL-LHC will require more revolutionary thinking. Computing requirements will be at least a factor of 10 higher than what can realistically be expected from technology evolution assuming a constant budget. There are many different challenges that need to be addressed: technical and sociological as well as funding-related. A wide range of topics must be studied, including improving software performance, re-thinking computing models to integrate all available resources (HPC, cloud, opportunistic, traditional), exploring the boundary conditions for funding the national infrastructures, etc. The LHCC (LHC scientific review committee), the SPC (CERN Scientific Policy Committee), and the RRB (the funding agencies’ Resources Review Board) all want to see progress towards understanding the costs of computing for HL-LHC. The hope is that the CWP will provide essential input for the documents that have to be submitted to the LHCC, namely:

  • in 2017, a Conceptual Design Report (CDR) that describes a roadmap for producing a Technical Design Report (TDR), and
  • in 2020, the TDR for HL-LHC software and computing.

The challenges for the Fermilab Neutrino and Muon Programs were described by Rob Kutschke (FNAL). These programs have successfully used common infrastructure and tools, and he fully expects that future challenges will be met by adding value through collaboration, e.g. by increasing the quality and effectiveness of algorithm code, by providing clean integration between products, and by supporting new initiatives that build on existing capabilities, thereby reducing the total effort across experiments. Frank Gaede (DESY) also explained how the ILC community has focused on developing generic software tools that can be used by any HEP experiment, in particular through its contributions to the EU AIDA projects. He showed examples in several software domains, including event data models (EDM), geometry (DD4hep), and the development of advanced tracking tools and particle-flow algorithms. Wenjing Wu (IHEP) presented a survey of software and computing activities in support of the HEP experimental programme in China. A good support infrastructure exists for the development of software frameworks and tools, and the Chinese community is keen to collaborate with other HEP teams on the parallelisation and optimisation of frameworks and algorithms. Plans are also well advanced for building the next-generation circular electron-positron collider (a Higgs/Z factory), which is currently in the R&D and engineering design phase (2016-2020). Construction is planned for 2021-2027, with data-taking starting in 2028.

There were several presentations by representatives of funding agencies, who recognised the need for investments to exploit next-generation hardware and computing models. There is a strong desire to strengthen global cooperation amongst laboratories and universities to address the challenges faced and to provide the requisite training. In the US this has led to the setting up of the HEP Center for Computational Excellence ([HEP-CCE](http://hepfce.org/)), and in Europe the HSF is now seen as a well-recognised body and a partner to work with. In several countries it was mentioned that there is an increased chance of funding if HEP solutions can be used by other scientific disciplines. Understanding the resource requirements for the operation and analysis of each experiment is considered crucial for developing an optimal plan up to 2026 and beyond.

The views of CERN and Fermilab on the expansion of compute capacity were given by Helge Meinhard and Oliver Gutsche, respectively. At CERN, a study has been made of setting up a new data centre on the Prévessin site based on the Green IT Cube recently built at GSI, Darmstadt. A positive recommendation has now been made on the technical feasibility, although there are still concerns about the networking costs. The long-term strategy is to keep the available data centres fully busy, but also to exploit cloud and commercial resources as required within a specific cost envelope. At FNAL, commercial clouds are seen as offering increased value for decreased cost and the flexibility to provide capacity as and when it is really needed. The HEPCloud project is envisioned as a portal to an ecosystem of diverse computing resources, both commercial and academic, and a pilot project is underway to explore its feasibility and capabilities, with the goal of moving into production during FY2018.

The plenary session finished with several presentations on specific topics. These included a nice update on technology tracking by Helge Meinhard and an overview of managing copyright and licensing issues in open-source projects by Aaron Sauers (FNAL). Dan Katz (NCSA) described the principles motivating the provision of tools for managing the citation of software, in order to put it on the same footing as other scientific works such as papers and books. Anyone interested in contributing to the ongoing work is invited to join the Software Citation Implementation Group [here](https://www.force11.org/group/software-citation-working-group).

The CWP Working Groups

The following two days were devoted to parallel meetings of the various Working Groups (WGs). Topics covered included simulation, data analysis, event processing frameworks, workflow and dataflow management, triggering, machine learning, data management, visualisation, support for software development, and data preservation. The groups were charged with identifying the challenges in each domain and preparing a plan for delivering their contributions to the CWP document. The last day of the workshop was a plenary session in which the conveners of each WG gave summary talks. All WG documents are public, and most WGs have already put plans in place for allocating tasks and holding follow-up meetings. In addition, an open HEP software community workshop is being organised by the HSF to examine the analysis ecosystem, with the aim of building consensus among its developers, users, projects and their supporters. Participation in all of these activities is open to anyone who is interested. Each WG has a mailing list, and you are encouraged to subscribe to those that interest you. Please see the information linked from the workshop Indico site for full details.

Next steps

The work to deliver the CWP should happen over the next five months, after which it is planned to hold a final workshop in early summer. Current thinking is to hold this workshop during the last week of June at a location close to, but not at, CERN. Each WG is requested to deliver its contribution at that time. A number of community events will take place during the interim period and will give WGs opportunities to co-locate their meetings so as to meet face-to-face.

Finally, please be sure to register for the Community White Paper Google group and the general HSF forum if you have not already done so.