12 May 2024

ISC 2024

ISC is the HPC community’s largest European conference. Held each year in early summer in Germany, the event follows in the footsteps of the past few years by once again taking place in Hamburg. With approximately 5,000 attendees, a wide range of topics is explored during the week alongside a busy schedule of events.

The ExCALIBUR programme is participating in numerous events at ISC, and we will have an ExCALIBUR project poster displayed throughout the conference. ExCALIBUR partners EPCC and STFC will have booths throughout the week, with ExCALIBUR information and merchandise available at both. So please drop by and say hello!

ExCALIBUR events at ISC

When | What
Monday – Wednesday | Poster: Exascale Computing Algorithms & Infrastructures Benefiting UK Research Program (details)
Monday – Wednesday | Poster: Establishing the Accessible Computational Regimes for Biomolecular Simulations at Exascale (details)
Monday – Wednesday | Poster: targetDART (details)
Monday – Wednesday | Poster: Towards Exascale Ab Initio Materials Modelling
Tuesday, 2:15 PM to 3:15 PM | BoF: Democratizing AI Accelerators for HPC Applications: Challenges, Success, and Support (details)
Wednesday, 10:05 AM to 11:05 AM | BoF: HPC Next: The RISC-V Ecosystem (details)
Wednesday, 10:05 AM to 11:05 AM | BoF: Developing a Sustainable Future for HPC and RSE Skills: Training Pathways and Structures (details)
Wednesday, 2:30 PM to 3:30 PM | BoF: HPC and You v4.0 – a Student BoF on Enjoying a Career and Community in HPC (details)
Thursday, 9:00 AM to 1:00 PM | Workshop: Fourth International Workshop on RISC-V for HPC (details)
Thursday, 11:30 AM to 11:50 AM | Paper: Performance characterisation of the 64-core SG2042 RISC-V CPU for HPC (details)
Thursday, 2:00 PM to 6:00 PM | Workshop: Third Combined Workshop on Interactive and Urgent Supercomputing (details)

Description of events

The RISC-V H&ES testbed is leading the organisation of the Fourth International Workshop on RISC-V for HPC, which looks to bring together the RISC-V and HPC communities. RISC-V is an open, community-driven Instruction Set Architecture (ISA) that has seen phenomenal growth since its inception a little over a decade ago. However, it has yet to gain full acceptance in HPC, but with a range of new high-performance RISC-V hardware released and promised for 2024, this has the potential to change. The workshop is therefore an opportunity to share progress with the community, to explore where RISC-V can benefit HPC, and to enable the HPC community to help shape the RISC-V standard. Furthermore, there will also be a BoF entitled HPC Next: The RISC-V Ecosystem that the RISC-V testbed is involved in, enabling a discussion with the HPC community around this topic. These two events were extremely successful at ISC23, so they promise to be busy and popular this year too!

An area of great interest for the HPC community is the use of novel accelerators for HPC workloads. Several companies have developed AI/ML-focussed technologies and are now opening their hardware up to accelerate more general computing. The Democratizing AI Accelerators for HPC Applications: Challenges, Success, and Support BoF, which is being led by the CS-2 H&ES testbed, explores this topic. Covering hardware such as the Cerebras CS-2, GroqRack, SambaNova DataScale, Graphcore IPUs, and Habana Gaudi, this BoF intends to assemble developers who are experienced in, or interested in, using AI accelerators for HPC applications, and to foster discussion and share challenges, experiences, success stories, and insights gained from using these accelerators.

Training the next generation of Research Software Engineers (RSEs) is crucial if we are going to be able to effectively leverage future exascale supercomputers. ExCALIBUR has had a focus on RSE training and skills development, and the UNIVERSE-HPC project is organising a BoF on Developing a Sustainable Future for HPC and RSE Skills: Training Pathways and Structures. This will draw on much of the experience and expertise developed and assembled by UNIVERSE-HPC, sharing it with the HPC community and gathering feedback.

The xDSL project is involved in organising the Third Combined Workshop on Interactive and Urgent High-Performance Computing, which will bring together the communities interested in interactive and urgent HPC workloads. Building on very successful sessions at previous ISC and SC conferences, this is an important topic because it challenges the HPC community to run non-traditional workloads on our HPC machines. By bringing in technologies and techniques such as in-situ approaches, where simulations can be steered during execution, the potential flexibility and benefits for end users are magnified. However, a major challenge is how to deliver this most effectively, which is the topic of the workshop.

The ExaBioSim project will have a project poster entitled Establishing the Accessible Computational Regimes for Biomolecular Simulations at Exascale, describing how the project is addressing biomolecular simulation at the exascale. The poster will describe how they are creating a selection of showcase simulations that are representative of the community’s HPC needs, and then quantifying the performance, scaling and energy consumption of common simulation software. The modelling of DNA and the integration of cryo-EM structures into existing simulations are especially challenging and potentially unlocked by exascale computing. Using these insights to explore the limits of current parallel computing, and to see whether those limits can be pushed further using multiscale modelling and code coupling, therefore has significant potential benefits.

Working closely with the ExCALIBUR tasking project, a partner project has had a research poster accepted on targetDART. This research is developing a task-based approach for highly scalable simulation software which mitigates load imbalance on heterogeneous systems through dynamic, adaptive and reactive distribution of computational load across compute resources. As our systems become more diverse, and HPC spills out into other domains, being able to run workloads effectively across the computing continuum is key, and this research contributes towards that goal.