Massively Parallel Particle Hydrodynamics for Engineering and Astrophysics

Smoothed particle hydrodynamics (SPH), and Lagrangian methods more generally, are a powerful approach to fluid-flow problems. In this scheme the fluid is represented by a large number of particles, each tracking a Lagrangian fluid element and moving with the flow. Because the scheme requires no predefined grid, it is well suited to flows with moving boundaries, particularly free-surface flows, and to problems involving the mixing of different fluids, physically active elements or a large dynamic range. Massively parallel simulations with a billion to hundreds of billions of particles offer the potential to revolutionise our understanding of the Universe and to enable engineering applications of unprecedented scale, ranging from the end-to-end simulation of transients (such as a bird-strike event inside a jet engine) to the simulation of tsunami waves over-running a series of defensive walls. The group has started its research using two recent codes that highlight the key issues:
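
To make the particle picture concrete, the sketch below (plain C++ with hypothetical names, not taken from either code) shows the basic SPH density estimate: each particle's density is a kernel-weighted sum over its neighbours, here using the standard cubic-spline kernel. Production codes replace the O(N^2) neighbour loop with cell lists or trees.

    #include <cmath>
    #include <vector>

    // Hypothetical particle: position, mass, smoothing length, density.
    struct Particle {
        double x[3];
        double m;
        double h;
        double rho;
    };

    // Cubic-spline kernel W(r, h) in 3D; kPi gives the 3D normalisation.
    const double kPi = 3.14159265358979323846;
    double kernel_w(double r, double h) {
        const double q = r / h;
        const double sigma = 1.0 / (kPi * h * h * h);
        if (q < 1.0)      return sigma * (1.0 - 1.5 * q * q + 0.75 * q * q * q);
        else if (q < 2.0) return sigma * 0.25 * std::pow(2.0 - q, 3);
        return 0.0;
    }

    // Brute-force density summation: rho_i = sum_j m_j W(|x_i - x_j|, h_i).
    void compute_density(std::vector<Particle>& parts) {
        for (auto& pi : parts) {
            pi.rho = 0.0;
            for (const auto& pj : parts) {
                const double dx = pi.x[0] - pj.x[0];
                const double dy = pi.x[1] - pj.x[1];
                const double dz = pi.x[2] - pj.x[2];
                const double r = std::sqrt(dx * dx + dy * dy + dz * dz);
                pi.rho += pj.m * kernel_w(r, pi.h);
            }
        }
    }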

SWIFT – (SPH with Interdependent Fine-grained Tasking) implements a cutting-edge approach to task-based parallelism. Breaking the problem into a series of inter-dependent tasks gives great flexibility in scheduling and allows communication to be overlapped entirely with computation. The code uses a timestep hierarchy to focus computational effort where the problem most needs it. Future directions for SWIFT include: Fine-Grained Task Parallelism, Within-Task Parallelism, Adaptive Time-stepping, Asynchronous Communication and Adaptive Domain methods.
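
The snippet below is a minimal illustration of the task-based idea (not SWIFT's actual API): each task carries a count of unresolved dependencies, and finishing a task atomically unlocks its dependents, so any free core can pick up whatever becomes ready, in whatever order it becomes ready.

    #include <atomic>
    #include <functional>
    #include <mutex>
    #include <queue>
    #include <thread>
    #include <vector>

    // A task becomes runnable once every task it depends on has completed.
    // The dependency graph is assumed to be acyclic.
    struct Task {
        std::function<void()> work;
        std::atomic<int> wait{0};      // number of unresolved dependencies
        std::vector<Task*> unlocks;    // tasks that depend on this one
    };

    struct Scheduler {
        std::mutex mtx;
        std::queue<Task*> ready;
        std::atomic<int> remaining{0};

        void enqueue(Task* t) {
            std::lock_guard<std::mutex> lock(mtx);
            ready.push(t);
        }

        // Worker loop: run ready tasks, then unlock dependents whose count hits zero.
        void worker() {
            while (remaining.load() > 0) {
                Task* t = nullptr;
                {
                    std::lock_guard<std::mutex> lock(mtx);
                    if (!ready.empty()) { t = ready.front(); ready.pop(); }
                }
                if (!t) { std::this_thread::yield(); continue; }
                t->work();
                for (Task* next : t->unlocks)
                    if (next->wait.fetch_sub(1) == 1) enqueue(next);
                remaining.fetch_sub(1);
            }
        }

        void run(std::vector<Task>& tasks, int nthreads) {
            remaining = static_cast<int>(tasks.size());
            for (auto& t : tasks)
                if (t.wait.load() == 0) enqueue(&t);
            std::vector<std::thread> pool;
            for (int i = 0; i < nthreads; ++i) pool.emplace_back(&Scheduler::worker, this);
            for (auto& th : pool) th.join();
        }
    };

In an SPH setting one can imagine, for example, a density task over a pair of cells unlocking the corresponding force task, which in turn unlocks the kick task for those particles; message send and receive tasks slot into the same graph, which is how communication can be hidden behind computation.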

DualSPHysics – (https://dual.sphysics.org) is a state-of-the-art GPU-enabled SPH code. Its focus is on processing large groups of identical particles quickly, exploiting the extreme parallel throughput of modern GPU accelerators to execute the SPH operations, which gives the code exceptional single-device performance. The challenge is to connect multiple GPUs efficiently across large numbers of inter-connected compute nodes.
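
The fragment below illustrates, in plain C++ rather than DualSPHysics' actual data structures, why grouping identical particles pays off on GPUs and wide vector units: storing each property in its own contiguous array (structure-of-arrays) means consecutive threads or vector lanes load consecutive memory, which is what coalesced GPU access and auto-vectorisation require.

    #include <cstddef>
    #include <vector>

    // Array-of-structures: intuitive, but neighbouring particles' positions
    // are strided in memory, which defeats coalesced loads on a GPU.
    struct ParticleAoS { double x, y, z, vx, vy, vz, m, rho; };

    // Structure-of-arrays: each property is contiguous, so particles i and
    // i+1 sit next to each other and a warp (or SIMD lane group) reads one
    // contiguous block.
    struct ParticlesSoA {
        std::vector<double> x, y, z, vx, vy, vz, m, rho;

        explicit ParticlesSoA(std::size_t n)
            : x(n), y(n), z(n), vx(n), vy(n), vz(n), m(n), rho(n) {}
    };

    // A drift step over the SoA layout: a simple, branch-free loop over
    // identical particles that a compiler can vectorise and a GPU can map
    // one-thread-per-particle.
    void drift(ParticlesSoA& p, double dt) {
        const std::size_t n = p.x.size();
        for (std::size_t i = 0; i < n; ++i) {
            p.x[i] += p.vx[i] * dt;
            p.y[i] += p.vy[i] * dt;
            p.z[i] += p.vz[i] * dt;
        }
    }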

The working group is designing an optimal approach to exascale SPH simulation. It will draw upon the experience of the teams developing SWIFT and DualSPHysics to understand the limitations of the two codes and to identify how their best features can be combined in the design of an exascale SPH code. The challenges being addressed are:

  • Optimal algorithms for Exascale performance – Addressing the best approaches to adaptive time-stepping, in which different regions advance out of lock-step, and to adaptive domain decomposition. The first allows different spatial regions to be integrated forward in time optimally (see the timestep-binning sketch after this list); the second allows regions to be distributed optimally over the hardware.
  • Modularisation and Separation of Concerns – Future codes need to be flexible and modular, so that a clean separation can be achieved between integration routines, task scheduling and physics modules (see the interface sketch after this list). This will make the code future-proof and easy to adapt to new science-domain requirements and new computing hardware.
  • CPU/GPU performance optimisation – Next-generation hardware will require specific techniques to be developed to advance particles optimally in the SPH scheme. The working group is building on the programming expertise gained in DualSPHysics to allow efficient GPU use across multiple nodes.
  • Communication performance optimisation – Separate computational regions need to exchange information at their boundaries. This can be done asynchronously, so that the latency of communication does not stall computation (see the halo-exchange sketch after this list). While this has been demonstrated on current systems, the message volume at exascale will overwhelm current communication subsystems, and a new solution is urgently needed.
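
On adaptive time-stepping, a common scheme (sketched below with hypothetical names; the actual bin logic in SWIFT differs in detail) rounds each particle's desired timestep down to a power-of-two fraction of the longest step. On a given base step only particles whose bin is due are "active", so quiescent regions are touched rarely while rapidly evolving regions are integrated often.

    #include <cstddef>
    #include <cstdint>
    #include <vector>

    // Round a desired timestep down to dt_max / 2^bin, so steps nest exactly.
    int timestep_bin(double dt_wanted, double dt_max, int max_bin) {
        int bin = 0;
        double dt = dt_max;
        while (dt > dt_wanted && bin < max_bin) { dt *= 0.5; ++bin; }
        return bin;
    }

    // A particle in bin b takes steps of dt_max / 2^b; counting base steps in
    // units of the smallest step dt_max / 2^max_bin, it is active whenever the
    // step index is a multiple of 2^(max_bin - b).
    bool is_active(int bin, std::int64_t step, int max_bin) {
        const std::int64_t interval = std::int64_t{1} << (max_bin - bin);
        return step % interval == 0;
    }

    // Integrate only the active particles on this base step.
    void kick_active(std::vector<double>& v, const std::vector<double>& a,
                     const std::vector<int>& bin, std::int64_t step,
                     double dt_max, int max_bin) {
        for (std::size_t i = 0; i < v.size(); ++i) {
            if (!is_active(bin[i], step, max_bin)) continue;
            const double dt = dt_max / double(std::int64_t{1} << bin[i]);
            v[i] += a[i] * dt;   // simple kick; real codes use kick-drift-kick
        }
    }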
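
On modularisation, one way to separate concerns (a sketch only, with invented interface names) is to hide each physics module behind a narrow abstract interface, so that the time integrator and task scheduler never need to know whether they are driving SPH hydrodynamics, gravity or a new engineering model.

    #include <algorithm>
    #include <memory>
    #include <vector>

    struct ParticleSet { /* positions, velocities, masses, ... */ };

    // Hypothetical interface: the integrator calls these hooks without
    // knowing which physics sits behind them.
    class PhysicsModule {
    public:
        virtual ~PhysicsModule() = default;
        virtual void compute_rates(ParticleSet& parts) = 0;            // e.g. SPH forces
        virtual double max_timestep(const ParticleSet& parts) const = 0;
    };

    class SPHHydro : public PhysicsModule {
    public:
        void compute_rates(ParticleSet&) override { /* density + force loops */ }
        double max_timestep(const ParticleSet&) const override { return 1e-3; }
    };

    // The integrator is written once, against the interface.
    void advance(ParticleSet& parts,
                 const std::vector<std::unique_ptr<PhysicsModule>>& modules) {
        double dt = 1e30;
        for (const auto& m : modules) dt = std::min(dt, m->max_timestep(parts));
        for (const auto& m : modules) m->compute_rates(parts);
        // ...kick and drift the particles with dt...
    }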
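
On asynchronous communication, the standard pattern (sketched below with MPI non-blocking calls; the buffer and function names are illustrative) posts receives and sends for boundary data up front, computes on the interior of the local domain while messages are in flight, and only waits before the boundary work that needs the remote data.

    #include <mpi.h>
    #include <cstddef>
    #include <vector>

    // Exchange boundary ("halo") data with neighbouring ranks while the
    // interior of the local domain is being computed.
    void halo_exchange_and_compute(const std::vector<int>& neighbours,
                                   std::vector<std::vector<double>>& send_buf,
                                   std::vector<std::vector<double>>& recv_buf,
                                   MPI_Comm comm) {
        std::vector<MPI_Request> reqs;
        reqs.reserve(2 * neighbours.size());

        // 1. Post non-blocking receives and sends for every neighbouring domain.
        for (std::size_t n = 0; n < neighbours.size(); ++n) {
            MPI_Request r;
            MPI_Irecv(recv_buf[n].data(), static_cast<int>(recv_buf[n].size()),
                      MPI_DOUBLE, neighbours[n], 0, comm, &r);
            reqs.push_back(r);
            MPI_Isend(send_buf[n].data(), static_cast<int>(send_buf[n].size()),
                      MPI_DOUBLE, neighbours[n], 0, comm, &r);
            reqs.push_back(r);
        }

        // 2. Do all work that needs no remote data while messages are in flight.
        // compute_interior();   // placeholder for the local SPH loops

        // 3. Only now wait for the halos, then finish the boundary interactions.
        MPI_Waitall(static_cast<int>(reqs.size()), reqs.data(), MPI_STATUSES_IGNORE);
        // compute_boundary();   // placeholder for interactions with received halos
    }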