Significant progress has been made in recent years using petascale computing, and computational fluid dynamics (CFD) is now a critical complement to experiment and theory. Turbulent flow simulations at the exascale will require a substantial reformulation of existing flow solvers, the implementation of new physics, and the development of more nuanced problem formulations. The exascale nonetheless offers the potential to deliver major advances in our quest for a greener future, advances that rely in large part on a deeper understanding of the overarching subject of turbulence.
Exploiting the full potential of the current and next generation of supercomputers poses many challenges for the turbulence community: the sustainability of the solvers, uncertainty over future architectures, the mesh complexity and memory footprint demanded by high-value, complex turbulent-flow problems of near-term practical importance, a constant need for new turbulence models, algorithms and numerical methods, visualisation issues associated with data volume, velocity and veracity, an increasing computation-to-communication ratio, and the likelihood of I/O bottlenecks. While the Navier-Stokes equations constitute a broadly accepted mathematical model of the motion and structure of a turbulent flow, their solutions can be extremely challenging to obtain because of the chaotic and inherently multi-scale nature of turbulence. The smallest scales influence the largest, and small changes to boundary conditions, initial conditions or grid resolution, for example, can have a dramatic impact on the solution, posing significant challenges for Verification, Validation and Uncertainty Quantification (VVUQ) and for creating “actionable”, predictive simulation capability.
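For reference, and under the common simplifying assumptions of constant density and Newtonian viscosity (the incompressible form; compressible solvers such as the one used in the airfoil study below solve a more general set), the governing equations read

$$
\nabla \cdot \mathbf{u} = 0, \qquad
\frac{\partial \mathbf{u}}{\partial t} + (\mathbf{u} \cdot \nabla)\,\mathbf{u}
= -\frac{1}{\rho}\nabla p + \nu \nabla^2 \mathbf{u},
$$

where $\mathbf{u}$ is the velocity field, $p$ the pressure, $\rho$ the (constant) density and $\nu$ the kinematic viscosity. The nonlinear convective term $(\mathbf{u} \cdot \nabla)\,\mathbf{u}$ couples the scales of motion and is the source of the chaotic, multi-scale behaviour described above.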
Traditional approaches to software development in CFD, in which static, hand-written code (usually in C/C++ or Fortran) performs the numerical discretisation and solution of the governing equations, are no longer sustainable given the rapid evolution of High Performance Computing (HPC) systems and architectures. For example, explicitly inserting MPI calls into a code enables execution on multicore Central Processing Unit (CPU) hardware, but the code would then need to be rewritten to run on platforms based on Graphics Processing Units (GPUs), using language extensions such as the Compute Unified Device Architecture (CUDA), the Open Computing Language (OpenCL) and/or new libraries. The only practically viable way to address these issues is to develop appropriate application-oriented, high-level programming abstractions in which the developer specifies what is to be computed and which numerical methods should be used, but not how the computation is implemented and executed on the hardware. Applications should ideally be written in an environment that can target a variety of parallel hardware without significant manual modification.
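As a purely illustrative sketch of this separation of concerns (all names here, such as `par_loop` and `Field`, are hypothetical and not taken from any particular library), the application code below specifies a single stencil kernel once, while the `par_loop` dispatcher owns the execution strategy; a serial backend is shown, but the same kernel could in principle be handed to an OpenMP, MPI or GPU backend without modification:

```cpp
// Minimal, hypothetical sketch of the "describe what, not how" idea.
// The application writes one stencil kernel; par_loop owns how it is executed.
#include <cstdio>
#include <vector>

struct Field {                      // simple 2-D array wrapper
  int nx, ny;
  std::vector<double> data;
  Field(int nx_, int ny_) : nx(nx_), ny(ny_), data(static_cast<size_t>(nx_) * ny_, 0.0) {}
  double &operator()(int i, int j) { return data[static_cast<size_t>(j) * nx + i]; }
  double operator()(int i, int j) const { return data[static_cast<size_t>(j) * nx + i]; }
};

// Backend-agnostic loop over an iteration range: the kernel sees only (i, j);
// the dispatcher decides ordering, parallelisation and data movement.
template <typename Kernel>
void par_loop(int imin, int imax, int jmin, int jmax, Kernel kernel) {
  for (int j = jmin; j < jmax; ++j)       // serial backend, for illustration only
    for (int i = imin; i < imax; ++i)
      kernel(i, j);
}

int main() {
  const int nx = 64, ny = 64;
  Field u(nx, ny), u_new(nx, ny);
  u(nx / 2, ny / 2) = 1.0;                // point source as an initial condition

  // "What": a single Jacobi-style 5-point stencil update, written once.
  par_loop(1, nx - 1, 1, ny - 1, [&](int i, int j) {
    u_new(i, j) = 0.25 * (u(i + 1, j) + u(i - 1, j) + u(i, j + 1) + u(i, j - 1));
  });

  std::printf("value next to the source after one sweep: %g\n", u_new(nx / 2 + 1, ny / 2));
  return 0;
}
```

The key design point is that nothing in the kernel or the driver refers to MPI ranks, threads or devices: retargeting the code to new hardware becomes a change to the dispatcher (or, in practice, to the library and its code-translation tools), not to the science code.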
Some of the flow solvers of the UK Turbulence Consortium (a group of over 60 academics and researchers from across 25 UK institutions, committed to undertaking high-quality, world-leading turbulence simulations on supercomputers) are currently being re-engineered following this approach, using the OPS library. OPS is a programming abstraction for writing multi-block structured mesh algorithms, together with the corresponding software library and code-translation tools that enable automatic parallelisation of the high-level code. To demonstrate the feasibility of the proposed approach based upon OPS, two proof-of-concept studies will be carried out (a schematic example of the OPS-style programming model is sketched after the list below):
- A high-fidelity simulation of a full-scale offshore wind farm during operation, within the Xcompact3d framework,
- A high-fidelity simulation of a NACA0012 airfoil at high speed with shock capturing, within the OpenSBLI framework.
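To give a flavour of what the re-engineered solvers look like, the fragment below is a schematic, incomplete sketch in the spirit of the OPS tutorial examples; the block, dataset and stencil declarations are assumed to have been made earlier with the OPS declaration routines, and the exact headers and call signatures should be checked against the OPS documentation for the version in use:

```cpp
// Schematic OPS-style fragment (indicative only, not a complete program).
// The elemental kernel is written once, per grid point, via accessor objects;
// the OPS translator and back-ends generate the MPI, OpenMP, CUDA, etc. versions.
void jacobi_kernel(const ACC<double> &u, ACC<double> &u_new) {
  u_new(0, 0) = 0.25 * (u(1, 0) + u(-1, 0) + u(0, 1) + u(0, -1));
}

// Application-level call: declares what to compute over which iteration range;
// the library decides how to partition, parallelise and move the data.
// block, range, d_u, d_u_new, S2D_5PT and S2D_00 are assumed to have been
// declared earlier with ops_decl_block / ops_decl_dat / ops_decl_stencil.
ops_par_loop(jacobi_kernel, "jacobi_kernel", block, 2, range,
             ops_arg_dat(d_u,     1, S2D_5PT, "double", OPS_READ),
             ops_arg_dat(d_u_new, 1, S2D_00,  "double", OPS_WRITE));
```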
Outreach activities
Podcast “Turbulence at the exascale”