Exascale Data Testbed

Status: Available / Under development
Access arrangements: Contact Dr OG Parchment
Organisations: University of Cambridge, Dell, Intel, Altair
Project linkage: UKAEA MAST Experiment (visualisation of plasma simulations); Excalidata: Addressing I/O and Workflows at Exascale; DiRAC: Optimising I/O for AREPO (cosmological hydrodynamics); Excalistore; IRIS; UKSRC (SKA)

This testbed drew on the HPC systems development, deployment and operational skills housed within the Cambridge Research Computing Service to build a next-generation, high-performance PCIe Gen 4 solid-state I/O testbed. It runs a range of file systems, including Lustre, Intel DAOS and BeeGFS, together with the HDF5 I/O library, on state-of-the-art solid-state storage hardware.

The system is based on Intel PCIe Gen 4 NVMe drives and Optane Data Centre Persistent Memory. The project deployed the UK's fastest HPC storage testbed, delivering over 500 GB/s of bandwidth and over 20 million IOPS of raw I/O performance. This capability is exposed to applications through a range of HPC file systems, such as Lustre, Intel DAOS and BeeGFS, as well as lower-level direct I/O protocols.
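
Bandwidth and IOPS figures like those above are normally gathered with parallel benchmark tools such as IOR or fio rather than hand-rolled code. As a minimal, single-node sketch of what such a measurement does (the file path, block size and block count below are illustrative assumptions, not the testbed's actual benchmark configuration):

```python
import os
import time

def measure_write_bandwidth(path, block_size=4 * 1024 * 1024, blocks=64):
    """Sequentially write `blocks` blocks of `block_size` bytes,
    fsync, and return the observed bandwidth in MB/s.

    A toy stand-in for real storage benchmarks (IOR, fio), which
    additionally coordinate many clients and bypass caches.
    """
    buf = os.urandom(block_size)
    start = time.perf_counter()
    with open(path, "wb", buffering=0) as f:
        for _ in range(blocks):
            f.write(buf)
        os.fsync(f.fileno())  # ensure data reaches the device, not just page cache
    elapsed = time.perf_counter() - start
    return (block_size * blocks) / elapsed / 1e6

if __name__ == "__main__":
    mbps = measure_write_bandwidth("/tmp/io_test.bin")
    print(f"sequential write: {mbps:.1f} MB/s")
    os.remove("/tmp/io_test.bin")
```

On a parallel file system the same pattern would be run concurrently from many client nodes against the shared mount point, with aggregate bandwidth summed across clients.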

In addition to the I/O hardware and file system technologies, the testbed is configured with comprehensive system-level telemetry monitoring, provided by the UKRI-funded Scientific OpenStack middleware layer combined with a range of more specialised application I/O profiling tools. The UK Scientific OpenStack is an HPC middleware layer developed at Cambridge and funded by over four years of investment from STFC, EPSRC and MRC.
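
System-level I/O telemetry of this kind is ultimately built from low-level counters; on Linux hosts one such raw source is the per-device counters in `/proc/diskstats`. A minimal sketch of reading those counters (the field layout follows the kernel's documented diskstats format; the sampling loop and field selection are illustrative, not the Scientific OpenStack implementation):

```python
def parse_diskstats_line(line):
    """Parse one line of Linux /proc/diskstats into the fields most
    relevant to I/O telemetry (standard post-2.6 kernel layout):
    major minor name reads_completed reads_merged sectors_read ...
    """
    fields = line.split()
    return {
        "device": fields[2],
        "reads_completed": int(fields[3]),
        "sectors_read": int(fields[5]),
        "writes_completed": int(fields[7]),
        "sectors_written": int(fields[9]),
    }

def read_diskstats(path="/proc/diskstats"):
    """Return a mapping of device name -> counter dict."""
    with open(path) as f:
        records = (parse_diskstats_line(line) for line in f if line.strip())
        return {rec["device"]: rec for rec in records}
```

A monitoring agent would sample these counters periodically and difference successive samples to obtain rates; sectors are 512 bytes, so `sectors_written * 512` gives bytes written since boot.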

This project is supported directly by Intel through hardware, staff effort and close co-design work with the Intel engineers developing the DAOS file system.
