The emerging era of exascale computing will provide both opportunities and challenges. The raw compute power of such high-performance computing (HPC) hardware has the potential to revolutionize many areas of science and industry. However, novel computing algorithms and software must be developed to ensure the potential of HPC is realized.

Computational imaging, where the goal is to recover images of interest from raw data acquired by some observational instrument, is one of the most widely encountered classes of problems in science and industry, with myriad applications across astronomy, medicine, planetary and climate science, computer graphics and virtual reality, geophysics, molecular biology, and beyond.
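To make the inverse-problem setting concrete, here is a minimal sketch (not LEXCI code) of recovering an image from noisy linear measurements via a classical Tikhonov-regularised least-squares solve; the operator `Phi`, the regularisation weight `lam`, and the dimensions are all illustrative assumptions:

```python
import numpy as np

# Toy linear inverse problem: recover an "image" x from noisy
# measurements y = Phi @ x + n, where Phi models the instrument.
rng = np.random.default_rng(0)
n_pix, n_meas = 64, 128
x_true = rng.standard_normal(n_pix)           # ground-truth image (flattened)
Phi = rng.standard_normal((n_meas, n_pix))    # measurement operator
y = Phi @ x_true + 0.01 * rng.standard_normal(n_meas)

# Tikhonov-regularised reconstruction:
#   x_hat = argmin_x ||y - Phi x||^2 + lam ||x||^2
lam = 1e-3
x_hat = np.linalg.solve(Phi.T @ Phi + lam * np.eye(n_pix), Phi.T @ y)
rel_err = np.linalg.norm(x_hat - x_true) / np.linalg.norm(x_true)
```

Real instruments yield far larger, often non-linear forward models, which is where the algorithmic and HPC challenges discussed below arise.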

The rise of exascale computing, coupled with recent advances in instrumentation, is leading to novel and often huge datasets that, in principle, could be imaged for the first time at high fidelity and in an interpretable manner. However, unlocking interpretable, high-fidelity imaging of big data requires novel methodological approaches, algorithms, and software implementations. We are developing precisely these components as part of the Learned EXascale Computational Imaging (LEXCI) project.

Firstly, whereas traditional computational imaging algorithms are based on relatively simple hand-crafted prior models of images, in LEXCI we learn appropriate image priors and physical instrument simulation models from data, leading to much more accurate representations. Our hybrid techniques will be guided by model-based approaches to ensure effectiveness, efficiency, generalizability and uncertainty quantification. 
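The idea of combining a model-based algorithm with a learned prior can be illustrated with a plug-and-play style iteration, in which the prior (denoising) step of a forward-backward algorithm is a swappable module. The sketch below is illustrative only and does not show LEXCI's actual methods: soft-thresholding, the proximal operator of a hand-crafted sparsity prior, stands in where a learned denoiser would be plugged in.

```python
import numpy as np

rng = np.random.default_rng(1)
n_pix, n_meas = 64, 80
x_true = np.zeros(n_pix)
x_true[::8] = 1.0                                  # sparse ground-truth image
Phi = rng.standard_normal((n_meas, n_pix)) / np.sqrt(n_meas)
y = Phi @ x_true + 0.01 * rng.standard_normal(n_meas)

def denoiser(z, tau):
    """Prior step as a swappable module. Here: soft-thresholding,
    the proximal operator of a hand-crafted l1 prior; a learned
    denoising network could be substituted instead."""
    return np.sign(z) * np.maximum(np.abs(z) - tau, 0.0)

# Forward-backward splitting: a gradient step on the data fidelity,
# followed by the (possibly learned) prior step.
step = 1.0 / np.linalg.norm(Phi, 2) ** 2
x = np.zeros(n_pix)
for _ in range(300):
    x = denoiser(x - step * Phi.T @ (Phi @ x - y), tau=step * 0.05)
rel_err = np.linalg.norm(x - x_true) / np.linalg.norm(x_true)
```

Keeping the model-based outer structure is what preserves convergence guarantees and enables uncertainty quantification while the prior itself is learned from data.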

Secondly, we are developing novel algorithmic structures that support highly parallelized and distributed implementations, for deployment across a wide range of modern HPC architectures. 

Thirdly, we are implementing these algorithms in professional research software. The structure of our algorithms allows not only computations but also memory and storage requirements to be distributed across multi-node architectures.

We are developing a tiered parallelization approach that targets both large-scale distributed-memory parallelization, for distributing work across many processors and co-processors, and lightweight data parallelism through vectorization or lightweight threads, for parallelizing work within each processor and co-processor. This tiered approach ensures the software can be used across the full range of modern HPC systems. Combined, these developments will provide a computing paradigm to help usher in the era of exascale computational imaging.
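The two tiers can be sketched schematically (this is not the LEXCI implementation, which targets MPI-style distribution across nodes): data is partitioned into blocks, blocks are mapped onto workers standing in for distributed-memory ranks, and a vectorised NumPy kernel inside each block stands in for on-node data parallelism.

```python
import numpy as np
from concurrent.futures import ThreadPoolExecutor

def block_kernel(block):
    """Fine-grained tier: vectorised work on one data block
    (stands in for SIMD / lightweight-thread kernels on a node)."""
    return float(np.sum(block * block))

def tiered_sum_of_squares(data, n_workers=4):
    """Coarse-grained tier: partition the data and map blocks onto
    workers (stands in for distributing work across MPI ranks on a
    multi-node machine), then reduce the partial results."""
    blocks = np.array_split(data, n_workers)
    with ThreadPoolExecutor(max_workers=n_workers) as pool:
        return sum(pool.map(block_kernel, blocks))
```

Because each worker only ever touches its own block, the same decomposition also spreads memory and storage requirements, not just computation.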

The resulting computational imaging framework will have widespread application, across radio interferometric imaging, magnetic resonance imaging, seismic imaging, computer graphics, and beyond. The resulting software will be deployed on the latest HPC resources to evaluate its performance and to feed back to the community the lessons learned, and techniques developed, to support the general advance of exascale computing.

The software suite, while still under development, is already available at [SOPT](https://github.com/astro-informatics/sopt) and [PURIFY](https://github.com/astro-informatics/purify). 
