

Poster in Workshop: AI for Science

The Bearable Lightness of Big Data: Towards Massive Public Datasets in Scientific Machine Learning

Wai Tong Chung · Ki Jung · Jacqueline Chen · Matthias Ihme


Abstract:

Large datasets generally enable deep learning models to achieve high accuracy and generalizability. However, massive high-fidelity simulation datasets (from molecular chemistry, astrophysics, computational fluid dynamics (CFD), etc.) can be challenging to curate due to their dimensionality and storage requirements. Lossy compression algorithms can mitigate the storage constraint, provided that overall data fidelity is preserved. To illustrate this point, we demonstrate that deep learning models trained and tested on data from a petascale CFD simulation are robust to the errors introduced by lossy compression in a semantic segmentation problem. Our results demonstrate that lossy compression algorithms offer a realistic pathway for sharing high-fidelity scientific data through open-source repositories and for building community datasets. In this paper, we outline, construct, and evaluate the requirements for establishing a big data framework for scientific machine learning.
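The abstract does not name a specific compression algorithm, so the sketch below uses uniform bitwise quantization purely as a minimal stand-in for error-bounded lossy compressors (e.g., zfp or SZ); it is illustrative only, not the paper's method. It compresses a synthetic 3D scalar field (standing in for one CFD snapshot), then reports the compression ratio and the maximum pointwise error relative to the field's dynamic range. Evaluating robustness in the spirit of the abstract would then amount to training and testing a segmentation model on the reconstructed field versus the original.

    import numpy as np

    def lossy_compress(field, n_bits=8):
        # Uniform quantization to n_bits integers: a simple, hypothetical
        # stand-in for error-bounded lossy compressors such as zfp/SZ.
        lo, hi = field.min(), field.max()
        scale = (hi - lo) / (2**n_bits - 1)
        dtype = np.uint8 if n_bits <= 8 else np.uint16
        codes = np.round((field - lo) / scale).astype(dtype)
        return codes, lo, scale

    def decompress(codes, lo, scale):
        # Reconstruct a lossy approximation of the original field.
        return codes.astype(np.float64) * scale + lo

    # Synthetic smooth 3D field standing in for one CFD snapshot.
    rng = np.random.default_rng(0)
    field = rng.standard_normal((64, 64, 64)).cumsum(axis=0)

    codes, lo, scale = lossy_compress(field, n_bits=8)
    recon = decompress(codes, lo, scale)

    # Error bounded by half a quantization step over the field's range.
    rel_err = np.abs(recon - field).max() / (field.max() - field.min())
    ratio = field.nbytes / codes.nbytes
    print(f"compression ratio ~{ratio:.0f}x, max relative error {rel_err:.2e}")

With 8-bit quantization of float64 data this yields an 8x size reduction with a relative error on the order of 2e-3; real error-bounded compressors exploit spatial smoothness and typically achieve far higher ratios at a user-specified error tolerance.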
