

Invited Talk
in
Workshop: 2nd Workshop on Advancing Neural Network Training: Computational Efficiency, Scalability, and Resource Optimization (WANT@ICML 2024)

Online Training from Numerical Simulations

Bruno Raffin

Sat 27 Jul 12:10 a.m. PDT — 12:40 a.m. PDT

Abstract:

Traditionally, scientists and engineers rely on computationally intensive numerical solvers to address Partial Differential Equations (PDEs). However, deep learning is emerging as a promising alternative for obtaining rapid PDE solutions. Typically, deep surrogate models are trained on synthetic data generated by these solvers, which is stored on disk and subsequently retrieved for training. In this talk, we explore the challenges and benefits of enabling online training of deep models concurrent with data generation from running simulations. This approach offers several advantages: it circumvents I/O operations, often the bottleneck in supercomputing; allows training on datasets larger than available storage capacities, potentially improving generalization; and introduces the possibility of steering data generation for enhanced efficiency. However, online training is subject to specific biases that must be mitigated through adapted buffering techniques. Our presentation will draw upon research findings from the development of the Melissa framework, which is designed for large-scale online training. By addressing these topics, we aim to provide insights into the future of PDE solving using deep learning techniques and the potential for more efficient, scalable computational methods in scientific and engineering applications.
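The buffering problem mentioned above arises because simulation outputs arrive as a time-ordered stream: a naive FIFO buffer would train the model mostly on the most recent data, biasing it toward late-simulation regimes. One standard way to mitigate this is reservoir sampling, which keeps a uniform random sample of the whole stream in a fixed-size buffer. The sketch below is illustrative only, assuming a generic training loop; `ReservoirBuffer` and `sample_batch` are hypothetical names, not the Melissa API.

```python
import random

class ReservoirBuffer:
    """Fixed-capacity buffer holding a uniform random sample of all items
    seen so far (classic reservoir sampling). Each incoming item replaces
    a stored one with probability capacity / seen, so early and late
    simulation outputs are equally likely to remain in the buffer."""

    def __init__(self, capacity, seed=0):
        self.capacity = capacity
        self.seen = 0          # total items observed from the stream
        self.items = []
        self.rng = random.Random(seed)

    def add(self, item):
        self.seen += 1
        if len(self.items) < self.capacity:
            self.items.append(item)
        else:
            # Replace a random slot with probability capacity / seen
            j = self.rng.randrange(self.seen)
            if j < self.capacity:
                self.items[j] = item

    def sample_batch(self, batch_size):
        # Draw a training mini-batch without replacement from the buffer
        return self.rng.sample(self.items, min(batch_size, len(self.items)))


# Stand-in for an online training loop: integers play the role of
# simulation outputs streaming in while training proceeds concurrently.
buf = ReservoirBuffer(capacity=100)
for step, sample in enumerate(range(1000)):
    buf.add(sample)
    if step >= 32:
        batch = buf.sample_batch(32)
        # train_step(model, batch)  # hypothetical training call
```

Because the buffer holds an (approximately) unbiased sample of the entire stream, mini-batches drawn from it look much closer to i.i.d. draws from the full dataset than batches taken directly from the incoming stream.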
