

Oral in Workshop: Machine Learning for Astrophysics

TNT: Vision Transformer for Turbulence Simulations

Yuchen Dang · Zheyuan Hu · Miles Cranmer · Michael Eickenberg · Shirley Ho


Abstract:

Turbulent dynamics is difficult to predict due to its multi-scale nature and sensitivity to small perturbations. Classical turbulence solvers generally operate on fine grids and are computationally inefficient. In this paper, we propose the Turbulence Neural Transformer (TNT), a learned machine learning (ML) simulator based on the Transformer architecture that predicts turbulent dynamics on coarse grids. TNT extends the positional embeddings of the vanilla Transformer to a spatiotemporal setting to learn representations in the 3D time-series domain, and applies Temporal Mutual Self-Attention (TMSA), which captures dependencies between adjacent frames, to extract deep dynamic features. TNT generates comparatively long-range predictions stably and accurately, and we show that it outperforms the state-of-the-art U-net-based simulator on all metrics evaluated. We also ablate individual model components and evaluate robustness to different initial conditions. Although more experiments are needed, we conclude that TNT has great potential to outperform existing solvers and generalize to most simulation datasets.
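The abstract describes extending standard positional embeddings to a spatiotemporal setting over a 3D grid of time-series tokens. The abstract does not specify the exact form, so the sketch below is a hypothetical illustration assuming sinusoidal embeddings (as in the original Transformer) computed independently per axis (t, x, y, z) and concatenated into a single vector per grid token; the function names and the equal per-axis dimension split are assumptions, not the authors' implementation.

```python
import numpy as np

def sincos_1d(positions, dim):
    """Standard sinusoidal positional embedding for one axis (dim must be even)."""
    # Frequencies follow the usual 1/10000^(2i/dim) schedule.
    i = np.arange(dim // 2)
    freqs = 1.0 / (10000.0 ** (2 * i / dim))
    angles = positions[:, None] * freqs[None, :]          # (len(positions), dim/2)
    return np.concatenate([np.sin(angles), np.cos(angles)], axis=-1)

def spatiotemporal_pos_embed(T, X, Y, Z, d_model):
    """Hypothetical 4-axis embedding: one quarter of d_model per axis,
    broadcast over the full (T, X, Y, Z) token grid. Assumes d_model % 4 == 0."""
    d = d_model // 4
    et = sincos_1d(np.arange(T), d)                       # (T, d)
    ex = sincos_1d(np.arange(X), d)                       # (X, d)
    ey = sincos_1d(np.arange(Y), d)                       # (Y, d)
    ez = sincos_1d(np.arange(Z), d)                       # (Z, d)
    grid = np.zeros((T, X, Y, Z, d_model))
    # Each axis embedding is broadcast along the other three axes.
    grid[..., 0 * d:1 * d] = et[:, None, None, None, :]
    grid[..., 1 * d:2 * d] = ex[None, :, None, None, :]
    grid[..., 2 * d:3 * d] = ey[None, None, :, None, :]
    grid[..., 3 * d:4 * d] = ez[None, None, None, :, :]
    # Flatten to a (num_tokens, d_model) table added to the token features.
    return grid.reshape(T * X * Y * Z, d_model)

pe = spatiotemporal_pos_embed(4, 8, 8, 8, 64)             # 2048 tokens, 64-dim each
```

Under this scheme, two tokens at the same spatial location but different time steps differ only in the first quarter of their embedding, which lets attention layers distinguish temporal from spatial displacement.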
