

Spotlight Poster

Improved Operator Learning by Orthogonal Attention

Zipeng Xiao · Zhongkai Hao · Bokai Lin · Zhijie Deng · Hang Su

Hall C 4-9 #200
Tue 23 Jul 4:30 a.m. PDT — 6 a.m. PDT

Abstract:

This work presents orthogonal attention for constructing neural operators that serve as surrogates for modeling the solutions of a family of Partial Differential Equations (PDEs). The motivation is that the kernel integral operator, which usually lies at the core of neural operators, can be reformulated with orthonormal eigenfunctions. Inspired by the success of the neural approximation of eigenfunctions (Deng et al., 2022), we opt to directly parameterize the involved eigenfunctions with flexible neural networks (NNs), based on which the input function is then transformed by the rule of kernel integral. Surprisingly, the resulting NN module bears a striking resemblance to regular attention mechanisms, albeit without softmax. Instead, it incorporates an orthogonalization operation that provides regularization during model training and helps mitigate overfitting, particularly in scenarios with limited data availability. In practice, the orthogonalization operation can be implemented with minimal additional overhead. Experiments on six standard neural operator benchmark datasets comprising both regular and irregular geometries show that our method can outperform competing baselines by decent margins.
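The abstract suggests a simple structure: eigenfunction features of the input function are produced by a neural network, orthogonalized, and then used in an attention-like kernel integral without softmax. Below is a minimal, hedged sketch of such a layer in PyTorch; the module name `OrthogonalAttention`, the choice of QR decomposition for orthogonalization, and all shapes and hyperparameters are illustrative assumptions rather than the authors' exact implementation.

```python
# Hypothetical sketch of an attention-like kernel integral with
# orthogonalized eigenfunction features (not the paper's exact code).
import torch
import torch.nn as nn


class OrthogonalAttention(nn.Module):
    """Attention-style kernel integral without softmax.

    Eigenfunction features psi(x) are produced by a linear map and
    orthonormalized (here via QR over the mesh points) so that the
    kernel integral (K u)(x) ~ sum_k psi_k(x) <psi_k, v> is evaluated
    with approximately orthonormal basis functions.
    """

    def __init__(self, dim: int, num_eigen: int):
        super().__init__()
        self.to_psi = nn.Linear(dim, num_eigen)  # eigenfunction features
        self.to_value = nn.Linear(dim, dim)      # values v(x)
        self.out = nn.Linear(dim, dim)

    def forward(self, u: torch.Tensor) -> torch.Tensor:
        # u: (batch, n_points, dim) -- input function sampled on a mesh
        psi = self.to_psi(u)                     # (batch, n_points, num_eigen)
        # Orthonormalize the eigenfunction features along the point axis.
        q, _ = torch.linalg.qr(psi)              # (batch, n_points, num_eigen)
        v = self.to_value(u)                     # (batch, n_points, dim)
        # Kernel integral: project values onto the orthonormal basis,
        # then expand back onto the mesh (no softmax involved).
        coeff = torch.einsum('bnk,bnd->bkd', q, v) / u.shape[1]
        out = torch.einsum('bnk,bkd->bnd', q, coeff)
        return self.out(out)


if __name__ == "__main__":
    layer = OrthogonalAttention(dim=64, num_eigen=16)
    u = torch.randn(2, 256, 64)                  # 2 functions on 256 mesh points
    print(layer(u).shape)                        # torch.Size([2, 256, 64])
```

The QR step here stands in for whatever orthogonalization the authors use; its role is to keep the learned basis functions near-orthonormal, which acts as the regularizer described in the abstract while adding only a modest per-layer cost.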
