Hyperbolic Neural Operator
Jieyuan Pei ⋅ Zhuoxuan Li ⋅ Wei Li ⋅ Haobo Zhang ⋅ Jiawei Jiang ⋅ Jianwei Zheng
Abstract
Neural operators have emerged as powerful surrogates for solving PDEs, significantly accelerating scientific computation. While transformer-based architectures offer unmatched flexibility on irregular domains, they suffer from a fundamental efficiency gap: standard attention mechanisms assign uniform interaction budgets to all token pairs, neglecting the physical reality that far-field interactions are often compressible. To address this mismatch, we draw inspiration from classical fast solvers that exploit hierarchical near-far decompositions. We further observe that embedding such tree-structured hierarchies in Euclidean space incurs inherent distortion, whereas hyperbolic space naturally accommodates exponential branching. Consequently, we propose the \textbf{Hyperbolic Neural Operator (HNO)}, which leverages intrinsic hyperbolic geometry to instantiate a continuous Gibbs kernel based on stabilized geodesic distances on the Lorentz hyperboloid. This design imposes a geometric inductive bias that yields robust multi-scale routing akin to the Fast Multipole Method (FMM), yet within a unified, learnable attention mechanism. Empirically, HNO achieves state-of-the-art accuracy on six PDE benchmarks and two large-scale unstructured CFD tasks, reducing the mean relative $\ell_2$ error by up to 40\% compared to leading baselines. Code is attached and will be made available online.
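As a minimal illustrative sketch (not the authors' implementation; the lifting map, the clamping constant, and the temperature parameter are all assumptions), the stabilized geodesic distance on the Lorentz hyperboloid and the resulting Gibbs-kernel attention can be written as:

```python
import numpy as np

def lift_to_hyperboloid(v):
    """Map Euclidean features v of shape (n, d) onto the Lorentz
    hyperboloid {x : -x0^2 + ||x_rest||^2 = -1, x0 > 0} in R^{d+1}."""
    x0 = np.sqrt(1.0 + np.sum(v * v, axis=-1, keepdims=True))
    return np.concatenate([x0, v], axis=-1)

def lorentz_inner(x, y):
    """Pairwise Minkowski inner product <x, y>_L = -x0*y0 + <x_rest, y_rest>,
    returning an (n, m) matrix."""
    return -np.outer(x[:, 0], y[:, 0]) + x[:, 1:] @ y[:, 1:].T

def geodesic_dist(x, y, eps=1e-7):
    """Geodesic distance d(x, y) = arccosh(-<x, y>_L); the arccosh argument
    is clamped to >= 1 + eps for numerical stability."""
    z = np.clip(-lorentz_inner(x, y), 1.0 + eps, None)
    return np.arccosh(z)

def gibbs_attention(q, k, v, temp=1.0):
    """Gibbs kernel exp(-d^2 / temp) over geodesic distances, row-normalized
    into attention weights (nearby points dominate, far field is damped)."""
    d = geodesic_dist(q, k)
    w = np.exp(-d ** 2 / temp)
    w /= w.sum(axis=-1, keepdims=True)
    return w @ v
```

Because the kernel decays with geodesic distance, tokens embedded deep in different branches of the hyperbolic hierarchy receive exponentially small weight, which is the compressible far-field behavior the abstract alludes to.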