Fractional is Better: Learnable Derivative Orders in Neural Operator Learning
Fares Mehouachi ⋅ Saif Jabari
Abstract
Neural operators learn mappings between function spaces, enabling fast PDE surrogates. Despite architectural diversity, these methods often share a common input representation: raw coordinate-value pairs. This ignores the differential structure that defines the underlying physics. We study whether derivative features can improve neural operator learning. Through Picard iteration on mild solutions, we show that derivatives of the input naturally enter PDE solution operators, and we prove that providing them substantially improves approximation rates. The optimal derivative order, however, is not what one might expect: for any finite sample size, the statistically optimal order is strictly less than the PDE order. This gap arises from a bias-variance tradeoff: higher-order derivatives carry more information but amplify noise. We characterize this tradeoff in closed form and show that learning the derivative order from data achieves automatic spectral regularization. We introduce $\partial$-NO (derivative-augmented neural operators), a simple augmentation that provides learnable fractional derivative features to any neural operator backbone. Across benchmarks, this augmentation consistently improves accuracy. Learned orders reflect dominant PDE structure while adapting to finite-sample constraints, confirming the theory.
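The abstract describes $\partial$-NO only at a high level. As a rough, non-authoritative illustration of one way a learnable fractional derivative feature could be realized (not the authors' implementation), consider a spectral filter with a Riesz-style symbol $|k|^\alpha$ and trainable order $\alpha$; the module name, the symbol choice, the initialization, and the periodic 1D grid below are all our assumptions.

```python
import torch


class FractionalDerivativeFeature(torch.nn.Module):
    """Sketch: a spectral fractional derivative |k|^alpha with learnable
    order alpha, applied to function samples on a uniform periodic 1D grid."""

    def __init__(self, alpha_init: float = 1.0):
        super().__init__()
        # Learnable derivative order, trained jointly with the backbone.
        self.alpha = torch.nn.Parameter(torch.tensor(alpha_init))

    def forward(self, u: torch.Tensor, length: float = 1.0) -> torch.Tensor:
        # u: (batch, n) samples of the input function; length: domain size.
        n = u.size(-1)
        u_hat = torch.fft.rfft(u, dim=-1)
        # Angular wavenumbers 2*pi*k/L matching the real FFT layout.
        k = 2.0 * torch.pi * torch.fft.rfftfreq(n, d=length / n,
                                                device=u.device)
        # Riesz-style symbol |k|^alpha; the clamp keeps the gradient w.r.t.
        # alpha finite at k = 0, and the mask zeroes the constant mode.
        symbol = k.clamp(min=1e-12).pow(self.alpha) * (k > 0)
        return torch.fft.irfft(symbol * u_hat, n=n, dim=-1)
```

Under the abstract's framing, such a feature channel would be concatenated with the raw coordinate-value inputs before the backbone, and $\alpha$ would adapt toward, but remain below, the PDE order as the sample size permits.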