Poster in Workshop: The Synergy of Scientific and Machine Learning Modelling (SynS & ML)
How important are specialized transforms in Neural Operators?
Ritam Majumdar · Shirish Karande · Lovekesh Vig
Keywords: [ Generalization ] [ Deep Learning ] [ Transforms ] [ PDE Solvers ]
Computational forward simulations of physical systems constrained by systems of PDEs and by initial and boundary conditions are proving to provide tremendous value across a variety of industrial domains. Transform-based Neural Operators such as Fourier Neural Operators and Wavelet Neural Operators have received considerable attention for their potential to provide fast, scale-free simulations. In traditional signal analysis, the optimal choice of transform depends critically on the nature of the data; ideally, therefore, the transforms themselves should be learnable. Since most of the transforms considered are linear, in this work we investigate what cost in performance, if any, is incurred when all the transform layers are replaced by simple linear layers. We make the surprising observation that linear layers suffice to provide performance comparable to the best-known transform-based Operators, and seem to do so at a possible compute-time advantage as well. This raises questions about the importance of transform-based Operators.
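To make the comparison concrete, below is a minimal PyTorch sketch of a standard Fourier-style spectral layer alongside the kind of learned-linear replacement the abstract describes. The class names, tensor shapes, and hyperparameters (`channels`, `grid_size`, `modes`) are our own illustrative assumptions, not the authors' actual architecture.

```python
import torch
import torch.nn as nn

class SpectralConv1d(nn.Module):
    """A minimal 1D Fourier layer in the FNO style:
    fixed FFT -> keep `modes` low frequencies -> learned complex mixing -> fixed inverse FFT."""
    def __init__(self, channels, modes):
        super().__init__()
        self.modes = modes
        scale = 1.0 / (channels * channels)
        self.weights = nn.Parameter(
            scale * torch.randn(channels, channels, modes, dtype=torch.cfloat)
        )

    def forward(self, x):                          # x: (batch, channels, grid)
        x_ft = torch.fft.rfft(x)                   # non-learnable transform
        out_ft = torch.zeros_like(x_ft)
        out_ft[:, :, :self.modes] = torch.einsum(
            "bim,iom->bom", x_ft[:, :, :self.modes], self.weights
        )
        return torch.fft.irfft(out_ft, n=x.size(-1))

class LinearTransformConv1d(nn.Module):
    """One plausible reading of "replace the transform layers with simple linear layers":
    the fixed FFT/IFFT pair becomes a pair of learnable linear maps along the grid dimension."""
    def __init__(self, channels, grid_size, modes):
        super().__init__()
        self.fwd = nn.Linear(grid_size, modes, bias=False)  # learned "analysis" transform
        self.mix = nn.Parameter(torch.randn(channels, channels, modes) / channels)
        self.inv = nn.Linear(modes, grid_size, bias=False)  # learned "synthesis" transform

    def forward(self, x):                          # x: (batch, channels, grid)
        z = self.fwd(x)                            # project onto a learned basis
        z = torch.einsum("bim,iom->bom", z, self.mix)
        return self.inv(z)

# Quick shape check on random data (hypothetical sizes):
x = torch.randn(8, 16, 64)                         # (batch, channels, grid)
print(SpectralConv1d(16, modes=12)(x).shape)       # torch.Size([8, 16, 64])
print(LinearTransformConv1d(16, grid_size=64, modes=12)(x).shape)
```

The learned variant collapses the fixed FFT/IFFT pair into trainable projection matrices; the exact factorization and layer layout used in the paper may differ from this sketch.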