Symbolic equations are at the core of scientific discovery. The task of discovering the underlying equation from a set of input-output pairs is called symbolic regression. Traditionally, symbolic regression methods use hand-designed strategies that do not improve with experience. In this paper, we introduce the first symbolic regression method that leverages large-scale pre-training. We procedurally generate an unbounded set of equations and simultaneously pre-train a Transformer to predict the symbolic equation from a corresponding set of input-output pairs. At test time, we query the model on a new set of points and use its output to guide the search for the equation. We show empirically that this approach can re-discover a set of well-known physical equations, and that it improves over time with more data and compute.
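To make the pipeline concrete, below is a minimal Python sketch of the procedural data-generation step the abstract describes: sample a random expression tree over a single variable x, evaluate it on random inputs, and emit the (points, prefix-token) pairs a Transformer would be pre-trained on. This is an illustrative assumption, not the authors' released code; the operator vocabulary, sampling ranges, and helper names (sample_tree, make_example, etc.) are all hypothetical.

```python
# A minimal sketch (NOT the authors' released code) of procedural
# training-data generation for pre-trained symbolic regression:
# sample a random expression tree, evaluate it on random inputs,
# and emit (input-output points, prefix-token target) pairs.
import math
import random

UNARY = {"sin": math.sin, "cos": math.cos, "exp": math.exp, "log": math.log}
BINARY = {"add": lambda a, b: a + b, "mul": lambda a, b: a * b}

def sample_tree(depth):
    """Recursively sample a random expression tree over the variable x."""
    if depth == 0 or random.random() < 0.3:
        # Leaf: the variable x or a small integer constant (hypothetical ranges).
        return ("x",) if random.random() < 0.7 else (str(random.randint(1, 5)),)
    if random.random() < 0.5:
        op = random.choice(list(UNARY))
        return (op, sample_tree(depth - 1))
    op = random.choice(list(BINARY))
    return (op, sample_tree(depth - 1), sample_tree(depth - 1))

def evaluate(tree, x):
    """Evaluate the expression tree at a given input x."""
    head = tree[0]
    if head == "x":
        return x
    if head in UNARY:
        return UNARY[head](evaluate(tree[1], x))
    if head in BINARY:
        return BINARY[head](evaluate(tree[1], x), evaluate(tree[2], x))
    return float(head)  # numeric constant leaf

def prefix_tokens(tree):
    """Flatten the tree into the prefix token sequence a decoder would predict."""
    return [tree[0]] + [t for child in tree[1:] for t in prefix_tokens(child)]

def make_example(n_points=20, max_depth=4):
    """One training example: a set of (x, y) points and the target token sequence."""
    while True:
        tree = sample_tree(max_depth)
        try:
            xs = [random.uniform(-2.0, 2.0) for _ in range(n_points)]
            ys = [evaluate(tree, x) for x in xs]
        except (ValueError, OverflowError):  # e.g. log of a negative number
            continue
        if all(math.isfinite(y) for y in ys):
            return list(zip(xs, ys)), prefix_tokens(tree)

if __name__ == "__main__":
    points, tokens = make_example()
    print("target tokens:", tokens)
    print("first point:", points[0])
```

At test time, the pre-trained model's decoded token sequences would serve as candidate equations; the search the abstract mentions can then be as simple as ranking several decoded candidates by their fit to the observed points, optionally refining any numeric constants against the data.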
Author Information
Luca Biggio (ETH Zürich)
Tommaso Bendinelli (CSEM)
Alexander Neitz (Max Planck Institute for Intelligent Systems)
Aurelien Lucchi (ETH Zürich)
Giambattista Parascandolo (Max Planck Institute for Intelligent Systems and ETH Zürich)
Related Events (a corresponding poster, oral, or spotlight)
- 2021 Spotlight: Neural Symbolic Regression that scales
  Wed. Jul 21st, 02:45 -- 02:50 AM
More from the Same Authors
- 2022: Enhancing Unit-tests for Invariance Discovery
  Piersilvio De Bartolomeis · Antonio Orvieto · Giambattista Parascandolo
- 2023: On the Advantage of Lion Compared to signSGD with Momentum
  Alessandro Noiato · Luca Biggio · Antonio Orvieto
- 2023 Poster: Controllable Neural Symbolic Regression
  Tommaso Bendinelli · Luca Biggio · Pierre-Alexandre Kamienny
- 2023 Poster: An SDE for Modeling SAM: Theory and Insights
  Enea Monzio Compagnoni · Luca Biggio · Antonio Orvieto · Frank Proske · Hans Kersting · Aurelien Lucchi
- 2023 Poster: Predicting Ordinary Differential Equations with Transformers
  Sören Becker · Michal Klein · Alexander Neitz · Giambattista Parascandolo · Niki Kilbertus
- 2020 Poster: Randomized Block-Diagonal Preconditioning for Parallel Learning
  Celestine Mendler-Dünner · Aurelien Lucchi
- 2020 Poster: An Accelerated DFO Algorithm for Finite-sum Convex Functions
  Yuwen Chen · Antonio Orvieto · Aurelien Lucchi
- 2018 Poster: Tempered Adversarial Networks
  Mehdi S. M. Sajjadi · Giambattista Parascandolo · Arash Mehrjou · Bernhard Schölkopf
- 2018 Poster: A Distributed Second-Order Algorithm You Can Trust
  Celestine Mendler-Dünner · Aurelien Lucchi · Matilde Gargiani · Yatao Bian · Thomas Hofmann · Martin Jaggi
- 2018 Oral: A Distributed Second-Order Algorithm You Can Trust
  Celestine Mendler-Dünner · Aurelien Lucchi · Matilde Gargiani · Yatao Bian · Thomas Hofmann · Martin Jaggi
- 2018 Oral: Tempered Adversarial Networks
  Mehdi S. M. Sajjadi · Giambattista Parascandolo · Arash Mehrjou · Bernhard Schölkopf
- 2018 Poster: Learning Independent Causal Mechanisms
  Giambattista Parascandolo · Niki Kilbertus · Mateo Rojas-Carulla · Bernhard Schölkopf
- 2018 Poster: Escaping Saddles with Stochastic Gradients
  Hadi Daneshmand · Jonas Kohler · Aurelien Lucchi · Thomas Hofmann
- 2018 Oral: Learning Independent Causal Mechanisms
  Giambattista Parascandolo · Niki Kilbertus · Mateo Rojas-Carulla · Bernhard Schölkopf
- 2018 Oral: Escaping Saddles with Stochastic Gradients
  Hadi Daneshmand · Jonas Kohler · Aurelien Lucchi · Thomas Hofmann
- 2017 Poster: Sub-sampled Cubic Regularization for Non-convex Optimization
  Jonas Kohler · Aurelien Lucchi
- 2017 Talk: Sub-sampled Cubic Regularization for Non-convex Optimization
  Jonas Kohler · Aurelien Lucchi