While Bayesian neural networks (BNNs) provide a sound and principled alternative to standard neural networks, an artificial sharpening of the posterior usually needs to be applied to reach comparable performance. This is in stark contrast to theory, which dictates that given an adequate prior and a well-specified model, the untempered Bayesian posterior should achieve optimal performance. Despite the community's extensive efforts, the observed gains in performance remain disputed, with several plausible causes proposed for their origin. While data augmentation has been empirically recognized as one of the main drivers of this effect, a theoretical account of its role is still largely missing. In this work we identify two interlaced factors concurrently influencing the strength of the cold posterior effect, namely the correlated nature of augmentations and the degree of invariance of the employed model to such transformations. By theoretically analyzing simplified settings, we prove that tempering implicitly reduces the misspecification arising from modeling augmentations as i.i.d. data. The temperature mimics the role of the effective sample size, reflecting the gain in information provided by the augmentations. We corroborate our theoretical findings with extensive empirical evaluations, scaling to realistic BNNs. By relying on the framework of group convolutions, we experiment with models of varying inherent degree of invariance, confirming the hypothesized relationship between invariance and the optimal temperature.
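The abstract's claim that the temperature mimics the effective sample size can be illustrated in a toy conjugate model (a sketch for intuition only, not the paper's setting or derivation): for Gaussian mean estimation, raising the likelihood to the power 1/T yields exactly the posterior one would obtain from N/T i.i.d. observations with the same sample mean. The function names and parameter values below are illustrative.

```python
import numpy as np

# Toy illustration (not the paper's experimental setup): conjugate Gaussian
# mean estimation with known noise variance sigma2 and prior mu ~ N(0, tau2).
tau2, sigma2 = 1.0, 4.0
N, T = 100, 4  # N observations, temperature T
rng = np.random.default_rng(0)
x = rng.normal(1.0, np.sqrt(sigma2), size=N)

def tempered_posterior(x, T):
    """Posterior for the tempered likelihood prod_i N(x_i | mu, sigma2)^(1/T)."""
    n = len(x)
    prec = 1.0 / tau2 + n / (T * sigma2)    # posterior precision
    mean = (x.sum() / (T * sigma2)) / prec  # posterior mean
    return mean, 1.0 / prec

def untempered_posterior(n_eff, xbar):
    """Standard posterior given n_eff i.i.d. observations with sample mean xbar."""
    prec = 1.0 / tau2 + n_eff / sigma2
    mean = (n_eff * xbar / sigma2) / prec
    return mean, 1.0 / prec

# Tempering with T is identical to shrinking the sample size to N / T:
m1, v1 = tempered_posterior(x, T)
m2, v2 = untempered_posterior(N / T, x.mean())
assert np.allclose([m1, v1], [m2, v2])
```

In this conjugate case the equivalence is exact; the paper's point is that when the N points are correlated augmentations rather than i.i.d. draws, the untempered posterior overcounts their information, and a temperature T > 1 corrects the effective sample size accordingly.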
Author Information
Gregor Bachmann (ETH Zurich)
Lorenzo Noci (ETH Zurich)
Thomas Hofmann (ETH Zurich)
Related Events (a corresponding poster, oral, or spotlight)
- 2022 Oral: How Tempering Fixes Data Augmentation in Bayesian Neural Networks
  Wed. Jul 20th 02:30 -- 02:50 PM, Room 301 - 303
More from the Same Authors
- 2023 Poster: The Hessian perspective into the Nature of Convolutional Neural Networks
  Sidak Pal Singh · Thomas Hofmann · Bernhard Schölkopf
- 2023 Poster: Random Teachers are Good Teachers
  Felix Sarnthein · Gregor Bachmann · Sotiris Anagnostidis · Thomas Hofmann
- 2021 Poster: Uniform Convergence, Adversarial Spheres and a Simple Remedy
  Gregor Bachmann · Seyed Moosavi · Thomas Hofmann
- 2021 Spotlight: Uniform Convergence, Adversarial Spheres and a Simple Remedy
  Gregor Bachmann · Seyed Moosavi · Thomas Hofmann
- 2020 Poster: Constant Curvature Graph Convolutional Networks
  Gregor Bachmann · Gary Becigneul · Octavian Ganea
- 2019 Poster: The Odds are Odd: A Statistical Test for Detecting Adversarial Examples
  Kevin Roth · Yannic Kilcher · Thomas Hofmann
- 2019 Oral: The Odds are Odd: A Statistical Test for Detecting Adversarial Examples
  Kevin Roth · Yannic Kilcher · Thomas Hofmann
- 2018 Poster: A Distributed Second-Order Algorithm You Can Trust
  Celestine Mendler-Dünner · Aurelien Lucchi · Matilde Gargiani · Yatao Bian · Thomas Hofmann · Martin Jaggi
- 2018 Oral: A Distributed Second-Order Algorithm You Can Trust
  Celestine Mendler-Dünner · Aurelien Lucchi · Matilde Gargiani · Yatao Bian · Thomas Hofmann · Martin Jaggi
- 2018 Poster: Escaping Saddles with Stochastic Gradients
  Hadi Daneshmand · Jonas Kohler · Aurelien Lucchi · Thomas Hofmann
- 2018 Poster: Hyperbolic Entailment Cones for Learning Hierarchical Embeddings
  Octavian-Eugen Ganea · Gary Becigneul · Thomas Hofmann
- 2018 Oral: Escaping Saddles with Stochastic Gradients
  Hadi Daneshmand · Jonas Kohler · Aurelien Lucchi · Thomas Hofmann
- 2018 Oral: Hyperbolic Entailment Cones for Learning Hierarchical Embeddings
  Octavian-Eugen Ganea · Gary Becigneul · Thomas Hofmann