Stochastic variational inference is an established way to carry out approximate Bayesian inference for deep models. While there have been effective proposals for good initializations for loss minimization in deep learning, far less attention has been devoted to the issue of initialization of stochastic variational inference. We address this by proposing a novel layer-wise initialization strategy based on Bayesian linear models. The proposed method is extensively validated on regression and classification tasks, including Bayesian DeepNets and ConvNets, showing faster and better convergence compared to alternatives inspired by the literature on initializations for loss minimization.
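The abstract does not spell out the method, but the core idea it names — initializing each layer's variational posterior from the closed-form posterior of a Bayesian linear model — can be sketched roughly as follows. This is an illustrative reconstruction, not the paper's exact recipe: the function names, the per-unit target construction (a random projection of the labels), and the `alpha`/`beta` hyperparameters are all assumptions made for this sketch.

```python
import numpy as np

def bayesian_linear_posterior(X, y, alpha=1.0, beta=1.0):
    """Closed-form posterior over the weights of a Bayesian linear model
    y ~ N(X w, beta^-1 I) with prior w ~ N(0, alpha^-1 I)."""
    d = X.shape[1]
    S_inv = alpha * np.eye(d) + beta * X.T @ X   # posterior precision
    S = np.linalg.inv(S_inv)                     # posterior covariance
    m = beta * S @ X.T @ y                       # posterior mean
    return m, S

def init_layers(X, y, layer_dims, alpha=1.0, beta=1.0, rng=None):
    """Layer-wise initialization sketch: fit one Bayesian linear model per
    output unit of the current layer, use its posterior mean/variance to
    initialize that layer's Gaussian variational posterior q(W), then
    propagate the activations and repeat for the next layer."""
    rng = np.random.default_rng(rng)
    params = []
    H = X  # activations fed into the current layer
    for d_out in layer_dims:
        means, vars_ = [], []
        for _ in range(d_out):
            # Assumed target construction: regress each unit on a random
            # projection of the labels (the paper's construction for
            # hidden layers may differ).
            t = (y @ rng.standard_normal((y.shape[1], 1))).ravel()
            m, S = bayesian_linear_posterior(H, t, alpha, beta)
            means.append(m)
            vars_.append(np.diag(S))
        M = np.stack(means, axis=1)   # mean of q(W), shape (d_in, d_out)
        V = np.stack(vars_, axis=1)   # diagonal variance of q(W)
        params.append((M, V))
        H = np.tanh(H @ M)            # propagate using the mean weights
    return params
```

The appeal of this scheme is that each layer's initialization is an exact Bayesian posterior for a local linear surrogate, so the variational means and variances start in a sensible region rather than at arbitrary random values.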
Author Information
Simone Rossi (EURECOM)
PhD student in Bayesian Deep Learning at Sorbonne Université/EURECOM (previously MSc in Electronic Engineering and MSc in Computer Engineering). Under the supervision of Prof. Maurizio Filippone, I'm investigating new and exciting problems in Deep Probabilistic Modelling, Approximate Inference, Bayesian Deep Learning and Variational Inference.
Pietro Michiardi (EURECOM)
Pietro Michiardi received his M.S. in Computer Science from EURECOM and his M.S. in Electrical Engineering from Politecnico di Torino. He received his Ph.D. in Computer Science from Telecom ParisTech (formerly ENST, Paris) and his HDR (Habilitation) from UNSA. Today, Pietro is a Professor of Computer Science at EURECOM, where he leads the Distributed System Group, which blends theory and systems research on large-scale distributed systems (including data processing and data storage) and on scalable algorithm design for mining massive amounts of data. Additional research interests include the system, algorithmic, and performance-evaluation aspects of distributed systems. Pietro was appointed head of the Data Science department in May 2016.
Maurizio Filippone (EURECOM)
Related Events (a corresponding poster, oral, or spotlight)
- 2019 Poster: Good Initializations of Variational Bayes for Deep Models
  Fri. Jun 14th, 01:30 -- 04:00 AM, Room Pacific Ballroom #83
More from the Same Authors
- 2022: A New Look on Diffusion Times for Score-based Generative Models
  Giulio Franzese · Simone Rossi · Lixuan Yang · Alessandro Finamore · Dario Rossi · Maurizio Filippone · Pietro Michiardi
- 2023: Improving Training of Likelihood-based Generative Models with Gaussian Homotopy
  Ba-Hien Tran · Giulio Franzese · Pietro Michiardi · Maurizio Filippone
- 2023 Poster: Fully Bayesian Autoencoders with Latent Sparse Gaussian Processes
  Ba-Hien Tran · Babak Shahbaba · Stephan Mandt · Maurizio Filippone
- 2022 Poster: Revisiting the Effects of Stochasticity for Hamiltonian Samplers
  Giulio Franzese · Dimitrios Milios · Maurizio Filippone · Pietro Michiardi
- 2022 Spotlight: Revisiting the Effects of Stochasticity for Hamiltonian Samplers
  Giulio Franzese · Dimitrios Milios · Maurizio Filippone · Pietro Michiardi
- 2021 Poster: Sparse within Sparse Gaussian Processes using Neighbor Information
  Gia-Lac Tran · Dimitrios Milios · Pietro Michiardi · Maurizio Filippone
- 2021 Spotlight: Sparse within Sparse Gaussian Processes using Neighbor Information
  Gia-Lac Tran · Dimitrios Milios · Pietro Michiardi · Maurizio Filippone
- 2021 Poster: An Identifiable Double VAE For Disentangled Representations
  Graziano Mita · Maurizio Filippone · Pietro Michiardi
- 2021 Spotlight: An Identifiable Double VAE For Disentangled Representations
  Graziano Mita · Maurizio Filippone · Pietro Michiardi
- 2018 Poster: Constraining the Dynamics of Deep Probabilistic Models
  Marco Lorenzi · Maurizio Filippone
- 2018 Oral: Constraining the Dynamics of Deep Probabilistic Models
  Marco Lorenzi · Maurizio Filippone
- 2017 Poster: Random Feature Expansions for Deep Gaussian Processes
  Kurt Cutajar · Edwin Bonilla · Pietro Michiardi · Maurizio Filippone
- 2017 Talk: Random Feature Expansions for Deep Gaussian Processes
  Kurt Cutajar · Edwin Bonilla · Pietro Michiardi · Maurizio Filippone