Abstract
Posterior distribution approximation is a central task in Bayesian inference. Stochastic gradient Langevin dynamics (SGLD) and its extensions have been used widely in practice and studied extensively in theory. While SGLD updates a single particle at a time, ensemble methods that update multiple particles simultaneously have recently been gaining attention. Unlike the naive parallel-chain SGLD, which updates multiple particles independently, ensemble methods update particles through their interactions. These methods are therefore expected to be more particle-efficient than naive parallel-chain SGLD, since each particle can exploit the behavior of the others through the interaction. Although ensemble methods have demonstrated superior performance numerically, no theoretical guarantee of such particle-efficiency exists, and it is unclear whether ensemble methods are truly superior to naive parallel-chain SGLD in the non-asymptotic setting. To address this problem, we propose a novel ensemble method that uses a non-reversible Markov chain for the interaction, and we present a non-asymptotic theoretical analysis of it. Our analysis shows, for the first time, that the interaction yields a faster convergence rate than naive parallel-chain SGLD in the non-asymptotic setting, provided the discretization error is appropriately controlled. Numerical experiments confirm that the discretization error can be controlled by tuning the interaction appropriately.
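To make the contrast in the abstract concrete, the Python/NumPy sketch below compares naive parallel-chain SGLD with an interacting ensemble whose particles are coupled through a skew-symmetric (hence non-reversible) matrix. This is a minimal illustration on a toy standard-Gaussian target, not the paper's exact algorithm: `grad_log_post`, the coupling matrix `J`, and the strength parameter `gamma` are illustrative assumptions, and exact gradients stand in for the minibatch stochastic gradients SGLD would use.

```python
import numpy as np

def grad_log_post(theta):
    # Toy target: standard Gaussian posterior, so grad log p(theta) = -theta.
    # A real SGLD run would use a minibatch stochastic gradient here.
    return -theta

def parallel_sgld(n_particles=4, dim=2, eta=1e-2, n_steps=1000, seed=0):
    """Naive parallel-chain SGLD: particles evolve independently."""
    rng = np.random.default_rng(seed)
    thetas = rng.standard_normal((n_particles, dim))
    for _ in range(n_steps):
        noise = rng.standard_normal((n_particles, dim))
        thetas = thetas + eta * grad_log_post(thetas) + np.sqrt(2.0 * eta) * noise
    return thetas

def ensemble_sgld(n_particles=4, dim=2, eta=1e-2, gamma=0.5, n_steps=1000, seed=0):
    """Ensemble sketch: a skew-symmetric coupling J mixes the particles'
    gradients, making the joint dynamics non-reversible while (in the
    continuous-time limit) leaving the product target invariant."""
    rng = np.random.default_rng(seed)
    thetas = rng.standard_normal((n_particles, dim))
    J = np.triu(np.ones((n_particles, n_particles)), k=1)
    J = J - J.T  # J^T = -J: skew-symmetric interaction between particles
    for _ in range(n_steps):
        grads = grad_log_post(thetas)
        drift = grads + gamma * (J @ grads)  # interaction term couples chains
        noise = rng.standard_normal((n_particles, dim))
        thetas = thetas + eta * drift + np.sqrt(2.0 * eta) * noise
    return thetas

# Both samplers target the same posterior; gamma tunes the interaction
# strength, and gamma = 0 recovers naive parallel-chain SGLD.
print(parallel_sgld().mean(axis=0), ensemble_sgld().mean(axis=0))
```

Setting `gamma = 0` recovers the naive parallel chains, so the interaction strength is the knob that, per the abstract, trades off acceleration against discretization error as the step size `eta` grows.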
Author Information
Futoshi Futami (NTT)
Issei Sato (The University of Tokyo / RIKEN)
Masashi Sugiyama (RIKEN / The University of Tokyo)
More from the Same Authors
- 2023: Invited Talk 3: Masashi Sugiyama (RIKEN & UTokyo) - Data distribution shift
  Masashi Sugiyama
- 2023: Enriching Disentanglement: Definitions to Metrics
  Yivan Zhang · Masashi Sugiyama
- 2023 Poster: GAT: Guided Adversarial Training with Pareto-optimal Auxiliary Tasks
  Salah GHAMIZI · Jingfeng ZHANG · Maxime Cordy · Mike Papadakis · Masashi Sugiyama · YVES LE TRAON
- 2023 Poster: Diversity-enhancing Generative Network for Few-shot Hypothesis Adaptation
  Ruijiang Dong · Feng Liu · Haoang Chi · Tongliang Liu · Mingming Gong · Gang Niu · Masashi Sugiyama · Bo Han
- 2023 Poster: Optimality of Thompson Sampling with Noninformative Priors for Pareto Bandits
  Jongyeong Lee · Junya Honda · Chao-Kai Chiang · Masashi Sugiyama
- 2023 Poster: A Category-theoretical Meta-analysis of Definitions of Disentanglement
  Yivan Zhang · Masashi Sugiyama
- 2022 Poster: Adversarial Attack and Defense for Non-Parametric Two-Sample Tests
  Xilie Xu · Jingfeng Zhang · Feng Liu · Masashi Sugiyama · Mohan Kankanhalli
- 2022 Poster: Adaptive Inertia: Disentangling the Effects of Adaptive Learning Rate and Momentum
  Zeke Xie · Xinrui Wang · Huishuai Zhang · Issei Sato · Masashi Sugiyama
- 2022 Spotlight: Adversarial Attack and Defense for Non-Parametric Two-Sample Tests
  Xilie Xu · Jingfeng Zhang · Feng Liu · Masashi Sugiyama · Mohan Kankanhalli
- 2022 Oral: Adaptive Inertia: Disentangling the Effects of Adaptive Learning Rate and Momentum
  Zeke Xie · Xinrui Wang · Huishuai Zhang · Issei Sato · Masashi Sugiyama
- 2022 Poster: To Smooth or Not? When Label Smoothing Meets Noisy Labels
  Jiaheng Wei · Hangyu Liu · Tongliang Liu · Gang Niu · Masashi Sugiyama · Yang Liu
- 2022 Oral: To Smooth or Not? When Label Smoothing Meets Noisy Labels
  Jiaheng Wei · Hangyu Liu · Tongliang Liu · Gang Niu · Masashi Sugiyama · Yang Liu
- 2021 Poster: Provably End-to-end Label-noise Learning without Anchor Points
  Xuefeng Li · Tongliang Liu · Bo Han · Gang Niu · Masashi Sugiyama
- 2021 Poster: Learning Diverse-Structured Networks for Adversarial Robustness
  Xuefeng Du · Jingfeng Zhang · Bo Han · Tongliang Liu · Yu Rong · Gang Niu · Junzhou Huang · Masashi Sugiyama
- 2021 Poster: CIFS: Improving Adversarial Robustness of CNNs via Channel-wise Importance-based Feature Selection
  Hanshu YAN · Jingfeng Zhang · Gang Niu · Jiashi Feng · Vincent Tan · Masashi Sugiyama
- 2021 Poster: Maximum Mean Discrepancy Test is Aware of Adversarial Attacks
  Ruize Gao · Feng Liu · Jingfeng Zhang · Bo Han · Tongliang Liu · Gang Niu · Masashi Sugiyama
- 2021 Spotlight: CIFS: Improving Adversarial Robustness of CNNs via Channel-wise Importance-based Feature Selection
  Hanshu YAN · Jingfeng Zhang · Gang Niu · Jiashi Feng · Vincent Tan · Masashi Sugiyama
- 2021 Spotlight: Provably End-to-end Label-noise Learning without Anchor Points
  Xuefeng Li · Tongliang Liu · Bo Han · Gang Niu · Masashi Sugiyama
- 2021 Spotlight: Learning Diverse-Structured Networks for Adversarial Robustness
  Xuefeng Du · Jingfeng Zhang · Bo Han · Tongliang Liu · Yu Rong · Gang Niu · Junzhou Huang · Masashi Sugiyama
- 2021 Spotlight: Maximum Mean Discrepancy Test is Aware of Adversarial Attacks
  Ruize Gao · Feng Liu · Jingfeng Zhang · Bo Han · Tongliang Liu · Gang Niu · Masashi Sugiyama
- 2021 Poster: Mediated Uncoupled Learning: Learning Functions without Direct Input-output Correspondences
  Ikko Yamane · Junya Honda · Florian YGER · Masashi Sugiyama
- 2021 Poster: Pointwise Binary Classification with Pairwise Confidence Comparisons
  Lei Feng · Senlin Shu · Nan Lu · Bo Han · Miao Xu · Gang Niu · Bo An · Masashi Sugiyama
- 2021 Poster: Binary Classification from Multiple Unlabeled Datasets via Surrogate Set Classification
  Nan Lu · Shida Lei · Gang Niu · Issei Sato · Masashi Sugiyama
- 2021 Poster: Learning from Similarity-Confidence Data
  Yuzhou Cao · Lei Feng · Yitian Xu · Bo An · Gang Niu · Masashi Sugiyama
- 2021 Poster: Confidence Scores Make Instance-dependent Label-noise Learning Possible
  Antonin Berthon · Bo Han · Gang Niu · Tongliang Liu · Masashi Sugiyama
- 2021 Poster: Learning Noise Transition Matrix from Only Noisy Labels via Total Variation Regularization
  Yivan Zhang · Gang Niu · Masashi Sugiyama
- 2021 Spotlight: Learning from Similarity-Confidence Data
  Yuzhou Cao · Lei Feng · Yitian Xu · Bo An · Gang Niu · Masashi Sugiyama
- 2021 Spotlight: Pointwise Binary Classification with Pairwise Confidence Comparisons
  Lei Feng · Senlin Shu · Nan Lu · Bo Han · Miao Xu · Gang Niu · Bo An · Masashi Sugiyama
- 2021 Spotlight: Mediated Uncoupled Learning: Learning Functions without Direct Input-output Correspondences
  Ikko Yamane · Junya Honda · Florian YGER · Masashi Sugiyama
- 2021 Spotlight: Binary Classification from Multiple Unlabeled Datasets via Surrogate Set Classification
  Nan Lu · Shida Lei · Gang Niu · Issei Sato · Masashi Sugiyama
- 2021 Oral: Learning Noise Transition Matrix from Only Noisy Labels via Total Variation Regularization
  Yivan Zhang · Gang Niu · Masashi Sugiyama
- 2021 Oral: Confidence Scores Make Instance-dependent Label-noise Learning Possible
  Antonin Berthon · Bo Han · Gang Niu · Tongliang Liu · Masashi Sugiyama
- 2021 Poster: Lower-Bounded Proper Losses for Weakly Supervised Classification
  Shuhei M Yoshida · Takashi Takenouchi · Masashi Sugiyama
- 2021 Poster: Classification with Rejection Based on Cost-sensitive Classification
  Nontawat Charoenphakdee · Zhenghang Cui · Yivan Zhang · Masashi Sugiyama
- 2021 Spotlight: Classification with Rejection Based on Cost-sensitive Classification
  Nontawat Charoenphakdee · Zhenghang Cui · Yivan Zhang · Masashi Sugiyama
- 2021 Spotlight: Lower-Bounded Proper Losses for Weakly Supervised Classification
  Shuhei M Yoshida · Takashi Takenouchi · Masashi Sugiyama
- 2021 Poster: Large-Margin Contrastive Learning with Distance Polarization Regularizer
  Shuo Chen · Gang Niu · Chen Gong · Jun Li · Jian Yang · Masashi Sugiyama
- 2021 Poster: Positive-Negative Momentum: Manipulating Stochastic Gradient Noise to Improve Generalization
  Zeke Xie · Li Yuan · Zhanxing Zhu · Masashi Sugiyama
- 2021 Spotlight: Positive-Negative Momentum: Manipulating Stochastic Gradient Noise to Improve Generalization
  Zeke Xie · Li Yuan · Zhanxing Zhu · Masashi Sugiyama
- 2021 Spotlight: Large-Margin Contrastive Learning with Distance Polarization Regularizer
  Shuo Chen · Gang Niu · Chen Gong · Jun Li · Jian Yang · Masashi Sugiyama
- 2020 Poster: Few-shot Domain Adaptation by Causal Mechanism Transfer
  Takeshi Teshima · Issei Sato · Masashi Sugiyama
- 2020 Poster: Do We Need Zero Training Loss After Achieving Zero Training Error?
  Takashi Ishida · Ikko Yamane · Tomoya Sakai · Gang Niu · Masashi Sugiyama
- 2020 Poster: Progressive Identification of True Labels for Partial-Label Learning
  Jiaqi Lv · Miao Xu · LEI FENG · Gang Niu · Xin Geng · Masashi Sugiyama
- 2020 Poster: Online Dense Subgraph Discovery via Blurred-Graph Feedback
  Yuko Kuroki · Atsushi Miyauchi · Junya Honda · Masashi Sugiyama
- 2020 Poster: SIGUA: Forgetting May Make Learning with Noisy Labels More Robust
  Bo Han · Gang Niu · Xingrui Yu · QUANMING YAO · Miao Xu · Ivor Tsang · Masashi Sugiyama
- 2020 Poster: Unbiased Risk Estimators Can Mislead: A Case Study of Learning with Complementary Labels
  Yu-Ting Chou · Gang Niu · Hsuan-Tien (Tien) Lin · Masashi Sugiyama
- 2020 Poster: Attacks Which Do Not Kill Training Make Adversarial Learning Stronger
  Jingfeng Zhang · Xilie Xu · Bo Han · Gang Niu · Lizhen Cui · Masashi Sugiyama · Mohan Kankanhalli
- 2020 Poster: Variational Imitation Learning with Diverse-quality Demonstrations
  Voot Tangkaratt · Bo Han · Mohammad Emtiyaz Khan · Masashi Sugiyama
- 2020 Poster: Learning with Multiple Complementary Labels
  LEI FENG · Takuo Kaneko · Bo Han · Gang Niu · Bo An · Masashi Sugiyama
- 2020 Poster: Normalized Flat Minima: Exploring Scale Invariant Definition of Flat Minima for Neural Networks Using PAC-Bayesian Analysis
  Yusuke Tsuzuku · Issei Sato · Masashi Sugiyama
- 2019 Poster: Classification from Positive, Unlabeled and Biased Negative Data
  Yu-Guan Hsieh · Gang Niu · Masashi Sugiyama
- 2019 Poster: Complementary-Label Learning for Arbitrary Losses and Models
  Takashi Ishida · Gang Niu · Aditya Menon · Masashi Sugiyama
- 2019 Oral: Complementary-Label Learning for Arbitrary Losses and Models
  Takashi Ishida · Gang Niu · Aditya Menon · Masashi Sugiyama
- 2019 Oral: Classification from Positive, Unlabeled and Biased Negative Data
  Yu-Guan Hsieh · Gang Niu · Masashi Sugiyama
- 2019 Poster: How does Disagreement Help Generalization against Label Corruption?
  Xingrui Yu · Bo Han · Jiangchao Yao · Gang Niu · Ivor Tsang · Masashi Sugiyama
- 2019 Oral: How does Disagreement Help Generalization against Label Corruption?
  Xingrui Yu · Bo Han · Jiangchao Yao · Gang Niu · Ivor Tsang · Masashi Sugiyama
- 2019 Poster: Imitation Learning from Imperfect Demonstration
  Yueh-Hua Wu · Nontawat Charoenphakdee · Han Bao · Voot Tangkaratt · Masashi Sugiyama
- 2019 Poster: On Symmetric Losses for Learning from Corrupted Labels
  Nontawat Charoenphakdee · Jongyeong Lee · Masashi Sugiyama
- 2019 Oral: Imitation Learning from Imperfect Demonstration
  Yueh-Hua Wu · Nontawat Charoenphakdee · Han Bao · Voot Tangkaratt · Masashi Sugiyama
- 2019 Oral: On Symmetric Losses for Learning from Corrupted Labels
  Nontawat Charoenphakdee · Jongyeong Lee · Masashi Sugiyama
- 2018 Poster: Classification from Pairwise Similarity and Unlabeled Data
  Han Bao · Gang Niu · Masashi Sugiyama
- 2018 Oral: Classification from Pairwise Similarity and Unlabeled Data
  Han Bao · Gang Niu · Masashi Sugiyama
- 2018 Poster: Does Distributionally Robust Supervised Learning Give Robust Classifiers?
  Weihua Hu · Gang Niu · Issei Sato · Masashi Sugiyama
- 2018 Oral: Does Distributionally Robust Supervised Learning Give Robust Classifiers?
  Weihua Hu · Gang Niu · Issei Sato · Masashi Sugiyama
- 2018 Poster: Analysis of Minimax Error Rate for Crowdsourcing and Its Application to Worker Clustering Model
  Hideaki Imamura · Issei Sato · Masashi Sugiyama
- 2018 Oral: Analysis of Minimax Error Rate for Crowdsourcing and Its Application to Worker Clustering Model
  Hideaki Imamura · Issei Sato · Masashi Sugiyama
- 2017 Poster: Learning Discrete Representations via Information Maximizing Self-Augmented Training
  Weihua Hu · Takeru Miyato · Seiya Tokui · Eiichi Matsumoto · Masashi Sugiyama
- 2017 Talk: Learning Discrete Representations via Information Maximizing Self-Augmented Training
  Weihua Hu · Takeru Miyato · Seiya Tokui · Eiichi Matsumoto · Masashi Sugiyama
- 2017 Poster: Semi-Supervised Classification Based on Classification from Positive and Unlabeled Data
  Tomoya Sakai · Marthinus C du Plessis · Gang Niu · Masashi Sugiyama
- 2017 Poster: Evaluating the Variance of Likelihood-Ratio Gradient Estimators
  Seiya Tokui · Issei Sato
- 2017 Talk: Semi-Supervised Classification Based on Classification from Positive and Unlabeled Data
  Tomoya Sakai · Marthinus C du Plessis · Gang Niu · Masashi Sugiyama
- 2017 Talk: Evaluating the Variance of Likelihood-Ratio Gradient Estimators
  Seiya Tokui · Issei Sato