Poster
Learning Maximum-A-Posteriori Perturbation Models for Structured Prediction in Polynomial Time
Asish Ghoshal · Jean Honorio
MAP perturbation models have emerged as a powerful framework for inference in structured prediction. Such models provide a way to efficiently sample from the Gibbs distribution and facilitate predictions that are robust to random noise. In this paper, we propose a provably polynomial time randomized algorithm for learning the parameters of perturbed MAP predictors. Our approach is based on minimizing a novel Rademacher-based generalization bound on the expected loss of a perturbed MAP predictor, which can be computed in polynomial time. We obtain conditions under which our randomized learning algorithm can guarantee generalization to unseen examples.
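The sampling property mentioned in the abstract can be illustrated with a minimal sketch (not the paper's algorithm): if i.i.d. Gumbel noise is added to the score of every configuration and the argmax is taken, the result is an exact sample from the Gibbs distribution over those configurations. The function name `perturb_and_map` and the toy score vector below are illustrative assumptions.

```python
import numpy as np

def perturb_and_map(scores, rng):
    """Return the index of the highest Gumbel-perturbed score.

    Perturbing every configuration's score with i.i.d. Gumbel(0, 1)
    noise and taking the argmax draws an exact sample from the Gibbs
    distribution softmax(scores). (In structured settings with
    exponentially many configurations, practical perturb-and-MAP
    methods instead use low-dimensional perturbations, which is only
    an approximation.)
    """
    gumbel = rng.gumbel(size=scores.shape)
    return int(np.argmax(scores + gumbel))

# Toy example: 4 configurations with unnormalized log-potentials.
scores = np.array([1.0, 2.0, 0.5, 1.5])
rng = np.random.default_rng(0)

# Empirical frequencies of repeated perturb-and-MAP draws approach
# the Gibbs distribution softmax(scores).
draws = [perturb_and_map(scores, rng) for _ in range(20000)]
freqs = np.bincount(draws, minlength=scores.size) / len(draws)
gibbs = np.exp(scores) / np.exp(scores).sum()
```

Here `freqs` matches `gibbs` up to Monte Carlo error, which is the sense in which perturbed MAP predictors give efficient access to Gibbs-distribution samples.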
Author Information
Asish Ghoshal (Purdue University)
Jean Honorio (Purdue University)
Related Events (a corresponding poster, oral, or spotlight)
- 2018 Oral: Learning Maximum-A-Posteriori Perturbation Models for Structured Prediction in Polynomial Time
  Thu. Jul 12th 09:00 -- 09:20 AM, Room A5
More from the Same Authors
- 2023 Poster: Exact Inference in High-order Structured Prediction
  Chuyang Ke · Jean Honorio
- 2022 Poster: A Simple Unified Framework for High Dimensional Bandit Problems
  Wenjie Li · Adarsh Barik · Jean Honorio
- 2022 Spotlight: A Simple Unified Framework for High Dimensional Bandit Problems
  Wenjie Li · Adarsh Barik · Jean Honorio
- 2022 Poster: Sparse Mixed Linear Regression with Guarantees: Taming an Intractable Problem with Invex Relaxation
  Adarsh Barik · Jean Honorio
- 2022 Spotlight: Sparse Mixed Linear Regression with Guarantees: Taming an Intractable Problem with Invex Relaxation
  Adarsh Barik · Jean Honorio
- 2021 Poster: Meta Learning for Support Recovery in High-dimensional Precision Matrix Estimation
  Qian Zhang · Yilin Zheng · Jean Honorio
- 2021 Poster: A Lower Bound for the Sample Complexity of Inverse Reinforcement Learning
  Abi Komanduru · Jean Honorio
- 2021 Spotlight: A Lower Bound for the Sample Complexity of Inverse Reinforcement Learning
  Abi Komanduru · Jean Honorio
- 2021 Spotlight: Meta Learning for Support Recovery in High-dimensional Precision Matrix Estimation
  Qian Zhang · Yilin Zheng · Jean Honorio
- 2019 Poster: Optimality Implies Kernel Sum Classifiers are Statistically Efficient
  Raphael Meyer · Jean Honorio
- 2019 Oral: Optimality Implies Kernel Sum Classifiers are Statistically Efficient
  Raphael Meyer · Jean Honorio