Oral
Optimality Implies Kernel Sum Classifiers are Statistically Efficient
Raphael Meyer · Jean Honorio
We propose a novel combination of optimization tools with learning theory bounds in order to analyze the sample complexity of optimal classifiers. This contrasts with typical learning-theoretic results, which hold for all (potentially suboptimal) classifiers. Our work also justifies assumptions made in prior work on multiple kernel learning. As a byproduct of this analysis, we provide a new form of Rademacher complexity bound for hypothesis sets restricted to optimal classifiers.
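For readers unfamiliar with the object of study, the sketch below shows what a kernel sum classifier looks like in practice: a standard kernel SVM trained on a fixed convex combination of base kernels, here a linear and an RBF kernel via scikit-learn's precomputed-kernel interface. This is only an illustrative assumption of the multiple-kernel-learning setting, not the authors' algorithm or their optimality analysis; the mixing weights and dataset parameters are made up for the example.

```python
# Minimal sketch (not the paper's method): an SVM over a convex
# combination of base kernels, i.e. a "kernel sum" classifier.
from sklearn.datasets import make_classification
from sklearn.metrics.pairwise import linear_kernel, rbf_kernel
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

X, y = make_classification(n_samples=200, n_features=10, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

# Fixed, illustrative mixing weights; in multiple kernel learning
# these weights are themselves optimized.
w_lin, w_rbf = 0.5, 0.5

def sum_kernel(A, B):
    """Convex combination of a linear kernel and an RBF kernel."""
    return w_lin * linear_kernel(A, B) + w_rbf * rbf_kernel(A, B, gamma=0.1)

clf = SVC(kernel="precomputed", C=1.0)
clf.fit(sum_kernel(X_tr, X_tr), y_tr)          # train-vs-train Gram matrix
acc = clf.score(sum_kernel(X_te, X_tr), y_te)  # test-vs-train Gram matrix
print(f"test accuracy: {acc:.2f}")
```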
Author Information
Raphael Meyer (Purdue University)
Graduating with a BS from Purdue University in Spring 2019; joining NYU Tandon in Fall 2019.
Jean Honorio (Purdue University)
Related Events (a corresponding poster, oral, or spotlight)
- 2019 Poster: Optimality Implies Kernel Sum Classifiers are Statistically Efficient
  Wed. Jun 12th, 01:30 -- 04:00 AM, Room: Pacific Ballroom #204
More from the Same Authors
- 2023 Poster: Exact Inference in High-order Structured Prediction
  Chuyang Ke · Jean Honorio
- 2022 Poster: A Simple Unified Framework for High Dimensional Bandit Problems
  Wenjie Li · Adarsh Barik · Jean Honorio
- 2022 Spotlight: A Simple Unified Framework for High Dimensional Bandit Problems
  Wenjie Li · Adarsh Barik · Jean Honorio
- 2022 Poster: Sparse Mixed Linear Regression with Guarantees: Taming an Intractable Problem with Invex Relaxation
  Adarsh Barik · Jean Honorio
- 2022 Spotlight: Sparse Mixed Linear Regression with Guarantees: Taming an Intractable Problem with Invex Relaxation
  Adarsh Barik · Jean Honorio
- 2021 Poster: Meta Learning for Support Recovery in High-dimensional Precision Matrix Estimation
  Qian Zhang · Yilin Zheng · Jean Honorio
- 2021 Poster: A Lower Bound for the Sample Complexity of Inverse Reinforcement Learning
  Abi Komanduru · Jean Honorio
- 2021 Spotlight: A Lower Bound for the Sample Complexity of Inverse Reinforcement Learning
  Abi Komanduru · Jean Honorio
- 2021 Spotlight: Meta Learning for Support Recovery in High-dimensional Precision Matrix Estimation
  Qian Zhang · Yilin Zheng · Jean Honorio
- 2018 Poster: Learning Maximum-A-Posteriori Perturbation Models for Structured Prediction in Polynomial Time
  Asish Ghoshal · Jean Honorio
- 2018 Oral: Learning Maximum-A-Posteriori Perturbation Models for Structured Prediction in Polynomial Time
  Asish Ghoshal · Jean Honorio