
Oral

Optimality Implies Kernel Sum Classifiers are Statistically Efficient

Raphael Meyer · Jean Honorio

Abstract:

We propose a novel combination of optimization tools with learning-theory bounds in order to analyze the sample complexity of optimal classifiers. This contrasts with typical learning-theoretic results, which hold for all (potentially suboptimal) classifiers. Our work also justifies assumptions made in prior work on multiple kernel learning. As a byproduct of this analysis, we provide a new form of Rademacher hypothesis set for considering optimal classifiers.