Boosting algorithms iteratively produce linear combinations of more and more base hypotheses, and it has been observed experimentally that the generalization error keeps improving even after the training error reaches zero. One popular explanation attributes this to improvements in margins. A common goal in a long line of research is to obtain large margins using as few base hypotheses as possible, culminating in the AdaBoostV algorithm of Rätsch and Warmuth [JMLR'05]. The AdaBoostV algorithm was later conjectured to yield an optimal trade-off between the number of hypotheses trained and the minimal margin over all training points (Nie, Warmuth, Vishwanathan and Zhang [JMLR'13]). Our main contribution is a new algorithm refuting this conjecture. Furthermore, we prove a lower bound which implies that our new algorithm is optimal.
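To make the margin notion above concrete: for a voting classifier f(x) = Σ_t α_t h_t(x) with Σ_t |α_t| = 1 and labels y_i ∈ {-1, +1}, the margin of a training point (x_i, y_i) is y_i f(x_i), and the minimal margin is the smallest of these values over the training set. The sketch below is only an illustration of the quantity being maximized, not the algorithm from the paper; the hypothesis predictions H, labels y, and weights alpha are hypothetical.

```python
import numpy as np

# Hypothetical example: 4 training points, 3 base hypotheses.
# H[t, i] is the prediction (in {-1, +1}) of base hypothesis t on point i.
H = np.array([[+1, +1, -1, +1],
              [+1, -1, -1, +1],
              [-1, +1, +1, +1]])
y = np.array([+1, -1, -1, +1])           # true labels
alpha = np.array([0.5, 0.3, 0.2])        # hypothesis weights, normalized to sum to 1

f = alpha @ H                            # voting classifier value f(x_i) for each point
margins = y * f                          # margin of each training point
print("margins:", margins)
print("minimal margin:", margins.min())  # the quantity margin-maximizing boosting tries to make large
```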
Author Information
Alexander Mathiasen (Aarhus University)
Kasper Green Larsen (Aarhus University, MADALGO)
Allan Grønlund (Aarhus University, MADALGO)
Related Events (a corresponding poster, oral, or spotlight)
- 2019 Poster: Optimal Minimal Margin Maximization with Boosting
  Wed. Jun 12th 01:30 -- 04:00 AM, Room Pacific Ballroom #263
More from the Same Authors
- 2021 Poster: CountSketches, Feature Hashing and the Median of Three
  Kasper Green Larsen · Rasmus Pagh · Jakub Tětek
- 2021 Spotlight: CountSketches, Feature Hashing and the Median of Three
  Kasper Green Larsen · Rasmus Pagh · Jakub Tětek
- 2020 Poster: Near-Tight Margin-Based Generalization Bounds for Support Vector Machines
  Allan Grønlund · Lior Kamma · Kasper Green Larsen