Oral
Multicalibration as Boosting for Regression
Ira Globus-Harris · Declan Harrison · Michael Kearns · Aaron Roth · Jessica Sorrell
Ballroom C
Abstract:
We study the connection between multicalibration and boosting for squared error regression. First we prove a useful characterization of multicalibration in terms of a "swap regret"-like condition on squared error. Using this characterization, we give an exceedingly simple algorithm that can be analyzed both as a boosting algorithm for regression and as a multicalibration algorithm for a class H that makes use only of a standard squared error regression oracle for H. We give a weak learning assumption on H that ensures convergence to Bayes optimality without the need to make any realizability assumptions, giving us an agnostic boosting algorithm for regression. We then show that our weak learning assumption on H is both necessary and sufficient for multicalibration with respect to H to imply Bayes optimality, answering an open question. We also show that if H satisfies our weak learning condition relative to another class C, then multicalibration with respect to C implies multicalibration with respect to H. Finally, we investigate the empirical performance of our algorithm experimentally.
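The abstract describes an algorithm built solely on a squared error regression oracle. As an illustrative sketch only (not the paper's exact multicalibration procedure), the generic pattern of boosting for squared error via such an oracle is to repeatedly fit the oracle to the current residuals and accumulate the fits; here a plain least-squares linear fit stands in for the oracle over a hypothesis class, and the names `regression_oracle` and `boost` are hypothetical:

```python
import numpy as np

def regression_oracle(X, y):
    """Least-squares linear fit: a stand-in for a squared error
    regression oracle over some hypothesis class (illustrative choice)."""
    A = np.c_[X, np.ones(len(X))]  # add an intercept column
    w, *_ = np.linalg.lstsq(A, y, rcond=None)
    return lambda Xq: np.c_[Xq, np.ones(len(Xq))] @ w

def boost(X, y, rounds=10, eta=1.0):
    """Residual boosting for squared error: each round, fit the oracle
    to the residuals y - current predictions, then add eta times the
    new fit to the running predictor."""
    preds = np.zeros(len(y))
    models = []
    for _ in range(rounds):
        h = regression_oracle(X, y - preds)
        models.append(h)
        preds = preds + eta * h(X)
    # final predictor is the eta-weighted sum of the fitted models
    return lambda Xq: eta * sum(h(Xq) for h in models)
```

With a linear oracle the residuals become orthogonal to the features after one round, so later rounds contribute essentially nothing; a richer oracle class is what makes further rounds (and the multicalibration guarantees the abstract refers to) meaningful.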