Oral
Multicalibration as Boosting for Regression
Ira Globus-Harris · Declan Harrison · Michael Kearns · Aaron Roth · Jessica Sorrell

Thu Jul 27 06:56 PM -- 07:04 PM (PDT) @ Ballroom C
We study the connection between multicalibration and boosting for squared error regression. First, we prove a useful characterization of multicalibration in terms of a "swap regret"-like condition on squared error. Using this characterization, we give an exceedingly simple algorithm that can be analyzed both as a boosting algorithm for regression and as a multicalibration algorithm for a class $\mathcal{H}$, and that uses only a standard squared error regression oracle for $\mathcal{H}$. We give a weak learning assumption on $\mathcal{H}$ that ensures convergence to Bayes optimality without any realizability assumptions, yielding an agnostic boosting algorithm for regression. We then show that our weak learning assumption on $\mathcal{H}$ is both necessary and sufficient for multicalibration with respect to $\mathcal{H}$ to imply Bayes optimality, answering an open question. We also show that if $\mathcal{H}$ satisfies our weak learning condition relative to another class $\mathcal{C}$, then multicalibration with respect to $\mathcal{H}$ implies multicalibration with respect to $\mathcal{C}$. Finally, we evaluate our algorithm's empirical performance.
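To make the abstract concrete: roughly, and up to the paper's approximation terms, the swap-regret-style condition says that a predictor $f$ is multicalibrated with respect to $\mathcal{H}$ when, conditioned on each of its level sets, no $h \in \mathcal{H}$ improves on the constant prediction in squared error:

$$\mathbb{E}\left[(f(x)-y)^2 \,\middle|\, f(x)=v\right] \;\le\; \min_{h \in \mathcal{H}} \mathbb{E}\left[(h(x)-y)^2 \,\middle|\, f(x)=v\right] \qquad \text{for every level } v \text{ of } f.$$

The oracle-based algorithm this suggests is a simple boosting loop: repeatedly call the regression oracle on each level set of the current predictor, and swap in the oracle's predictions wherever they beat the current constant value. The sketch below is an illustrative Python rendering under assumed interfaces (a hypothetical `fit_oracle(X, y)` returning an object with a `.predict` method, and a fixed discretization grid); it is not the paper's exact pseudocode.

```python
import numpy as np

def multicalibrate_boost(X, y, fit_oracle, n_levels=20, eps=1e-3, max_rounds=100):
    """Boosting-style multicalibration loop (illustrative sketch).

    fit_oracle(X, y) -> h with h.predict(X): an assumed squared-error
    regression oracle for the class H; this interface is hypothetical.
    """
    # Discretize predictions onto a fixed grid so level sets are well defined.
    grid = np.linspace(y.min(), y.max(), n_levels)

    def snap(p):
        # Round each prediction to the nearest grid value.
        return grid[np.abs(p[:, None] - grid).argmin(axis=1)]

    f = snap(np.full(len(y), y.mean()))  # start from the constant predictor

    for _ in range(max_rounds):
        improved = False
        for v in np.unique(f):
            S = f == v                      # current level set {x : f(x) = v}
            h = fit_oracle(X[S], y[S])      # oracle call restricted to the level set
            pred = h.predict(X[S])
            # Swap in h's predictions only if they beat the constant v by more than eps.
            if np.mean((v - y[S]) ** 2) - np.mean((pred - y[S]) ** 2) > eps:
                f[S] = snap(pred)
                improved = True
        if not improved:                    # no level set has swap regret above eps
            break
    return f
```

Each accepted swap lowers the overall squared error by more than $\epsilon$ times the mass of the affected level set, so the loop terminates; under the paper's weak learning assumption on $\mathcal{H}$, the resulting predictor is approximately Bayes optimal.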

Author Information

Ira Globus-Harris (University of Pennsylvania)
Declan Harrison
Michael Kearns (University of Pennsylvania)
Aaron Roth (University of Pennsylvania)
Jessica Sorrell (University of Pennsylvania)
