

Poster in Workshop: Complex feedback in online learning

ActiveHedge: Hedge meets Active Learning

Bhuvesh Kumar · Jacob Abernethy · Venkatesh Saligrama


Abstract: We consider the classical problem of multi-class prediction with expert advice, but with an active learning twist. In this new setting the learner queries the labels of only a small number of examples, yet still aims to minimize regret to the best expert as usual; the learner is also allowed a short ``burn-in'' phase during which it can fast-forward and query certain highly informative examples. We design an algorithm that uses Hedge (aka Exponential Weights) as a subroutine, and we show that under a particular combinatorial constraint on the matrix of expert predictions we can obtain a strong regret guarantee while querying very few labels. This constraint, which we refer to as $\zeta$-compactness, or simply compactness, can be viewed as a non-stochastic variant of the disagreement coefficient, a popular parameter used to reason about the sample complexity of active learning in the IID setting. We also give a polynomial-time algorithm that computes the $\zeta$-compactness of a matrix up to an approximation factor of 3.
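To make the expert-advice setup concrete, the following is a minimal sketch of Hedge (exponential weights) combined with a simple disagreement-based label-query rule. Only the use of Hedge as a subroutine is taken from the abstract; the query rule, the function name, and all parameters below are illustrative assumptions and do not reproduce the paper's ActiveHedge algorithm, its burn-in phase, or its $\zeta$-compactness-based guarantees.

```python
import numpy as np

def hedge_with_selective_queries(expert_preds, labels, eta=0.5, num_classes=2):
    """Illustrative sketch (not the paper's ActiveHedge): multi-class prediction
    with expert advice where the learner queries a label only when the experts
    it still trusts disagree on the current example.

    expert_preds: (T, N) array, expert_preds[t, i] = class predicted by expert i at round t
    labels:       (T,)   array of true labels (revealed only when queried)
    """
    T, N = expert_preds.shape
    log_w = np.zeros(N)          # log-weights of the N experts (Hedge state)
    queries, mistakes = 0, 0

    for t in range(T):
        # Normalized expert weights for this round.
        w = np.exp(log_w - log_w.max())
        w /= w.sum()

        # Weighted vote over classes gives the learner's prediction.
        votes = np.zeros(num_classes)
        for i in range(N):
            votes[expert_preds[t, i]] += w[i]
        y_hat = int(votes.argmax())

        # Placeholder query rule (an assumption, not the paper's rule): query
        # only when experts carrying non-negligible weight disagree.
        active = expert_preds[t, w > 1e-6 / N]
        if np.unique(active).size > 1:
            queries += 1
            y = labels[t]
            mistakes += int(y_hat != y)
            # Hedge / exponential-weights update: penalize experts that erred.
            losses = (expert_preds[t] != y).astype(float)
            log_w -= eta * losses

    return mistakes, queries
```

The design point this sketch illustrates is that rounds on which the (still-credible) experts all agree carry no information for distinguishing them, so skipping their labels costs nothing in regret; how many labels the remaining rounds require is exactly what a quantity like the paper's $\zeta$-compactness is meant to control.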
