Poster
The information-theoretic value of unlabeled data in semi-supervised learning
Alexander Golovnev · Dávid Pál · Balázs Szörényi
Pacific Ballroom #175
Keywords: [ Semi-supervised learning ] [ Statistical Learning Theory ] [ Supervised Learning ] [ Unsupervised and Semi-supervised Learning ]
Abstract:
We quantify the separation between the number of labeled examples required to
learn in two settings: with and without knowledge of the distribution of the
unlabeled data. More specifically, we prove a separation by a multiplicative
factor of $\Theta(\log n)$ for the class of projections over the Boolean
hypercube of dimension $n$. We also prove that there is no separation for the
class of all functions on a domain of any size. Learning with knowledge of the
distribution (a.k.a. fixed-distribution learning) can be viewed as an idealized
scenario of semi-supervised learning in which the number of unlabeled data
points is so large that the unlabeled distribution is known exactly. For this
reason, we call this separation the value of unlabeled data.
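The class of projections mentioned in the abstract has a simple concrete form: each hypothesis maps a point of the Boolean hypercube to one of its coordinates. The following is a minimal illustrative sketch (the dimension, the example point, and all names are chosen for illustration and are not from the paper):

```python
# Sketch of the hypothesis class of projections over the Boolean
# hypercube {0,1}^n: hypothesis h_i maps a point x to its i-th coordinate.

n = 3  # dimension of the hypercube (illustrative choice)

def projection(i):
    """Return the hypothesis h_i with h_i(x) = x[i]."""
    return lambda x: x[i]

# The class contains exactly n hypotheses: h_0, ..., h_{n-1}.
hypotheses = [projection(i) for i in range(n)]

# A labeled example (x, y) is consistent with h_i iff h_i(x) == y.
# For instance, the single example x = (1, 0, 0) with label 1 rules out
# every projection except h_0.
x, y = (1, 0, 0), 1
consistent = [i for i, h in enumerate(hypotheses) if h(x) == y]
print(consistent)  # [0]
```

Because the class has only $n$ hypotheses, a learner needs relatively few labeled examples; the paper's result concerns how knowledge of the unlabeled distribution changes that number by a $\Theta(\log n)$ factor.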