

Poster

Doubly-Competitive Distribution Estimation

Yi Hao · Alon Orlitsky

Pacific Ballroom #166

Keywords: [ Statistical Learning Theory ] [ Information Theory and Estimation ]


Abstract:

Distribution estimation is a cornerstone of statistical learning. Its classical min-max formulation minimizes the estimation error for the worst-case distribution, and hence underperforms on practical distributions that, like power laws, are often rather simple. Modern research has therefore focused on two frameworks: structural estimation, which improves learning accuracy by assuming a simple structure of the underlying distribution; and competitive, or instance-optimal, estimation, which achieves the performance of a genie-aided estimator up to a small excess error that vanishes as the sample size grows, regardless of the distribution. This paper combines and strengthens the two frameworks. It designs a single estimator whose excess error vanishes both at a universal rate as the sample size grows and as the (unknown) distribution gets simpler. We show that the resulting algorithm significantly improves the performance guarantees of numerous competitive- and structural-estimation results. The algorithm runs in near-linear time and is robust to model misspecification and domain-symbol permutations.
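Since the paper's algorithm itself is not reproduced on this page, the following minimal Python sketch only illustrates the motivating phenomenon: under KL loss on a power-law source, the unsmoothed empirical estimator can fail badly (infinite loss whenever it misses a symbol with positive mass), while even a simple add-constant smoother's error shrinks as the sample size grows. All function names and parameters here are hypothetical illustrations, not the authors' estimator.

```python
# Illustrative sketch only -- NOT the paper's algorithm. It contrasts the
# plain empirical estimator with a simple add-constant smoother on a
# power-law source, tracking KL estimation error as the sample size grows.
import numpy as np

rng = np.random.default_rng(0)

def power_law(k, alpha=1.5):
    """Power-law distribution over k symbols: p(i) proportional to i^(-alpha)."""
    p = np.arange(1, k + 1, dtype=float) ** -alpha
    return p / p.sum()

def kl(p, q):
    """KL divergence D(p || q); infinite if q misses a symbol with p > 0."""
    mask = p > 0
    if np.any(q[mask] == 0):
        return float("inf")
    return float(np.sum(p[mask] * np.log(p[mask] / q[mask])))

def empirical(counts, n):
    """Unsmoothed empirical frequencies; assigns zero to unseen symbols."""
    return counts / n

def add_constant(counts, n, c=0.5):
    """Add-constant smoother: never assigns zero probability."""
    k = len(counts)
    return (counts + c) / (n + c * k)

k = 1000
p = power_law(k)
for n in [100, 1_000, 10_000, 100_000]:
    counts = rng.multinomial(n, p)
    # The empirical KL is typically infinite here (tail symbols unseen),
    # which is exactly why smoothed, competitive estimators are needed.
    print(f"n={n:>6}  empirical KL={kl(p, empirical(counts, n)):.4f}  "
          f"add-1/2 KL={kl(p, add_constant(counts, n)):.4f}")
```

In the competitive framework the benchmark is stronger than this simple smoother: the estimator is compared against a genie that knows the distribution up to a symbol permutation, and the paper's contribution is an excess error over that genie which vanishes both with the sample size and with the simplicity of the unknown distribution.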
