

Spotlight

Nearly Optimal Catoni’s M-estimator for Infinite Variance

Sujay Bhatt · Guanhua Fang · Ping Li · Gennady Samorodnitsky

Ballroom 3 & 4

Abstract: In this paper, we extend the remarkable M-estimator of Catoni~\citep{Cat12} to situations where the variance is infinite. In particular, given a sequence of i.i.d. random variables~$\{X_i\}_{i=1}^n$ from a distribution~$\mathcal{D}$ over~$\mathbb{R}$ with mean~$\mu$, we only assume the existence of a known upper bound~$\upsilon_{\varepsilon} > 0$ on the~$(1+\varepsilon)$-th central moment of the random variables, namely, for~$\varepsilon \in (0,1]$ \[ \mathbb{E}_{X_1 \sim \mathcal{D}} \Big| X_1 - \mu \Big|^{1+\varepsilon} \leq \upsilon_{\varepsilon}. \] The extension is non-trivial owing to the difficulty in characterizing the roots of certain polynomials of degree smaller than~$2$. The proposed estimator achieves the same order of magnitude and the same asymptotic constant as in~\citet{Cat12}, but for the case of bounded moments. We further propose a version of the estimator that does not even require knowledge of~$\upsilon_{\varepsilon}$, but instead adapts the moment bound in a data-driven manner. Finally, to illustrate the usefulness of the derived non-asymptotic confidence bounds, we consider an application in multi-armed bandits and propose best arm identification algorithms, in the fixed confidence setting, that outperform the state of the art.
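To make the setting concrete, below is a minimal Python sketch of a Catoni-style M-estimator adapted to a $(1+\varepsilon)$-th moment bound. The influence function (a heavy-tailed variant of the classical choice $\psi(x) = \mathrm{sign}(x)\log(1 + |x| + x^2/2)$) and the heuristic scaling parameter `alpha` are illustrative assumptions, not the paper's exact construction or constants; the names `catoni_psi`, `catoni_mean`, `eps`, `upsilon`, and `delta` are hypothetical.

```python
import numpy as np
from scipy.optimize import brentq


def catoni_psi(x, eps=1.0):
    """Catoni-style influence function.

    eps = 1 recovers the classical quadratic variant
    log(1 + |x| + x^2/2); eps < 1 is one plausible heavy-tail
    adaptation (an illustrative choice, not necessarily the
    paper's construction).
    """
    t = np.abs(x) ** (1.0 + eps) / (1.0 + eps)
    return np.sign(x) * np.log1p(np.abs(x) + t)


def catoni_mean(x, eps=1.0, upsilon=1.0, delta=0.05):
    """Solve sum_i psi(alpha * (x_i - theta)) = 0 for theta.

    The scaling alpha is a heuristic of order
    (log(1/delta) / (n * upsilon))^{eps/(1+eps)}; the paper derives
    the precise constant, which is not reproduced here.
    """
    x = np.asarray(x, dtype=float)
    n = len(x)
    alpha = (np.log(1.0 / delta) / (n * upsilon)) ** (eps / (1.0 + eps))

    def score(theta):
        return np.sum(catoni_psi(alpha * (x - theta), eps))

    # score(theta) is non-increasing in theta, strictly positive at
    # min(x) - 1 and strictly negative at max(x) + 1, so this bracket
    # is guaranteed to contain the unique root.
    return brentq(score, x.min() - 1.0, x.max() + 1.0)


# Usage: a heavy-tailed sample with infinite variance but finite
# (1 + eps)-th moment for eps < 0.5 (Student-t with 1.5 degrees of freedom).
rng = np.random.default_rng(0)
sample = rng.standard_t(df=1.5, size=10_000)
print(catoni_mean(sample, eps=0.4, upsilon=5.0, delta=0.05))
```

The robustness comes from the logarithmic growth of the influence function: a single extreme observation contributes only logarithmically to the score, so the root of the score equation concentrates around $\mu$ far more tightly than the empirical mean when the variance is infinite.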
