Maximum Likelihood Estimation for Learning Populations of Parameters
Ramya Korlakai Vinayak · Weihao Kong · Gregory Valiant · Sham Kakade

Tue Jun 11th 12:15 -- 12:20 PM @ Room 103

Consider a setting with $N$ independent individuals, each with an unknown parameter, $p_i \in [0, 1]$, drawn from some unknown distribution $P^\star$. After observing the outcomes of $t$ independent Bernoulli trials, i.e., $X_i \sim \text{Binomial}(t, p_i)$ per individual, our objective is to accurately estimate $P^\star$ in the sparse regime, namely when $t \ll N$. This problem arises in numerous domains, including the social sciences, psychology, healthcare, and biology, where the size of the population under study is usually large yet the number of observations per individual is often limited.
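The observation model above can be simulated directly. The sketch below is illustrative only: the choice of $P^\star$ as a Beta(2, 5) distribution, and the values of $N$ and $t$, are our assumptions for demonstration, not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative instance of the model: each individual i has an unknown
# parameter p_i drawn from P*, and we observe X_i ~ Binomial(t, p_i).
# P* is an assumed Beta(2, 5) here; the paper makes no such assumption.
N, t = 10_000, 10                    # sparse regime: t << N
p = rng.beta(2.0, 5.0, size=N)       # unknown parameters p_i ~ P*
X = rng.binomial(t, p)               # t Bernoulli trials per individual

print(X[:10], X.mean() / t)          # sample mean of X_i/t estimates E[p] = 2/7
```

Note that only the counts $X_i$ are available to the estimator; the individual $p_i$ are never observed.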

Our main result shows that, in this sparse regime where $t \ll N$, the maximum likelihood estimator (MLE) is both statistically minimax optimal and efficiently computable. Precisely, for sufficiently large $N$, the MLE achieves the information-theoretic optimal error bound of $\mathcal{O}(\frac{1}{t})$ for $t < c\log{N}$, with respect to the earth mover's distance (between the estimated and true distributions). More generally, in an exponentially large interval of $t$ beyond $c \log{N}$, the MLE achieves the minimax error bound of $\mathcal{O}(\frac{1}{\sqrt{t\log N}})$. In contrast, regardless of how large $N$ is, the naive "plug-in" estimator for this problem only achieves the sub-optimal error of $\Theta(\frac{1}{\sqrt{t}})$. Empirically, we also demonstrate that the MLE performs well on both synthetic and real datasets.
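The gap between the plug-in estimator and the MLE can be seen numerically. The sketch below is our illustrative setup, not the paper's experiment: the plug-in estimate is the empirical distribution of $X_i/t$, and the mixing-distribution MLE is approximated on a fixed grid via EM iterations (a standard way to compute the nonparametric MLE; the grid size, iteration count, and choice of $P^\star$ are our assumptions). Both are compared to the true parameters in earth mover's (1-Wasserstein) distance.

```python
import numpy as np
from scipy.stats import binom, wasserstein_distance

rng = np.random.default_rng(1)
N, t = 10_000, 10
p_true = rng.beta(2.0, 5.0, size=N)   # assumed P* for demonstration
X = rng.binomial(t, p_true)

# Naive plug-in: estimate each p_i by X_i / t and take the empirical distribution.
plug_in = X / t

# Approximate nonparametric MLE of P* on a grid, fit with EM.
grid = np.linspace(0.0, 1.0, 101)             # candidate support points
L = binom.pmf(X[:, None], t, grid[None, :])   # likelihood of each X_i at each grid point
w = np.full(grid.size, 1.0 / grid.size)       # initial mixing weights
for _ in range(500):
    post = L * w
    post /= post.sum(axis=1, keepdims=True)   # E-step: responsibilities
    w = post.mean(axis=0)                     # M-step: re-estimate weights

# Earth mover's distance of each estimate to the true parameters.
emd_plug_in = wasserstein_distance(plug_in, p_true)
emd_mle = wasserstein_distance(grid, p_true, u_weights=w)
print(f"plug-in EMD: {emd_plug_in:.4f}, MLE EMD: {emd_mle:.4f}")
```

The plug-in distribution is the true $P^\star$ convolved with binomial noise of scale $\Theta(\frac{1}{\sqrt{t}})$, which the MLE partially deconvolves, so its EMD to the truth is typically noticeably smaller.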

Author Information

Ramya Korlakai Vinayak (University of Washington)
Weihao Kong (Stanford University)
Gregory Valiant (Stanford University)
Sham Kakade (University of Washington)
