
Poster

Debiased Distribution Compression

Lingxiao Li · Raaz Dwivedi · Lester Mackey


Abstract: Modern compression methods can summarize a target distribution $\mathbb{P}$ more succinctly than i.i.d. sampling but require access to a low-bias input sequence like a Markov chain converging quickly to $\mathbb{P}$. We introduce a new suite of compression methods suitable for compression with biased input sequences. Given $n$ points targeting the wrong distribution and quadratic time, Stein kernel thinning (SKT) returns $\sqrt{n}$ equal-weighted points with $\widetilde{O}(n^{-1/2})$ maximum mean discrepancy (MMD) to $\mathbb{P}$. For larger-scale compression tasks, low-rank SKT achieves the same feat in sub-quadratic time using an adaptive low-rank debiasing procedure that may be of independent interest. For downstream tasks that support convex or constant-preserving weights, Stein recombination and Stein Cholesky achieve even greater parsimony, matching the guarantees of SKT with as few as $\textup{poly-log}(n)$ weighted points. Underlying these advances are new guarantees for the quality of convex-weighted coresets, the spectral decay of kernel matrices, and the covering numbers of Stein kernel Hilbert spaces. We complement these results with diverse posterior compression experiments for overcoming biases due to burn-in, approximate Markov chain Monte Carlo, and tempering.
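The Stein-kernel machinery above can be illustrated with a simplified sketch. The code below is not the paper's SKT algorithm (which uses a kernel-thinning halving scheme); it is a basic greedy Stein-points-style selection under assumptions made for illustration: a Gaussian base kernel with bandwidth `h`, a known score function $s(x) = \nabla \log p(x)$, and equal-weighted output points. Because the target's mean embedding vanishes under the Stein kernel, the squared MMD of an equal-weighted coreset is just the average of the Stein kernel matrix over the selected points, which the greedy step minimizes one point at a time.

```python
import numpy as np

def gaussian_stein_kernel(X, score_X, h=1.0):
    """Langevin Stein kernel matrix k_P for a Gaussian base kernel.

    X: (n, d) sample points; score_X: (n, d) scores s(x) = grad log p(x).
    For base kernel k(x,y) = exp(-||x-y||^2 / (2 h^2)), the Stein kernel is
    k_P(x,y) = [d/h^2 - ||x-y||^2/h^4 + (s(x)-s(y)).(x-y)/h^2 + s(x).s(y)] k(x,y).
    """
    d = X.shape[1]
    diff = X[:, None, :] - X[None, :, :]            # (n, n, d): x_i - x_j
    sq = np.sum(diff ** 2, axis=-1)                  # ||x_i - x_j||^2
    K = np.exp(-sq / (2 * h ** 2))                   # base Gaussian kernel
    sx_dot = np.einsum('id,ijd->ij', score_X, diff)  # s(x_i) . (x_i - x_j)
    sy_dot = np.einsum('jd,ijd->ij', score_X, diff)  # s(x_j) . (x_i - x_j)
    return K * (d / h ** 2 - sq / h ** 4
                + (sx_dot - sy_dot) / h ** 2
                + score_X @ score_X.T)

def greedy_stein_thinning(KP, m):
    """Greedily pick m indices minimizing the equal-weighted coreset KSD.

    Since E_P k_P(x, .) = 0, MMD^2 of an equal-weighted coreset S is
    (1/|S|^2) * sum_{i,j in S} KP[i, j]; adding candidate j increases the
    double sum by KP[j, j] + 2 * sum_{i in S} KP[i, j], which we minimize.
    """
    n = KP.shape[0]
    running = np.zeros(n)                            # sum of KP[:, selected]
    idx = []
    for _ in range(m):
        obj = np.diag(KP) + 2 * running
        j = int(np.argmin(obj))
        idx.append(j)
        running += KP[:, j]
    return idx
```

For a standard normal target, the score is simply $s(x) = -x$; feeding in a biased (shifted) sample and selecting $\lceil\sqrt{n}\rceil$ points mimics, in miniature, the debiasing-by-reweighting-toward-$\mathbb{P}$ effect that the paper's methods achieve with explicit guarantees.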