Score-Repellent Monte Carlo: Toward Efficient Non-Markovian Sampler with Constant Memory in General State Spaces
Jie Hu ⋅ Lingyun Chen ⋅ Geeho Kim ⋅ Jinyoung Choi ⋅ Bohyung Han ⋅ Do-Young Eun
Abstract
History-dependent sampling can reduce long-run Monte Carlo variance by discouraging redundant revisits, but existing schemes typically encode history through the empirical measure on finite state spaces, which is infeasible in high-dimensional discrete configuration spaces and ill-posed in continuous domains. We propose the *Score-Repellent Monte Carlo* (SRMC) framework, which summarizes trajectory history by a fixed, $d$-dimensional running average of score evaluations and converts it into a history-dependent surrogate target via an exponential *score tilt*. The resulting surrogate family is normalization-free in the standard MCMC sense, yielding a generic wrapper: at each iteration, any standard base kernel designed for the target $\pi$ can be run on the current surrogate $\pi_{\theta_n}$ while the history is updated online. We analyze the coupled evolution of any estimator and the history recursion using stochastic approximation with controlled Markovian noise, establishing almost sure convergence and a joint central limit theorem. We identify regimes where the asymptotic covariance decreases as the repellence strength $\alpha$ increases, exhibiting $O(1/\alpha)$ scaling and reproducing the near-zero-variance effect, now on general state spaces with constant memory. Empirical results across continuous targets and discrete energy-based models demonstrate that SRMC delivers notable improvements in estimator variance and in mode coverage for Gaussian mixtures, all while retaining $O(d)$ memory usage and minimal per-iteration overhead.
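To make the wrapper structure concrete, the following is a minimal illustrative sketch, not the authors' implementation. It assumes the exponential score tilt takes a linear form, $\log \pi_\theta(x) = \log \pi(x) - \alpha\,\theta^\top x$, where $\theta_n$ is the running average of past score evaluations, and uses random-walk Metropolis as the base kernel; the function and parameter names (`srmc_sample`, `step`) are hypothetical.

```python
import numpy as np

def srmc_sample(log_pi, score, x0, alpha=1.0, step=0.5, n_iter=5000, seed=0):
    """Hedged sketch of an SRMC-style wrapper: run a base MCMC kernel on a
    history-tilted surrogate while averaging score evaluations online."""
    rng = np.random.default_rng(seed)
    x = np.asarray(x0, dtype=float)
    d = x.size
    theta = np.zeros(d)                 # O(d) history summary (running score average)
    samples = np.empty((n_iter, d))
    for n in range(n_iter):
        # history-dependent surrogate: log pi_theta(x) = log pi(x) - alpha * <theta, x>
        def log_surrogate(y):
            return log_pi(y) - alpha * (theta @ y)
        # base kernel: random-walk Metropolis step targeting the current surrogate
        prop = x + step * rng.standard_normal(d)
        if np.log(rng.uniform()) < log_surrogate(prop) - log_surrogate(x):
            x = prop
        # online update of the running average of score evaluations
        theta += (score(x) - theta) / (n + 1)
        samples[n] = x
    return samples, theta

# usage: standard Gaussian target, whose score is score(x) = -x
samples, theta = srmc_sample(lambda x: -0.5 * (x @ x), lambda x: -x,
                             x0=np.zeros(2), alpha=0.5)
```

The memory footprint is constant in the trajectory length: only the $d$-dimensional vector `theta` carries the history, matching the abstract's $O(d)$ claim.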