Variance-Reduced Zeroth-Order Langevin Dynamics for Non-Log-Concave Black-Box Sampling and Inverse Problems
M. Sahin ⋅ Behzad Sharif ⋅ Abolfazl Hashemi
Abstract
Sampling from high-dimensional, non-log-concave distributions with unnormalized densities constitutes a fundamental challenge in machine learning, particularly when gradient information is inaccessible or computationally prohibitive. While Langevin dynamics provides a robust mechanism for gradient-based sampling, its extension to the derivative-free setting is frequently compromised by high estimator variance and a lack of rigorous convergence guarantees in non-convex landscapes. In this work, we propose a principled variance-reduced zeroth-order Langevin dynamics framework that addresses these limitations for both general non-log-concave black-box sampling and inverse problems utilizing pre-trained score-based generative priors. We introduce a novel gradient estimator that significantly mitigates the variance inherent in traditional zeroth-order methods, enabling stable navigation of complex, multimodal posterior distributions. Theoretically, we establish the first non-asymptotic complexity bounds for this class of algorithms, proving convergence to the target distribution to $\varepsilon$ accuracy in relative Fisher information and, under a Poincaré inequality, in squared total variation distance, specifically for non-log-concave densities. We empirically validate our framework, demonstrating superior mixing and sampling accuracy on standard black-box benchmarks and establishing state-of-the-art performance for derivative-free linear and nonlinear inverse problems.
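To make the black-box setting concrete, the sketch below implements a minimal zeroth-order Langevin sampler in which the unavailable gradient $\nabla f$ is replaced by a two-sided (antithetic) Gaussian-smoothing estimator, one standard variance-reduction device for derivative-free methods. This is an illustrative sketch under stated assumptions, not the paper's actual estimator or algorithm; the function names (`zo_grad`, `zo_langevin`) and all parameter choices are hypothetical.

```python
import numpy as np

def zo_grad(f, x, m=20, mu=1e-3, rng=None):
    """Two-sided Gaussian-smoothing gradient estimate of f at x.

    Antithetic (central-difference) queries are a common variance-reduction
    device for zeroth-order estimators; the paper's estimator may differ.
    """
    rng = np.random.default_rng() if rng is None else rng
    d = x.shape[0]
    g = np.zeros(d)
    for _ in range(m):
        u = rng.standard_normal(d)  # random Gaussian search direction
        g += (f(x + mu * u) - f(x - mu * u)) / (2.0 * mu) * u
    return g / m

def zo_langevin(f, x0, eta=1e-3, n_steps=5000, m=20, mu=1e-3, seed=0):
    """Zeroth-order unadjusted Langevin dynamics targeting p(x) ∝ exp(-f(x)).

    Each step substitutes the derivative-free estimate for ∇f and injects
    the usual sqrt(2*eta) Gaussian noise of Langevin dynamics.
    """
    rng = np.random.default_rng(seed)
    x = np.array(x0, dtype=float)
    samples = []
    for _ in range(n_steps):
        g = zo_grad(f, x, m=m, mu=mu, rng=rng)
        x = x - eta * g + np.sqrt(2.0 * eta) * rng.standard_normal(x.shape)
        samples.append(x.copy())
    return np.array(samples)

if __name__ == "__main__":
    # Example target: a 2-D bimodal (hence non-log-concave) Gaussian mixture.
    f = lambda x: -np.log(np.exp(-0.5 * np.sum((x - 2.0) ** 2))
                          + np.exp(-0.5 * np.sum((x + 2.0) ** 2)))
    samples = zo_langevin(f, x0=np.zeros(2))
    print(samples[-5:])
```

In this sketch the per-step query cost is $2m$ evaluations of $f$; the step size $\eta$, smoothing radius $\mu$, and batch size $m$ would govern the bias-variance trade-off that the paper's non-asymptotic analysis makes precise.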