Steady-State Behavior of Constant-Stepsize Stochastic Approximation: Gaussian Approximation and Tail Bounds
Yuyang Wang ⋅ Felix Wang ⋅ Zedong Wang ⋅ Ijay Narang ⋅ Yuzhou Wang ⋅ Siva Maguluri
Abstract
Constant-stepsize stochastic approximation (SA) is widely used in learning for its computational efficiency. For a fixed stepsize, the iterates typically admit a stationary distribution that is rarely tractable in closed form. Prior work shows that, as the stepsize $\alpha \downarrow 0$, the centered-and-scaled steady state converges weakly to a Gaussian limit. However, for fixed $\alpha$, this weak convergence offers no usable error bound for approximating the steady state by its Gaussian limit. This paper provides explicit, non-asymptotic error bounds for fixed $\alpha$. We study (i) stochastic gradient descent (SGD) on smooth, strongly convex objectives, (ii) linear SA, and (iii) contractive nonlinear SA, and we treat both i.i.d. and Markovian noise models to ensure broad applicability. Our main results give explicit, dimension- and stepsize-dependent bounds in Wasserstein distance between the centered-and-scaled steady state and its Gaussian limit, with errors that vanish as $\alpha \downarrow 0$. We further derive sharp tail control, comparing the steady-state tail probabilities to Gaussian tails with an explicit error term that decays in both the deviation level and $\alpha$. Our analysis combines the steady-state Stein's method with moment bounds on the SA iterates, and uses Poisson equation techniques to manage temporal dependence in the Markovian noise setting. We adapt the same toolkit to SGD with general convex objectives and suggest non-Gaussian limiting behavior, which we validate in numerical experiments.
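As a minimal illustration of the setup, the following sketch simulates constant-stepsize SGD on a one-dimensional quadratic $f(x) = x^2/2$ with i.i.d. Gaussian gradient noise, then examines the centered-and-scaled stationary iterates. This toy recursion is linear, so its stationary variance is available in closed form; the stepsize, noise level, and sample sizes below are arbitrary choices for demonstration and are not taken from the paper.

```python
import numpy as np

# Toy example (assumed parameters, not from the paper): constant-stepsize
# SGD on f(x) = x^2 / 2 with i.i.d. Gaussian gradient noise. The iterates
# x_{k+1} = x_k - alpha * (x_k + sigma * xi_k) form a Markov chain; after
# centering at the minimizer (theta* = 0) and scaling by 1/sqrt(alpha),
# the stationary distribution is approximately Gaussian for small alpha.
rng = np.random.default_rng(0)
alpha = 0.05          # constant stepsize (assumed value)
sigma = 1.0           # gradient-noise standard deviation (assumed value)
n_burn, n_keep = 10_000, 100_000

x = 0.0
samples = []
for k in range(n_burn + n_keep):
    grad = x + sigma * rng.standard_normal()  # noisy gradient of x^2 / 2
    x -= alpha * grad                         # SGD update
    if k >= n_burn:                           # discard burn-in iterates
        samples.append(x)

scaled = np.array(samples) / np.sqrt(alpha)   # centered-and-scaled steady state

# For this linear recursion the stationary variance solves
# v = (1 - alpha)^2 v + alpha^2 sigma^2, i.e. v = alpha * sigma^2 / (2 - alpha),
# so the scaled variance tends to sigma^2 / 2 as alpha -> 0, matching the
# Gaussian limit N(0, sigma^2 / 2).
print("empirical scaled variance :", scaled.var())
print("small-alpha Gaussian limit:", sigma**2 / 2)
```

For small $\alpha$ the empirical scaled variance is close to $\sigma^2/2$, consistent with the weak convergence described above; the explicit bounds in the paper quantify the gap for a fixed, nonzero $\alpha$.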