Spectral Bridge Variational Inference: Dynamic LoRA via Bures-Wasserstein Gradient Flows
Abstract
Parameter-Efficient Fine-Tuning (PEFT) is essential for adapting Large Language Models, yet existing methods struggle to balance model capacity against computational efficiency: static approaches enforce rigid low-rank constraints, while dynamic alternatives incur significant memory overhead. To resolve this dilemma, we propose Spectral Bridge Variational Inference (SBVI), a geometric framework that reformulates LoRA not as static parameter optimization but as a continuous Bures-Wasserstein gradient flow on the manifold of Gaussian measures. Rather than fixing the rank at initialization, SBVI governs the evolution of the singular values via a stochastic differential equation driven by a thermodynamic competition between task gradients and adaptive entropic friction. This competition induces a spectral bifurcation that automatically prunes redundant noise modes while amplifying signal-rich components, discovering a layer-wise optimal rank distribution. We derive a scalable algorithm with linear complexity using factorized Riemannian retractions and an Empirical Bayes friction update. Experiments on reasoning and coding benchmarks show that SBVI outperforms existing static and dynamic adaptation methods in both accuracy and memory efficiency.
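For concreteness, one plausible reading of the singular-value dynamics described above is an overdamped Langevin-type SDE; this is an illustrative sketch, not an equation stated in the abstract, and the per-mode friction coefficients $\gamma_i$, inverse temperature $\beta$, and independent Wiener processes $W_t^{(i)}$ are assumed notation:

\[
d\sigma_i \;=\; \Big( -\frac{\partial \mathcal{L}}{\partial \sigma_i} \;-\; \gamma_i\,\sigma_i \Big)\, dt \;+\; \sqrt{2\,\beta^{-1}\gamma_i}\; dW_t^{(i)},
\]

where the task-gradient drift $-\partial\mathcal{L}/\partial\sigma_i$ amplifies signal-carrying modes and the adaptive friction term $-\gamma_i\,\sigma_i$ (with $\gamma_i$ set via the Empirical Bayes update) drives redundant modes toward zero, yielding the spectral bifurcation between retained and pruned components.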