Singularity-aware Optimization via Randomized Geometric Probing: Towards Stable Non-smooth Optimization
Ruoran Xu ⋅ Borong She ⋅ Xiaobo Jin ⋅ Qiufeng Wang
Abstract
Deep learning optimization relies heavily on the assumption of smooth loss landscapes, a condition systematically violated by modern architectures due to non-smooth components like ReLU activations and quantization operators. In such non-smooth regimes, adaptive optimizers such as Adam suffer from gradient chattering—violent oscillations caused by conflicting signals within the Clarke subdifferential—leading to poor convergence and suboptimal generalization. To address this, we introduce Singularity-aware Adam (S-Adam), a novel optimizer that stabilizes training by dynamically modulating step sizes based on local geometric instability. Our key contribution is the Local Geometric Instability (LGI) metric, a computationally efficient estimator of the Clarke subdifferential diameter derived from the variance of randomized directional derivatives. S-Adam incorporates an adaptive damping mechanism $\exp(-\lambda \rho_t)$ that decelerates updates in high-instability regions while preserving fast convergence in smooth basins. We provide a rigorous convergence analysis using differential inclusions, proving that S-Adam converges almost surely to $(\delta, \epsilon)$-Clarke stationary points at the optimal $\mathcal{O}(1/\sqrt{T})$ rate. Empirical evaluations on Quantization-Aware Training (QAT) and high-volatility transfer learning demonstrate that S-Adam consistently outperforms AdamW and Prox-SGD, achieving accuracy gains of up to +6\% on CIFAR-100 and +3\% on TinyImageNet while effectively mitigating gradient oscillations.
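The abstract's two core ingredients, the LGI metric (variance of randomized directional derivatives as a proxy for the Clarke subdifferential diameter) and the damping factor $\exp(-\lambda \rho_t)$, can be illustrated with a minimal sketch. The code below is an assumption-laden illustration, not the paper's implementation: the finite-difference probing scheme, the function names, and the hyperparameters (`num_probes`, `delta`, `lam`) are all hypothetical choices made for exposition.

```python
import torch

def local_geometric_instability(loss_fn, params, num_probes=4, delta=1e-3):
    """Sketch of an LGI estimate: variance of randomized directional derivatives
    around the current parameters, probed by small finite-difference steps.
    A large variance suggests a wide Clarke subdifferential, i.e. a
    geometrically unstable (non-smooth) region."""
    flat_dim = sum(p.numel() for p in params)
    base_loss = loss_fn().item()
    dir_derivs = []
    for _ in range(num_probes):
        # Random unit direction over the flattened parameter space.
        u = torch.randn(flat_dim)
        u = u / (u.norm() + 1e-12)
        # Perturb parameters along u, evaluate the loss, then undo the perturbation.
        offset = 0
        with torch.no_grad():
            for p in params:
                n = p.numel()
                p.add_(delta * u[offset:offset + n].view_as(p))
                offset += n
        probed_loss = loss_fn().item()
        offset = 0
        with torch.no_grad():
            for p in params:
                n = p.numel()
                p.sub_(delta * u[offset:offset + n].view_as(p))
                offset += n
        dir_derivs.append((probed_loss - base_loss) / delta)
    return float(torch.tensor(dir_derivs).var())

def damped_step_scale(rho_t, lam=1.0):
    """Adaptive damping factor exp(-lambda * rho_t): multiplies the Adam step,
    shrinking updates where the LGI estimate rho_t is large and leaving
    smooth regions (rho_t near zero) essentially unchanged."""
    import math
    return math.exp(-lam * rho_t)
```

In use, one would compute `rho_t = local_geometric_instability(...)` periodically during training and scale the optimizer's effective step by `damped_step_scale(rho_t)`; how often to probe and how to fold the scale into Adam's update are design choices the abstract does not specify.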