Depth-Progressive Monotonic Learning without Backpropagation
Abstract
Backpropagation (BP) remains the dominant training paradigm for deep neural networks, yet its reliance on global gradient propagation fundamentally induces the update-locking problem, enforcing strong inter-layer dependencies in parameter updates. To address this limitation, we propose Depth-Progressive Monotonic Learning (DMoL), a training scheme that assigns layer-wise local belief objectives and incrementally refines them across network depth, enabling unlocked parameter updates. As a result, DMoL supports dynamic modification of network depth during training, adapting to available compute and device resources while maintaining stable optimization. We provide theoretical guarantees that the layer-wise local belief objectives improve monotonically with increasing depth and converge exponentially. Empirically, DMoL consistently matches or outperforms BP across diverse tasks, yielding a 4.3\% accuracy gain on CIFAR-100, mitigating over-smoothing in deep graph neural networks (+37.5\% on Cora), and reducing the final loss by over 35\% in diffusion model training, highlighting its robustness and flexibility as an alternative to BP. The code is publicly available at: https://anonymous.4open.science/r/DMoL.
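To make the training scheme described above concrete, the following is a minimal PyTorch sketch of layer-wise local learning with progressive depth growth. It is an illustration under assumptions, not the authors' implementation: the per-layer linear classifier head standing in for the "local belief objective" is assumed, and all names (`LocalBlock`, `local_step`, `train_batch`) are hypothetical.

```python
# Minimal sketch of layer-wise local training with depth growth.
# The local "belief objective" is approximated by a per-layer linear
# head trained with cross-entropy (an assumption, not the paper's
# actual objective). Detaching each layer's input keeps gradients
# local, so parameter updates are "unlocked" across layers.
import torch
import torch.nn as nn
import torch.nn.functional as F

class LocalBlock(nn.Module):
    """One layer plus its own local head and optimizer; no global backprop."""
    def __init__(self, in_dim, hidden_dim, num_classes, lr=1e-3):
        super().__init__()
        self.body = nn.Sequential(nn.Linear(in_dim, hidden_dim), nn.ReLU())
        self.head = nn.Linear(hidden_dim, num_classes)  # assumed local readout
        self.opt = torch.optim.Adam(self.parameters(), lr=lr)

    def local_step(self, x, y):
        # Detach the input: gradients never cross layer boundaries,
        # so each layer updates independently of the others.
        h = self.body(x.detach())
        loss = F.cross_entropy(self.head(h), y)
        self.opt.zero_grad()
        loss.backward()
        self.opt.step()
        return h.detach(), loss.item()

blocks = [LocalBlock(784, 256, 10)]

def train_batch(x, y):
    """Run one local update per layer, front to back."""
    h, losses = x, []
    for blk in blocks:
        h, loss = blk.local_step(h, y)
        losses.append(loss)
    return losses

# Depth can grow while training continues (depth-progressive schedule):
blocks.append(LocalBlock(256, 256, 10))
```

Because each block owns its loss and optimizer, appending a new block mid-training leaves earlier layers' optimization untouched, which is one plausible way to realize the dynamic-depth property the abstract claims.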