

Poster in Workshop: 2nd Workshop on Advancing Neural Network Training: Computational Efficiency, Scalability, and Resource Optimization (WANT@ICML 2024)

Liouna: Biologically Plausible Learning for Efficient Pre-Training of Transferrable Deep Models

Fady Rezk · Antreas Antoniou · Henry Gouk · Timothy Hospedales


Abstract:

Biologically plausible learning algorithms, inspired by the inherent constraints of biological neural systems, offer a promising path towards communication- and memory-efficient learning with extreme parallelizability, since learning is decoupled across layers so that they can be trained in parallel. In this work, we introduce Liouna (Arabic for "plasticity"), an unsupervised, biologically plausible local learning algorithm inspired by predictive coding and masked image modelling. We derive Liouna's update rule, which elegantly reduces to a simple Hebbian rule with subtractive inhibition. We establish new state-of-the-art results for local learning rules across CIFAR-10, CIFAR-100, STL-10, and Imagenette, without imposing training procedures that undermine the true benefits of local learning. Remarkably, we discover and demonstrate an emergent behaviour in Liouna: it learns inter-class similarity and separability through feature sharing and specialization, despite observing no labels during training. Notably, we are the first to study the transfer performance of local learning algorithms. By pre-training on unlabelled data, Liouna outperforms previous state-of-the-art methods on 6 out of 8 downstream tasks and even surpasses end-to-end (E2E) supervised training in the low-compute regime. Liouna also performs competitively with SimCLR pre-trained models in the resource-limited pre-training scenario. This highlights Liouna's potential for efficient transfer learning and for accelerating the initial stages of pre-training, improving convergence rates in wall-clock time.
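
The abstract states only that Liouna's update reduces to a Hebbian rule with subtractive inhibition; the exact rule is derived in the paper and is not given on this page. Purely as an illustrative sketch of that family of rules, a generic single-layer update of this kind might look like the following. The layer shapes, the learning rate eta, and the use of a top-down reconstruction (in the spirit of predictive coding) as the inhibitory signal are all hypothetical choices, not Liouna's actual design.

import numpy as np

def local_layer_update(W, x, eta=1e-3):
    """One local, label-free update for a single layer.

    W   : (hidden, input) weights of one layer, trained independently
    x   : (input,) pre-synaptic activity, e.g. a (masked) image patch
    eta : learning rate

    Illustrative sketch only; Liouna's exact update rule is derived
    in the paper and is not reproduced here.
    """
    y = W @ x                      # post-synaptic activity
    x_hat = W.T @ y                # top-down prediction of the input
    # Hebbian outer product of post- and pre-synaptic activity, with
    # the predicted input subtracted: a Hebbian term minus a
    # subtractive inhibition term.
    dW = np.outer(y, x - x_hat)
    return W + eta * dW

Because an update of this form depends only on a layer's own input and output, no backward pass from a global loss is needed, which is what enables the parallel, communication-efficient training of decoupled layers described above.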
