

Invited Talk in Workshop: Localized Learning: Decentralized Model Updates via Non-Global Objectives

Emergent learning that outperforms global objectives

Timoleon (Timos) Moraitis


Abstract:

Learning algorithms are often top-down and prescriptive, directly descending the gradient of a prescribed loss function. This includes backpropagation, its more localized approximations such as Equilibrium Propagation and Predictive Coding, as well as local self-supervised objectives, as in the Forward-Forward algorithm. Other algorithms could instead be characterized as emergent or descriptive, where network-wide function is learned from the bottom up, from mere descriptions of processes in synapses (i.e. connections) and neuronal units. This latter type of learning, which results e.g. from so-called Hebbian plastic synapses, spike timing-dependent plasticity (STDP), and short-term plasticity, fully satisfies the constraints of biological and neuromorphic circuitry, because its entire premise consists of textbook neuroscience mechanisms local to each synapse. However, such emergent learning rules have struggled to be useful in tasks that are difficult by modern machine-learning standards. In contrast, our recent work shows that learning that emerges from plasticity is applicable to previously unattainable problem settings and can even outperform global loss-driven networks under certain conditions. Specifically, the talk will focus on short-term STDP, short-term plasticity neurons (STPN), and SoftHebb, our version of Hebbian learning in circuits with soft competition, and on their advantages in sequence modelling, adversarial robustness, learning speed, and unsupervised deep learning. The picture will be completed with a mention of our related work on neuromorphic nanodevices that emulate the biophysics of plastic synapses through the physics of analog electronics and photonics.
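To make the contrast with loss-driven training concrete, the sketch below shows a generic Hebbian weight update with soft competition across a layer, loosely in the spirit of the SoftHebb idea named in the abstract. It is an illustrative assumption, not the speaker's actual algorithm: the exact update rule, normalization, and hyperparameters in the published work may differ, and all names and values here are invented for illustration.

```python
# Minimal sketch: a local Hebbian update with soft competition (softmax over
# the layer's pre-activations). There is no global loss and no backpropagated
# error; each weight changes based only on its own input and its neuron's
# soft-competitive activation. Hyperparameters and the decay form are assumptions.
import numpy as np

rng = np.random.default_rng(0)

n_inputs, n_neurons = 64, 16
W = rng.normal(scale=0.1, size=(n_neurons, n_inputs))  # synaptic weights
lr = 0.01                                               # learning rate

def softmax(z):
    z = z - z.max()
    e = np.exp(z)
    return e / e.sum()

def local_hebbian_step(W, x, lr):
    """One purely local, unsupervised update for a single input sample."""
    u = W @ x                  # pre-activations of all neurons in the layer
    y = softmax(u)             # soft competition: activations sum to 1
    # Hebbian-style update with a decay term that keeps weights bounded and
    # moves each neuron's weight vector toward inputs it responds to.
    dW = lr * y[:, None] * (x[None, :] - W)
    return W + dW

# Unsupervised exposure to random "inputs" (stand-ins for data samples)
for _ in range(1000):
    x = rng.normal(size=n_inputs)
    W = local_hebbian_step(W, x, lr)
```

The decay term `(x - W)` is one common way to bound Hebbian growth; other stabilizations (e.g. weight normalization or anti-Hebbian terms) would serve the same purpose in this kind of emergent, bottom-up learning rule.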
