

Poster in Workshop: Localized Learning: Decentralized Model Updates via Non-Global Objectives

Understanding Predictive Coding as a Second-Order Trust-Region Method

Francesco Innocenti · Ryan Singh · Christopher Buckley

Keywords: [ Second-Order ] [ Saddles ] [ Local Learning ] [ Trust Region ] [ Inference Learning ] [ Fisher Information ] [ Predictive Coding ] [ Backpropagation ]


Abstract:

Predictive coding (PC) is a brain-inspired local learning algorithm that has recently been suggested to provide advantages over backpropagation (BP) in biologically relevant scenarios. While theoretical work has mainly focused on the conditions under which PC can approximate or equal BP, how PC in its "natural regime" differs from BP is less understood. Here we develop a theory of PC as an adaptive trust-region (TR) method that uses second-order information. We show that the weight update of PC can be interpreted as shifting BP's loss gradient towards a TR direction found by the PC inference dynamics. Our theory suggests that PC should escape saddle points faster than BP, a prediction which we prove in a shallow linear model and support with experiments on deep networks. This work lays a theoretical foundation for understanding other suggested benefits of PC.
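To make the abstract's core claim concrete, the sketch below (not the authors' code) compares PC and BP weight gradients in a shallow linear network, the setting in which the paper proves its saddle-escape result. The quadratic energy, inference step size, iteration count, and layer sizes are illustrative assumptions; at feedforward initialization the PC gradient coincides with BP's, and after the inference dynamics settle it is shifted toward the inferred (trust-region-like) direction.

import numpy as np

rng = np.random.default_rng(0)
d_in, d_hid, d_out = 4, 3, 2
W1 = rng.normal(size=(d_hid, d_in)) * 0.1
W2 = rng.normal(size=(d_out, d_hid)) * 0.1
x = rng.normal(size=(d_in, 1))   # clamped input
y = rng.normal(size=(d_out, 1))  # clamped target

# --- Backpropagation: gradients of L = 0.5 * ||y - W2 W1 x||^2 ---
z1_ff = W1 @ x
e_out = y - W2 @ z1_ff
grad_W2_bp = -e_out @ z1_ff.T
grad_W1_bp = -(W2.T @ e_out) @ x.T

# --- Predictive coding: assumed quadratic energy
#     E = 0.5 * ||z1 - W1 x||^2 + 0.5 * ||y - W2 z1||^2
#     Inference: gradient descent on the latent activity z1,
#     starting from the feedforward pass. ---
z1 = W1 @ x
for _ in range(200):
    e1 = z1 - W1 @ x           # prediction error at the hidden layer
    e2 = y - W2 @ z1           # prediction error at the output
    z1 += 0.1 * (-e1 + W2.T @ e2)  # step along -dE/dz1

# Weight gradients of E at the inference equilibrium
e1 = z1 - W1 @ x
e2 = y - W2 @ z1
grad_W1_pc = -e1 @ x.T
grad_W2_pc = -e2 @ z1.T

def cos(a, b):
    return float((a * b).sum() / (np.linalg.norm(a) * np.linalg.norm(b)))

# PC's gradient is aligned with, but not identical to, BP's:
# the inference dynamics shift it toward the equilibrated direction.
print("cos(W1 grads):", cos(grad_W1_pc, grad_W1_bp))
print("cos(W2 grads):", cos(grad_W2_pc, grad_W2_bp))

Running inference for zero steps recovers BP's gradients exactly, which is one way to see the "approximate or equal BP" regime discussed in prior work; the paper's "natural regime" corresponds to letting the inference dynamics converge before updating the weights.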
