

Poster in Workshop: Hardware-aware efficient training (HAET)

Efficient Training of Deep Equilibrium Models

Bac Nguyen · Lukas Mauch


Abstract:

Deep equilibrium models (DEQs) have proven to be very powerful for learning data representations. The idea is to replace traditional (explicit) feedforward neural networks with an implicit fixed-point equation, which decouples the forward and backward passes. In particular, the implicit function theorem makes training DEQ layers very memory-efficient. However, backpropagation through DEQ layers still requires solving an expensive Jacobian-based linear equation. In this paper, we introduce a simple but effective strategy to avoid this computational burden. Our method re-uses the Jacobian approximation that Broyden's method builds during the forward pass to compute the gradients during the backward pass. Experiments show that simply re-using this approximation can significantly speed up training without causing any performance degradation.
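The following NumPy sketch illustrates the idea under simplifying assumptions; it is not the authors' code, and all names (broyden_solve, B, u_reused) and the toy layer f(z, x) = tanh(Wz + x) are illustrative choices. Broyden's method solves the fixed point z* = f(z*, x) while maintaining an approximation B of the inverse Jacobian of the residual g(z) = f(z, x) − z, i.e. B ≈ (J_f − I)⁻¹. The exact backward pass needs u = (I − J_f)⁻ᵀ ∂L/∂z*, which normally requires a second iterative linear solve; the strategy described in the abstract amounts to taking u ≈ −Bᵀ ∂L/∂z* instead.

```python
import numpy as np

def broyden_solve(f, z0, tol=1e-8, max_iter=50):
    """Solve the fixed point z = f(z) with Broyden's (good) method.

    Returns the fixed point and the final approximate inverse Jacobian
    B ~ (J_f - I)^{-1} of the residual g(z) = f(z) - z.
    """
    z = z0.copy()
    B = -np.eye(z.size)          # init: assumes J_f ~ 0, so J_g ~ -I
    g = f(z) - z
    for _ in range(max_iter):
        if np.linalg.norm(g) < tol:
            break
        dz = -B @ g              # quasi-Newton step
        z_new = z + dz
        g_new = f(z_new) - z_new
        dg = g_new - g
        Bdg = B @ dg
        # Sherman-Morrison rank-1 update keeping B ~ J_g^{-1}
        B += np.outer(dz - Bdg, dz @ B) / (dz @ Bdg)
        z, g = z_new, g_new
    return z, B

# Toy DEQ layer f(z, x) = tanh(W z + x); small weights keep f a contraction.
rng = np.random.default_rng(0)
n = 4
W = 0.3 * rng.standard_normal((n, n))
x = rng.standard_normal(n)
f = lambda z: np.tanh(W @ z + x)

z_star, B = broyden_solve(f, np.zeros(n))

# Backward pass for a loss L(z*): exact gradients need
#   u = (I - J_f)^{-T} dL/dz*,   then   dL/d(params) = J_param^T u.
# Re-using Broyden's approximation B ~ (J_f - I)^{-1} gives
#   u ~ -B^T dL/dz*,
# which replaces the second (expensive) iterative linear solve.
dL_dz = np.ones(n)                                   # dummy upstream gradient
u_reused = -B.T @ dL_dz

# Sanity check against the exact solve (cheap only at this toy size).
J_f = (1.0 - np.tanh(W @ z_star + x) ** 2)[:, None] * W
u_exact = np.linalg.solve((np.eye(n) - J_f).T, dL_dz)
print("approximation error:", np.linalg.norm(u_reused - u_exact))
```

In a full-scale DEQ implementation, B would be kept in the low-rank form that Broyden's updates naturally produce rather than as a dense matrix, and the final vector-Jacobian product with respect to the parameters would be computed by the framework's autograd; the sketch above only shows the core substitution of the reused approximation for the backward linear solve.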
