
Dynamic Game Theoretic Neural Optimizer
Guan-Horng Liu · Tianrong Chen · Evangelos Theodorou

Wed Jul 21 06:00 AM -- 06:20 AM (PDT)

The connection between training deep neural networks (DNNs) and optimal control theory (OCT) has attracted considerable attention as a principled tool for algorithmic design. Despite a few attempts, prior work has been limited to architectures whose layer propagation resembles a Markovian dynamical system. This casts doubt on their applicability to modern networks that rely heavily on non-Markovian dependencies between layers (e.g., skip connections in residual networks). In this work, we propose a novel dynamic game perspective that views each layer as a player in a dynamic game characterized by the DNN itself. Through this lens, different classes of optimizers can be seen as matching different types of Nash equilibria, depending on the implicit information structure of each (p)layer. The resulting method, called the Dynamic Game Theoretic Neural Optimizer (DGNOpt), not only generalizes OCT-inspired optimizers to a richer class of networks; it also motivates a new training principle based on solving a multi-player cooperative game. DGNOpt shows convergence improvements over existing methods on image classification datasets with residual and inception networks. Our work marries strengths from both OCT and game theory, paving the way to new algorithmic opportunities in robust optimal control and bandit-based optimization.
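To make the "each layer is a player" viewpoint concrete, here is a minimal NumPy sketch (not the authors' DGNOpt algorithm) in which the two layers of a toy linear network are treated as two players sharing one training loss. Each player takes a best-response gradient step on its own weights while the other player's weights are held fixed, which is the simplest instance of the game-theoretic update scheme the abstract alludes to. The network shape, data, and learning rate are all illustrative assumptions.

```python
import numpy as np

# Toy illustration (NOT the authors' DGNOpt): each layer of a 2-layer
# linear network is a "player" that updates its own weights while the
# other layer's weights are held fixed -- an alternating best-response
# step toward an equilibrium of the shared training loss.

rng = np.random.default_rng(0)
X = rng.normal(size=(64, 4))           # toy inputs (assumed data)
W_true = rng.normal(size=(4, 1))
y = X @ W_true                         # toy regression targets

W1 = rng.normal(size=(4, 3)) * 0.5     # player 1 (layer 1)
W2 = rng.normal(size=(3, 1)) * 0.5     # player 2 (layer 2)

def loss(W1, W2):
    """Shared objective both players minimize."""
    return float(np.mean((X @ W1 @ W2 - y) ** 2))

lr = 0.05
for step in range(200):
    # Player 1 moves: gradient of the shared loss w.r.t. W1 only.
    r = X @ W1 @ W2 - y                # residual, shape (64, 1)
    g1 = 2 * X.T @ r @ W2.T / len(X)   # dL/dW1, shape (4, 3)
    W1 -= lr * g1
    # Player 2 responds to the freshly updated W1.
    r = X @ W1 @ W2 - y
    g2 = 2 * (X @ W1).T @ r / len(X)   # dL/dW2, shape (3, 1)
    W2 -= lr * g2

print(loss(W1, W2))  # shrinks as the players approach an equilibrium
```

Because both players share one loss, this is a fully cooperative game, and the alternating updates coincide with block-coordinate descent; the paper's contribution lies in richer information structures (who sees whose updates) for non-Markovian architectures, which this sketch does not attempt to capture.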

Author Information

Guan-Horng Liu (Georgia Institute of Technology)
Tianrong Chen (Georgia Institute of Technology)
Evangelos Theodorou (Georgia Institute of Technology)
