

Spotlight in Workshop: Continuous Time Perspectives in Machine Learning

Principle of Least Action Approach to Accelerate Neural Ordinary Differential Equations

Srinivas Anumasa · Srijith Prabhakaran Nair Kusumam


Abstract:

Neural ordinary differential equations (NODEs) generalize discrete ResNet models by transforming hidden representations continuously. A NODE treats the computation of hidden states as computing the trajectory of an ordinary differential equation (ODE) parameterized by a neural network, which is expensive in terms of the number of function evaluations. In this work, we propose a regularization technique, built on the principle of least action (PLA), that decreases the number of function evaluations. In dynamics, the path an object takes from one point to another is the one for which the action, defined as the integral of the Lagrangian along the path, is minimized. In our proposed approach, the trajectory computed by the NODE is controlled by a regularizer analogous to minimizing the action. We show experimentally that our proposed regularizer indeed requires fewer function evaluations.
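The abstract does not specify the Lagrangian the authors use, so the following is only a minimal sketch of the general idea: integrate a neural ODE with a fixed-step Euler solver and accumulate an action term, here assuming a simple kinetic-energy Lagrangian L = ||dh/dt||², as a Riemann sum along the trajectory. The MLP dynamics, the step count, and the regularization weight `lam` are all illustrative choices, not the paper's.

```python
import numpy as np

def mlp_dynamics(h, W1, b1, W2, b2):
    # Small MLP f(h) parameterizing dh/dt, with a tanh hidden layer.
    return W2 @ np.tanh(W1 @ h + b1) + b2

def euler_trajectory_with_action(h0, params, t0=0.0, t1=1.0, n_steps=50):
    """Integrate dh/dt = f(h) with fixed-step Euler, accumulating the
    action: the integral of the (assumed) Lagrangian ||dh/dt||^2."""
    W1, b1, W2, b2 = params
    dt = (t1 - t0) / n_steps
    h = h0.copy()
    action = 0.0
    for _ in range(n_steps):
        dh = mlp_dynamics(h, W1, b1, W2, b2)
        action += dt * float(dh @ dh)  # Riemann-sum approximation of the action
        h = h + dt * dh                # Euler step along the trajectory
    return h, action

rng = np.random.default_rng(0)
d, hdim = 4, 8
params = (0.1 * rng.standard_normal((hdim, d)), np.zeros(hdim),
          0.1 * rng.standard_normal((d, hdim)), np.zeros(d))
h0 = rng.standard_normal(d)
h1, action = euler_trajectory_with_action(h0, params)
# A training loss would then be task_loss(h1) + lam * action, where lam is a
# (hypothetical) regularization weight. Penalizing the action encourages
# short, low-curvature trajectories, which adaptive ODE solvers can resolve
# with fewer function evaluations.
```

In practice one would differentiate through the solver (or use the adjoint method) so that the action penalty shapes the learned dynamics during training.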
