

Lagrangian Method for Q-Function Learning (with Applications to Machine Translation)

Huang Bojun

Hall E #922

Keywords: [ RL: Everything Else ] [ T: Everything Else ] [ RL: Deep RL ] [ T: Learning Theory ] [ RL: Total Cost/Reward ] [ T: Reinforcement Learning and Planning ] [ APP: Language, Speech and Dialog ] [ RL: Batch/Offline ] [ Reinforcement Learning ]


This paper presents a new approach to the fundamental problem of learning optimal Q-functions. In this approach, optimal Q-functions are formulated as saddle points of a nonlinear Lagrangian function derived from the classic Bellman optimality equation. The paper shows that the Lagrangian enjoys strong duality in spite of its nonlinearity, which paves the way to a general Lagrangian method for Q-function learning. As a demonstration, the paper develops an imitation learning algorithm based on the duality theory and applies it to a state-of-the-art machine translation benchmark. The paper then demonstrates a symmetry-breaking phenomenon regarding the optimality of the Lagrangian saddle points, which justifies a largely overlooked direction in developing the Lagrangian method.
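For background, the formulation starts from the classic Bellman optimality equation; the specific nonlinear Lagrangian is defined in the paper itself, but the generic shape of a saddle-point reformulation is sketched below, with the Lagrangian \(\mathcal{L}\) and multiplier variable \(\lambda\) used here only as illustrative placeholders, not the paper's actual construction:

```latex
% Bellman optimality equation for the optimal Q-function Q*:
%   the fixed point that the Lagrangian is derived from.
Q^{*}(s,a) \;=\; r(s,a) \;+\; \gamma \,
  \mathbb{E}_{s' \sim P(\cdot \mid s,a)}
  \Big[ \max_{a'} Q^{*}(s',a') \Big]

% Generic saddle-point reformulation (illustrative shape only):
% Q* arises as a saddle point of some Lagrangian L(Q, \lambda),
%   (Q^{*}, \lambda^{*}) \in \arg\min_{Q} \max_{\lambda} \mathcal{L}(Q, \lambda),
% and strong duality means the min-max and max-min values coincide:
%   \min_{Q} \max_{\lambda} \mathcal{L}(Q, \lambda)
%     \;=\; \max_{\lambda} \min_{Q} \mathcal{L}(Q, \lambda).
```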
