Oral
Learning to Generalize from Sparse and Underspecified Rewards
Rishabh Agarwal · Chen Liang · Dale Schuurmans · Mohammad Norouzi

Thu Jun 13th 12:10 -- 12:15 PM @ Hall B

We consider the problem of learning from sparse and underspecified rewards. This task structure arises in interpretation problems where an agent receives a complex input, such as a natural language command, and needs to generate a complex response, such as an action sequence, but only receives binary success-failure feedback. Rewards of this kind are usually underspecified because they do not distinguish between purposeful and accidental success. To learn in these scenarios, effective exploration is critical for finding successful trajectories, but generalization also depends on discounting spurious trajectories that achieve accidental success. We address exploration by using the mode-covering direction of KL divergence to collect a diverse set of successful trajectories, followed by the mode-seeking direction of KL divergence to train a robust policy. We address reward underspecification by using meta-learning and Bayesian optimization to construct an auxiliary reward function, which provides more accurate feedback for learning. The parameters of the auxiliary reward function are optimized with respect to the validation performance of the trained policy. Without using expert demonstrations or ground-truth programs, our Meta Reward-Learning (MeRL) approach achieves state-of-the-art results on weakly-supervised semantic parsing, improving upon prior work by 1.3% and 2.6% on WikiTableQuestions and WikiSQL, respectively.
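The bilevel optimization the abstract describes — train a policy against a learned auxiliary reward, then tune the auxiliary reward's parameters against the trained policy's validation performance — can be sketched on a toy problem. Everything below is an illustrative assumption, not the authors' implementation: the toy task, the linear auxiliary reward, the log-linear policy, and the finite-difference outer update (standing in for the paper's meta-gradient / Bayesian optimization).

```python
# Hedged sketch of a MeRL-style bilevel loop on a toy task (all names and
# the task construction are hypothetical, not the paper's code).
import numpy as np

rng = np.random.default_rng(0)

# Toy setup: for each of 20 inputs there are 4 candidate trajectories, each
# described by 2 features. Feature 0 correlates with *purposeful* success,
# feature 1 with *accidental* success; the sparse environment reward below
# cannot tell the two apart (it is underspecified).
train_feats = rng.normal(size=(20, 4, 2))
train_success = (train_feats.sum(-1) > 0).astype(float)   # binary, underspecified
val_feats = rng.normal(size=(20, 4, 2))
val_correct = (val_feats[..., 0] > 0).astype(float)       # true intent: feature 0 only

def aux_reward(feats, phi):
    # Auxiliary reward: a learned reweighting of trajectory features.
    return feats @ phi

def train_policy(phi, feats, success, steps=200, lr=0.5):
    # Inner loop: REINFORCE on a log-linear (softmax) policy over candidate
    # trajectories, rewarding only successful trajectories, reweighted by phi.
    theta = np.zeros(2)
    for _ in range(steps):
        logits = feats @ theta
        probs = np.exp(logits - logits.max(-1, keepdims=True))
        probs /= probs.sum(-1, keepdims=True)
        r = success * aux_reward(feats, phi)
        baseline = (probs * r).sum(-1, keepdims=True)
        mean_feats = (probs[..., None] * feats).sum(-2, keepdims=True)
        grad = (probs[..., None] * (r - baseline)[..., None]
                * (feats - mean_feats)).sum((0, 1))
        theta += lr * grad
    return theta

def val_accuracy(theta):
    # Validation uses the *true* notion of success, so it penalizes policies
    # that latched onto accidentally successful trajectories.
    picks = (val_feats @ theta).argmax(-1)
    return val_correct[np.arange(len(val_feats)), picks].mean()

# Outer loop: adjust the auxiliary-reward parameters phi to maximize the
# validation performance of the policy trained with them.
phi, eps, outer_lr = np.array([1.0, 1.0]), 0.3, 0.5
for _ in range(10):
    grads = np.zeros(2)
    for i in range(2):
        for sign in (+1.0, -1.0):
            p = phi.copy()
            p[i] += sign * eps
            grads[i] += sign * val_accuracy(train_policy(p, train_feats, train_success))
    phi += outer_lr * grads / (2 * eps)

theta = train_policy(phi, train_feats, train_success)
```

The key structural point the sketch preserves is that the outer objective (validation accuracy under the true intent) is different from the inner objective (auxiliary reward on successful trajectories), which is what lets the learned reward discount accidental successes.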

Author Information

Rishabh Agarwal (Google Research, Brain Team)
Chen Liang (Google Brain)
Dale Schuurmans (Google / University of Alberta)
Mohammad Norouzi (Google Brain)
