We consider the problem of learning from sparse and underspecified rewards, a task structure that arises in interpretation problems where an agent receives a complex input, such as a natural language command, and must generate a complex response, such as an action sequence, while receiving only binary success-failure feedback. Rewards of this kind are usually underspecified because they do not distinguish between purposeful and accidental success. To learn in these scenarios, effective exploration is critical for finding successful trajectories, but generalization also depends on discounting spurious trajectories that succeed accidentally. We address exploration by using the mode-covering direction of the KL divergence to collect a diverse set of successful trajectories, followed by the mode-seeking direction of the KL divergence to train a robust policy. We address reward underspecification by using meta-learning and Bayesian optimization to construct an auxiliary reward function, which provides more accurate feedback for learning. The parameters of the auxiliary reward function are optimized with respect to the validation performance of the trained policy. Without using expert demonstrations or ground-truth programs, our Meta Reward-Learning (MeRL) approach achieves state-of-the-art results on weakly-supervised semantic parsing, improving upon prior work by 1.3% and 2.6% on WikiTableQuestions and WikiSQL, respectively.
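The core bilevel idea — an auxiliary reward whose parameters are scored by the validation performance of the policy trained under it — can be caricatured in a few lines. Everything below (the trajectory features, the softmax "policy", the random-search outer loop) is an illustrative stand-in, not the authors' implementation, which meta-learns the auxiliary reward with gradients and Bayesian optimization over neural policies:

```python
import numpy as np

rng = np.random.default_rng(0)

def auxiliary_reward(phi, features):
    # Auxiliary reward: a learned weighting over trajectory features.
    return features @ phi

def train_policy(phi, train_feats):
    # Stand-in for inner-loop policy training: softmax weighting of
    # trajectories by their auxiliary reward.
    scores = auxiliary_reward(phi, train_feats)
    weights = np.exp(scores - scores.max())
    return weights / weights.sum()

def validation_accuracy(policy_weights, is_purposeful):
    # Fraction of policy probability mass on genuinely purposeful
    # (non-spurious) successful trajectories.
    return float(policy_weights @ is_purposeful)

# Hypothetical data: two feature dims; purposeful successes score high
# on feature 0, spurious (accidental) successes on feature 1.
train_feats = np.array([[1.0, 0.1], [0.9, 0.2], [0.1, 1.0]])
is_purposeful = np.array([1.0, 1.0, 0.0])

# Outer loop: search over auxiliary-reward parameters phi, scored by
# the validation performance of the policy each phi induces.
best_phi, best_acc = None, -1.0
for _ in range(200):
    phi = rng.normal(size=2)
    acc = validation_accuracy(train_policy(phi, train_feats), is_purposeful)
    if acc > best_acc:
        best_phi, best_acc = phi, acc
```

Random search stands in for the outer-loop optimizer only to keep the sketch short; the point is that the auxiliary reward is judged not by training success but by how well the resulting policy generalizes to held-out data, which is what lets it down-weight accidental successes.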
Author Information
Rishabh Agarwal (Google Research, Brain Team)
Chen Liang (Google Brain)
Dale Schuurmans (Google / University of Alberta)
Mohammad Norouzi (Google Brain)
Related Events (a corresponding poster, oral, or spotlight)
- 2019 Poster: Learning to Generalize from Sparse and Underspecified Rewards »
  Fri. Jun 14th 01:30 -- 04:00 AM Room Pacific Ballroom #49
More from the Same Authors
- 2021 : Value-Based Deep Reinforcement Learning Requires Explicit Regularization »
  Aviral Kumar · Rishabh Agarwal · Aaron Courville · Tengyu Ma · George Tucker · Sergey Levine
- 2021 : Control-Oriented Model-Based Reinforcement Learning with Implicit Differentiation »
  Evgenii Nikishin · Romina Abachi · Rishabh Agarwal · Pierre-Luc Bacon
- 2023 : Suboptimal Data Can Bottleneck Scaling »
  Jacob Buckman · Kshitij Gupta · Ethan Caballero · Rishabh Agarwal · Marc Bellemare
- 2022 Poster: Making Linear MDPs Practical via Contrastive Representation Learning »
  Tianjun Zhang · Tongzheng Ren · Mengjiao Yang · Joseph E Gonzalez · Dale Schuurmans · Bo Dai
- 2022 Poster: A Parametric Class of Approximate Gradient Updates for Policy Optimization »
  Ramki Gummadi · Saurabh Kumar · Junfeng Wen · Dale Schuurmans
- 2022 Spotlight: A Parametric Class of Approximate Gradient Updates for Policy Optimization »
  Ramki Gummadi · Saurabh Kumar · Junfeng Wen · Dale Schuurmans
- 2022 Spotlight: Making Linear MDPs Practical via Contrastive Representation Learning »
  Tianjun Zhang · Tongzheng Ren · Mengjiao Yang · Joseph E Gonzalez · Dale Schuurmans · Bo Dai
- 2022 Poster: Marginal Distribution Adaptation for Discrete Sets via Module-Oriented Divergence Minimization »
  Hanjun Dai · Mengjiao Yang · Yuan Xue · Dale Schuurmans · Bo Dai
- 2022 Spotlight: Marginal Distribution Adaptation for Discrete Sets via Module-Oriented Divergence Minimization »
  Hanjun Dai · Mengjiao Yang · Yuan Xue · Dale Schuurmans · Bo Dai
- 2021 Social: RL Social »
  Dibya Ghosh · Hager Radi · Derek Li · Alex Ayoub · Erfan Miahi · Rishabh Agarwal · Charline Le Lan · Abhishek Naik · John D. Martin · Shruti Mishra · Adrien Ali Taiga
- 2021 Poster: LEGO: Latent Execution-Guided Reasoning for Multi-Hop Question Answering on Knowledge Graphs »
  Hongyu Ren · Hanjun Dai · Bo Dai · Xinyun Chen · Michihiro Yasunaga · Haitian Sun · Dale Schuurmans · Jure Leskovec · Denny Zhou
- 2021 Spotlight: LEGO: Latent Execution-Guided Reasoning for Multi-Hop Question Answering on Knowledge Graphs »
  Hongyu Ren · Hanjun Dai · Bo Dai · Xinyun Chen · Michihiro Yasunaga · Haitian Sun · Dale Schuurmans · Jure Leskovec · Denny Zhou
- 2021 Poster: EMaQ: Expected-Max Q-Learning Operator for Simple Yet Effective Offline and Online RL »
  Seyed Kamyar Seyed Ghasemipour · Dale Schuurmans · Shixiang Gu
- 2021 Poster: On the Optimality of Batch Policy Optimization Algorithms »
  Chenjun Xiao · Yifan Wu · Jincheng Mei · Bo Dai · Tor Lattimore · Lihong Li · Csaba Szepesvari · Dale Schuurmans
- 2021 Spotlight: EMaQ: Expected-Max Q-Learning Operator for Simple Yet Effective Offline and Online RL »
  Seyed Kamyar Seyed Ghasemipour · Dale Schuurmans · Shixiang Gu
- 2021 Spotlight: On the Optimality of Batch Policy Optimization Algorithms »
  Chenjun Xiao · Yifan Wu · Jincheng Mei · Bo Dai · Tor Lattimore · Lihong Li · Csaba Szepesvari · Dale Schuurmans
- 2020 Poster: Energy-Based Processes for Exchangeable Data »
  Mengjiao Yang · Bo Dai · Hanjun Dai · Dale Schuurmans
- 2020 Poster: ConQUR: Mitigating Delusional Bias in Deep Q-Learning »
  DiJia Su · Jayden Ooi · Tyler Lu · Dale Schuurmans · Craig Boutilier
- 2020 Poster: Go Wide, Then Narrow: Efficient Training of Deep Thin Networks »
  Denny Zhou · Mao Ye · Chen Chen · Tianjian Meng · Mingxing Tan · Xiaodan Song · Quoc Le · Qiang Liu · Dale Schuurmans
- 2020 Poster: Revisiting Fundamentals of Experience Replay »
  William Fedus · Prajit Ramachandran · Rishabh Agarwal · Yoshua Bengio · Hugo Larochelle · Mark Rowland · Will Dabney
- 2020 Poster: Imputer: Sequence Modelling via Imputation and Dynamic Programming »
  William Chan · Chitwan Saharia · Geoffrey Hinton · Mohammad Norouzi · Navdeep Jaitly
- 2020 Poster: An Optimistic Perspective on Offline Deep Reinforcement Learning »
  Rishabh Agarwal · Dale Schuurmans · Mohammad Norouzi
- 2020 Poster: Scalable Deep Generative Modeling for Sparse Graphs »
  Hanjun Dai · Azade Nova · Yujia Li · Bo Dai · Dale Schuurmans
- 2020 Poster: AutoML-Zero: Evolving Machine Learning Algorithms From Scratch »
  Esteban Real · Chen Liang · David So · Quoc Le
- 2020 Poster: A Simple Framework for Contrastive Learning of Visual Representations »
  Ting Chen · Simon Kornblith · Mohammad Norouzi · Geoffrey Hinton
- 2019 Poster: Similarity of Neural Network Representations Revisited »
  Simon Kornblith · Mohammad Norouzi · Honglak Lee · Geoffrey Hinton
- 2019 Oral: Similarity of Neural Network Representations Revisited »
  Simon Kornblith · Mohammad Norouzi · Honglak Lee · Geoffrey Hinton
- 2019 Poster: Understanding the Impact of Entropy on Policy Optimization »
  Zafarali Ahmed · Nicolas Le Roux · Mohammad Norouzi · Dale Schuurmans
- 2019 Oral: Understanding the Impact of Entropy on Policy Optimization »
  Zafarali Ahmed · Nicolas Le Roux · Mohammad Norouzi · Dale Schuurmans
- 2019 Poster: The Value Function Polytope in Reinforcement Learning »
  Robert Dadashi · Marc Bellemare · Adrien Ali Taiga · Nicolas Le Roux · Dale Schuurmans
- 2019 Poster: The Evolved Transformer »
  David So · Quoc Le · Chen Liang
- 2019 Oral: The Value Function Polytope in Reinforcement Learning »
  Robert Dadashi · Marc Bellemare · Adrien Ali Taiga · Nicolas Le Roux · Dale Schuurmans
- 2019 Oral: The Evolved Transformer »
  David So · Quoc Le · Chen Liang
- 2018 Poster: Smoothed Action Value Functions for Learning Gaussian Policies »
  Ofir Nachum · Mohammad Norouzi · George Tucker · Dale Schuurmans
- 2018 Oral: Smoothed Action Value Functions for Learning Gaussian Policies »
  Ofir Nachum · Mohammad Norouzi · George Tucker · Dale Schuurmans
- 2017 Poster: Deep Value Networks Learn to Evaluate and Iteratively Refine Structured Outputs »
  Michael Gygli · Mohammad Norouzi · Anelia Angelova
- 2017 Poster: Device Placement Optimization with Reinforcement Learning »
  Azalia Mirhoseini · Hieu Pham · Quoc Le · Benoit Steiner · Mohammad Norouzi · Rasmus Larsen · Yuefeng Zhou · Naveen Kumar · Samy Bengio · Jeff Dean
- 2017 Talk: Deep Value Networks Learn to Evaluate and Iteratively Refine Structured Outputs »
  Michael Gygli · Mohammad Norouzi · Anelia Angelova
- 2017 Talk: Device Placement Optimization with Reinforcement Learning »
  Azalia Mirhoseini · Hieu Pham · Quoc Le · Benoit Steiner · Mohammad Norouzi · Rasmus Larsen · Yuefeng Zhou · Naveen Kumar · Samy Bengio · Jeff Dean
- 2017 Poster: Neural Audio Synthesis of Musical Notes with WaveNet Autoencoders »
  Cinjon Resnick · Adam Roberts · Jesse Engel · Douglas Eck · Sander Dieleman · Karen Simonyan · Mohammad Norouzi
- 2017 Talk: Neural Audio Synthesis of Musical Notes with WaveNet Autoencoders »
  Cinjon Resnick · Adam Roberts · Jesse Engel · Douglas Eck · Sander Dieleman · Karen Simonyan · Mohammad Norouzi