We present a reinforcement learning framework, called Programmatically Interpretable Reinforcement Learning (PIRL), that is designed to generate interpretable and verifiable agent policies. Unlike the popular Deep Reinforcement Learning (DRL) paradigm, which represents policies by neural networks, PIRL represents policies using a high-level, domain-specific programming language. Such programmatic policies have the benefits of being more easily interpreted than neural networks, and being amenable to verification by symbolic methods. We propose a new method, called Neurally Directed Program Search (NDPS), for solving the challenging nonsmooth optimization problem of finding a programmatic policy with maximal reward. NDPS works by first learning a neural policy network using DRL, and then performing a local search over programmatic policies that seeks to minimize a distance from this neural “oracle”. We evaluate NDPS on the task of learning to drive a simulated car in the TORCS car-racing environment. We demonstrate that NDPS is able to discover human-readable policies that pass some significant performance bars. We also show that PIRL policies can have smoother trajectories, and can be more easily transferred to environments not encountered during training, than corresponding policies discovered by DRL.
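The core loop described above — distill a trained neural "oracle" into a program by searching a DSL for the policy closest to the oracle's behavior — can be sketched as follows. This is a minimal illustration, not the paper's method: the oracle is a stand-in function, and the "DSL" is just affine policies `k*s + b` over a grid of coefficients; names like `ndps_search` are invented for the sketch.

```python
# Hedged sketch of Neurally Directed Program Search (NDPS); the toy DSL
# and all identifiers here are illustrative assumptions, not the paper's
# actual policy language or implementation.
import itertools

def neural_oracle(state):
    # Stand-in for the trained DRL policy network: respond to the state
    # with a fixed proportional correction.
    return -0.5 * state

def distance_to_oracle(program, states):
    # Distance between a candidate program and the oracle, summed over a
    # set of sampled input states. (In NDPS proper, input augmentation
    # would grow this set with states the candidate program itself visits.)
    k, b = program
    return sum(abs((k * s + b) - neural_oracle(s)) for s in states)

def ndps_search(states, coeffs):
    # Search the tiny DSL of affine policies `k*s + b`, keeping the
    # candidate whose outputs are closest to the neural oracle's.
    best, best_cost = None, float("inf")
    for k, b in itertools.product(coeffs, repeat=2):
        cost = distance_to_oracle((k, b), states)
        if cost < best_cost:
            best, best_cost = (k, b), cost
    return best

states = [-1.0, -0.5, 0.0, 0.5, 1.0]
coeffs = [x / 10 for x in range(-10, 11)]   # grid from -1.0 to 1.0
print(ndps_search(states, coeffs))          # -> (-0.5, 0.0), matching the oracle
```

The point of the sketch is the shape of the optimization: reward is never queried directly during the program search; the neural policy supplies a smooth imitation target, which makes the otherwise nonsmooth search over discrete programs tractable.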
Author Information
Abhinav Verma (Rice University)
Vijayaraghavan Murali (Rice University)
Rishabh Singh (Google Brain)
Pushmeet Kohli (DeepMind)
Swarat Chaudhuri (Rice University)
Related Events (a corresponding poster, oral, or spotlight)
- 2018 Poster: Programmatically Interpretable Reinforcement Learning »
  Wed. Jul 11th 04:15 -- 07:00 PM Room Hall B #33
More from the Same Authors
- 2023 Poster: Measuring the Impact of Programming Language Distribution »
  Gabriel Orlanski · Kefan Xiao · Xavier Garcia · Jeffrey Hui · Joshua Howland · Jonathan Malmaud · Jacob Austin · Rishabh Singh · Michele Catasta
- 2019 Poster: Control Regularization for Reduced Variance Reinforcement Learning »
  Richard Cheng · Abhinav Verma · Gabor Orosz · Swarat Chaudhuri · Yisong Yue · Joel Burdick
- 2019 Poster: CompILE: Compositional Imitation Learning and Execution »
  Thomas Kipf · Yujia Li · Hanjun Dai · Vinicius Zambaldi · Alvaro Sanchez-Gonzalez · Edward Grefenstette · Pushmeet Kohli · Peter Battaglia
- 2019 Poster: Structured agents for physical construction »
  Victor Bapst · Alvaro Sanchez-Gonzalez · Carl Doersch · Kimberly Stachenfeld · Pushmeet Kohli · Peter Battaglia · Jessica Hamrick
- 2019 Oral: CompILE: Compositional Imitation Learning and Execution »
  Thomas Kipf · Yujia Li · Hanjun Dai · Vinicius Zambaldi · Alvaro Sanchez-Gonzalez · Edward Grefenstette · Pushmeet Kohli · Peter Battaglia
- 2019 Oral: Control Regularization for Reduced Variance Reinforcement Learning »
  Richard Cheng · Abhinav Verma · Gabor Orosz · Swarat Chaudhuri · Yisong Yue · Joel Burdick
- 2019 Oral: Structured agents for physical construction »
  Victor Bapst · Alvaro Sanchez-Gonzalez · Carl Doersch · Kimberly Stachenfeld · Pushmeet Kohli · Peter Battaglia · Jessica Hamrick
- 2019 Poster: Graph Matching Networks for Learning the Similarity of Graph Structured Objects »
  Yujia Li · Chenjie Gu · Thomas Dullien · Oriol Vinyals · Pushmeet Kohli
- 2019 Oral: Graph Matching Networks for Learning the Similarity of Graph Structured Objects »
  Yujia Li · Chenjie Gu · Thomas Dullien · Oriol Vinyals · Pushmeet Kohli
- 2018 Poster: Adversarial Risk and the Dangers of Evaluating Against Weak Attacks »
  Jonathan Uesato · Brendan O'Donoghue · Pushmeet Kohli · Aäron van den Oord
- 2018 Oral: Adversarial Risk and the Dangers of Evaluating Against Weak Attacks »
  Jonathan Uesato · Brendan O'Donoghue · Pushmeet Kohli · Aäron van den Oord