Reinforcement learning (RL) for continuous control typically employs distributions whose support covers the entire action space. In this work, we investigate the colloquially known phenomenon that trained agents often prefer actions at the boundaries of that space. We draw theoretical connections to the emergence of bang-bang behavior in optimal control, and provide extensive empirical evaluation across a variety of recent RL algorithms. We replace the standard Gaussian policy with a Bernoulli distribution that considers only the extremes along each action dimension, i.e., a bang-bang controller. Surprisingly, this achieves state-of-the-art performance on several continuous control benchmarks, in contrast to robotic hardware, where energy and maintenance costs affect controller choices. To reduce the impact of exploration on our analysis, we provide additional imitation learning experiments. Finally, we show that our observations extend to environments that aim to model real-world challenges, and we evaluate factors that mitigate the emergence of bang-bang solutions. Our findings emphasize challenges for benchmarking continuous control algorithms, particularly in light of real-world applications.
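The core idea of the abstract, sampling each action dimension from a Bernoulli distribution over the two extremes rather than from a full-support Gaussian, can be sketched as follows. This is a minimal illustration, not the authors' implementation; the function name and NumPy-based interface are assumptions for clarity.

```python
import numpy as np

def bang_bang_sample(p, a_min, a_max, rng=None):
    """Sample a bang-bang action: an independent Bernoulli per dimension.

    p     : per-dimension probability of choosing the upper bound a_max
    a_min : lower action bound(s)
    a_max : upper action bound(s)
    """
    rng = rng or np.random.default_rng()
    # Bernoulli(p) trial per dimension: True -> a_max, False -> a_min
    pick_max = rng.random(np.shape(p)) < np.asarray(p)
    return np.where(pick_max, a_max, a_min)

# Example: 3-dimensional action space with bounds [-1, 1];
# every sampled component is exactly -1 or +1, never in between.
action = bang_bang_sample(p=np.array([0.2, 0.5, 0.9]), a_min=-1.0, a_max=1.0)
```

In a learned policy, `p` would be the output of a network head (e.g. a sigmoid per action dimension), so the only change relative to a Gaussian policy is the distribution placed over the bounded action interval.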
Author Information
Tim Seyde (MIT)
Igor Gilitschenski (Massachusetts Institute of Technology)
Wilko Schwarting (Massachusetts Institute of Technology)
Bartolomeo Stellato (Princeton University)
Martin Riedmiller (DeepMind)
Markus Wulfmeier (DeepMind)
Daniela Rus (MIT CSAIL)
More from the Same Authors
- 2021 : RL + Robotics Panel
  George Konidaris · Jan Peters · Martin Riedmiller · Angela Schoellig · Rose Yu · Rupam Mahmood
- 2021 : Invited Talk 2: Addressing Model Bias and Uncertainty via Evidential Deep Learning
  Daniela Rus
- 2021 Poster: The Logical Options Framework
  Brandon Araki · Xiao Li · Kiran Vodrahalli · Jonathan DeCastro · Micah Fry · Daniela Rus
- 2021 Poster: On-Off Center-Surround Receptive Fields for Accurate and Robust Image Classification
  Zahra Babaiee · Ramin Hasani · Mathias Lechner · Daniela Rus · Radu Grosu
- 2021 Poster: Data-efficient Hindsight Off-policy Option Learning
  Markus Wulfmeier · Dushyant Rao · Roland Hafner · Thomas Lampe · Abbas Abdolmaleki · Tim Hertweck · Michael Neunert · Dhruva Tirumala Bukkapatnam · Noah Siegel · Nicolas Heess · Martin Riedmiller
- 2021 Spotlight: On-Off Center-Surround Receptive Fields for Accurate and Robust Image Classification
  Zahra Babaiee · Ramin Hasani · Mathias Lechner · Daniela Rus · Radu Grosu
- 2021 Oral: The Logical Options Framework
  Brandon Araki · Xiao Li · Kiran Vodrahalli · Jonathan DeCastro · Micah Fry · Daniela Rus
- 2021 Spotlight: Data-efficient Hindsight Off-policy Option Learning
  Markus Wulfmeier · Dushyant Rao · Roland Hafner · Thomas Lampe · Abbas Abdolmaleki · Tim Hertweck · Michael Neunert · Dhruva Tirumala Bukkapatnam · Noah Siegel · Nicolas Heess · Martin Riedmiller
- 2020 Poster: A Natural Lottery Ticket Winner: Reinforcement Learning with Ordinary Neural Circuits
  Ramin Hasani · Mathias Lechner · Alexander Amini · Daniela Rus · Radu Grosu
- 2020 Poster: Prediction-Guided Multi-Objective Reinforcement Learning for Continuous Robot Control
  Jie Xu · Yunsheng Tian · Pingchuan Ma · Daniela Rus · Shinjiro Sueda · Wojciech Matusik
- 2020 Poster: A distributional view on multi-objective policy optimization
  Abbas Abdolmaleki · Sandy Huang · Leonard Hasenclever · Michael Neunert · Francis Song · Martina Zambelli · Murilo Martins · Nicolas Heess · Raia Hadsell · Martin Riedmiller
- 2018 Poster: Learning by Playing - Solving Sparse Reward Tasks from Scratch
  Martin Riedmiller · Roland Hafner · Thomas Lampe · Michael Neunert · Jonas Degrave · Tom Van de Wiele · Vlad Mnih · Nicolas Heess · Jost Springenberg
- 2018 Poster: Graph Networks as Learnable Physics Engines for Inference and Control
  Alvaro Sanchez-Gonzalez · Nicolas Heess · Jost Springenberg · Josh Merel · Martin Riedmiller · Raia Hadsell · Peter Battaglia
- 2018 Poster: TACO: Learning Task Decomposition via Temporal Alignment for Control
  Kyriacos Shiarlis · Markus Wulfmeier · Sasha Salter · Shimon Whiteson · Ingmar Posner
- 2018 Oral: Learning by Playing - Solving Sparse Reward Tasks from Scratch
  Martin Riedmiller · Roland Hafner · Thomas Lampe · Michael Neunert · Jonas Degrave · Tom Van de Wiele · Vlad Mnih · Nicolas Heess · Jost Springenberg
- 2018 Oral: Graph Networks as Learnable Physics Engines for Inference and Control
  Alvaro Sanchez-Gonzalez · Nicolas Heess · Jost Springenberg · Josh Merel · Martin Riedmiller · Raia Hadsell · Peter Battaglia
- 2018 Oral: TACO: Learning Task Decomposition via Temporal Alignment for Control
  Kyriacos Shiarlis · Markus Wulfmeier · Sasha Salter · Shimon Whiteson · Ingmar Posner
- 2017 Poster: Coresets for Vector Summarization with Applications to Network Graphs
  Dan Feldman · Sedat Ozer · Daniela Rus
- 2017 Talk: Coresets for Vector Summarization with Applications to Network Graphs
  Dan Feldman · Sedat Ozer · Daniela Rus