
Is Bang-Bang Control All You Need?
Tim Seyde · Igor Gilitschenski · Wilko Schwarting · Bartolomeo Stellato · Martin Riedmiller · Markus Wulfmeier · Daniela Rus

Reinforcement learning (RL) for continuous control typically employs distributions whose support covers the entire action space. In this work, we investigate the colloquially known phenomenon that trained agents often prefer actions at the boundaries of that space. We draw theoretical connections to the emergence of bang-bang behavior in optimal control, and provide extensive empirical evaluation across a variety of recent RL algorithms. We replace the normal Gaussian with a Bernoulli distribution that solely considers the extremes along each action dimension - a bang-bang controller. Surprisingly, this achieves state-of-the-art performance on several continuous control benchmarks - in contrast to robotic hardware, where energy and maintenance cost affect controller choices. To reduce the impact of exploration on our analysis, we provide additional imitation learning experiments. Finally, we show that our observations extend to environments that aim to model real-world challenges and evaluate factors to mitigate the emergence of bang-bang solutions. Our findings emphasise challenges for benchmarking continuous control algorithms, particularly in light of real-world applications.
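The core idea of the abstract - replacing a Gaussian policy with a Bernoulli distribution over the two extremes of each action dimension - can be sketched as follows. This is a minimal illustrative example, not the authors' implementation; the function name, the per-dimension probabilities, and the bounds are hypothetical.

```python
import numpy as np

def bang_bang_action(p, action_low, action_high, rng=None):
    """Sample a bang-bang action: an independent Bernoulli draw per action
    dimension selects either the lower or the upper bound of the action space.

    Hypothetical sketch of the idea described in the abstract, not the
    authors' code. `p` is the per-dimension probability of the upper bound.
    """
    rng = rng or np.random.default_rng()
    p = np.asarray(p, dtype=float)
    choose_high = rng.random(p.shape) < p  # Bernoulli(p) per dimension
    # Map each binary outcome to the corresponding action-space extreme.
    return np.where(choose_high, action_high, action_low)

# Example: a 3-dimensional action space with bounds [-1, 1].
a = bang_bang_action(p=[0.5, 0.9, 0.1], action_low=-1.0, action_high=1.0)
```

Every sampled action lies on a corner of the action hypercube, in contrast to a Gaussian policy whose support covers the full interior of the space.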

Author Information

Tim Seyde (MIT)
Igor Gilitschenski (Massachusetts Institute of Technology)
Wilko Schwarting (Massachusetts Institute of Technology)
Bartolomeo Stellato (Princeton University)
Martin Riedmiller (DeepMind)
Markus Wulfmeier (DeepMind)
Daniela Rus (MIT CSAIL)
