Reinforcement learning algorithms require many samples when solving complex hierarchical tasks with sparse and delayed rewards. For such complex tasks, the recently proposed RUDDER uses reward redistribution to leverage steps in the Q-function that are associated with accomplishing sub-tasks. However, often only a few episodes with high rewards are available as demonstrations, since current exploration strategies cannot discover them in reasonable time. In this work, we introduce Align-RUDDER, which utilizes a profile model for reward redistribution that is obtained from multiple sequence alignment of demonstrations. Consequently, Align-RUDDER employs reward redistribution effectively and thereby drastically improves learning from few demonstrations. Align-RUDDER outperforms competitors on complex artificial tasks with delayed rewards and few demonstrations. On the Minecraft ObtainDiamond task, Align-RUDDER is able to mine a diamond, though not frequently. Code is available at github.com/ml-jku/align-rudder.
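The core mechanism described above, redistributing a delayed episodic return according to how far an episode has progressed along a profile model built from aligned demonstrations, can be illustrated with a small sketch. This is not the authors' implementation (which builds the profile via a proper multiple sequence alignment; see the linked repository); the position-wise frequency profile, the event codes, and all function names below are illustrative assumptions.

# Minimal sketch of the reward-redistribution idea from the abstract.
# NOT the authors' implementation (see github.com/ml-jku/align-rudder):
# the real method builds a profile model via multiple sequence alignment;
# here a naive position-wise event-frequency profile stands in for it.

from collections import Counter

def build_profile(aligned_demos):
    # aligned_demos: equal-length event sequences (gaps as None), e.g. the
    # result of aligning demonstration trajectories mapped to event codes.
    profile = []
    for pos in range(len(aligned_demos[0])):
        counts = Counter(d[pos] for d in aligned_demos if d[pos] is not None)
        total = sum(counts.values())
        profile.append({e: c / total for e, c in counts.items()})
    return profile

def prefix_score(events, profile):
    # Toy alignment score: how well the episode prefix matches the profile.
    return sum(profile[i].get(e, 0.0)
               for i, e in enumerate(events[:len(profile)]))

def redistribute_return(episode_events, episodic_return, profile):
    # Give each step a share of the delayed return proportional to the
    # increase in alignment score it produces, so steps that complete
    # demonstrated sub-tasks are credited immediately instead of at the end.
    scores = [prefix_score(episode_events[:t + 1], profile)
              for t in range(len(episode_events))]
    diffs = [scores[0]] + [scores[t] - scores[t - 1]
                           for t in range(1, len(scores))]
    total = sum(diffs) or 1.0
    return [episodic_return * d / total for d in diffs]

# Hypothetical usage with toy sub-task events on a Minecraft-like chain:
demos = [["log", "plank", "stick", "pickaxe"],
         ["log", "plank", "stick", "pickaxe"]]
profile = build_profile(demos)
print(redistribute_return(["log", "plank", "stick", "pickaxe"], 1.0, profile))
# -> [0.25, 0.25, 0.25, 0.25]: the single delayed reward is spread over
#    the four sub-task steps.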
Author Information
Vihang Patil (LIT AI Lab, Institute for Machine Learning, Johannes Kepler University Linz, Austria)
Markus Hofmarcher (ELLIS Unit Linz, Johannes Kepler University Linz)
Marius-Constantin Dinu (LIT AI Lab / University Linz)
Matthias Dorfer (enliteAI)
Patrick Blies (EnliteAI GmbH)
Johannes Brandstetter (Microsoft Research)
Jose A. Arjona-Medina (Dynatrace Research)
Sepp Hochreiter (ELLIS Unit Linz, LIT AI Lab, Institute for Machine Learning, Johannes Kepler University, Institute for Advanced Research in Artificial Intelligence (IARAI))

Sepp Hochreiter heads the Institute for Machine Learning, the ELLIS Unit Linz, and the LIT AI Lab at JKU Linz, and is director of the private research institute IARAI. He is a pioneer of Deep Learning: he identified the vanishing and exploding gradient problem and invented the long short-term memory (LSTM).
Related Events (a corresponding poster, oral, or spotlight)
- 2022 Poster: Align-RUDDER: Learning From Few Demonstrations by Reward Redistribution
  Tue, Jul 19 through Wed, Jul 20, Hall E #826
More from the Same Authors
- 2023 Poster: Enhancing Activity Prediction Models in Drug Discovery with the Ability to Understand Human Language
  Philipp Seidl · Andreu Vall · Sepp Hochreiter · Günter Klambauer
- 2022 Poster: Lie Point Symmetry Data Augmentation for Neural PDE Solvers
  Johannes Brandstetter · Max Welling · Daniel Worrall
- 2022 Spotlight: Lie Point Symmetry Data Augmentation for Neural PDE Solvers
  Johannes Brandstetter · Max Welling · Daniel Worrall
- 2022 Poster: History Compression via Language Models in Reinforcement Learning
  Fabian Paischer · Thomas Adler · Vihang Patil · Angela Bitto-Nemling · Markus Holzleitner · Sebastian Lehner · Hamid Eghbal-zadeh · Sepp Hochreiter
- 2022 Spotlight: History Compression via Language Models in Reinforcement Learning
  Fabian Paischer · Thomas Adler · Vihang Patil · Angela Bitto-Nemling · Markus Holzleitner · Sebastian Lehner · Hamid Eghbal-zadeh · Sepp Hochreiter
- 2021 Spotlight: MC-LSTM: Mass-Conserving LSTM
  Pieter-Jan Hoedt · Frederik Kratzert · Daniel Klotz · Christina Halmich · Markus Holzleitner · Grey Nearing · Sepp Hochreiter · Günter Klambauer
- 2021 Poster: MC-LSTM: Mass-Conserving LSTM
  Pieter-Jan Hoedt · Frederik Kratzert · Daniel Klotz · Christina Halmich · Markus Holzleitner · Grey Nearing · Sepp Hochreiter · Günter Klambauer
- 2021: Talk 2
  Johannes Brandstetter
- 2021 Expo Talk Panel: Unique Research Opportunities in AI Algorithms, Health, Traffic, and Weather
  Johannes Brandstetter · Sepp Hochreiter · Michael Kopp · David P Kreil · Alina Mihai