

Workshop

Theoretical Foundations of Reinforcement Learning

Emma Brunskill · Thodoris Lykouris · Max Simchowitz · Wen Sun · Mengdi Wang

Keywords:  Bandits    Representation Learning    Reinforcement Learning    sample-efficient exploration    policy gradient    safety in RL    human-in-the-loop RL    multi-agent RL    off-policy learning

In many settings, such as education, healthcare, drug design, robotics, transportation, and strategic games where better-than-human performance is sought, decisions must be made sequentially. This poses two interconnected algorithmic and statistical challenges: exploring effectively to learn information about the underlying dynamics, and planning effectively using that information. Reinforcement Learning (RL) is the main paradigm that tackles both challenges simultaneously, which is essential in the aforementioned applications. In recent years, reinforcement learning has seen enormous progress, both in solidifying our understanding of its theoretical underpinnings and in applying these methods in practice.

This workshop aims to highlight recent theoretical contributions, with an emphasis on addressing significant challenges on the road ahead. Such theoretical understanding is important for designing algorithms with robust and compelling performance in real-world applications. As part of the ICML 2020 conference, this workshop will be held virtually. It will feature keynote talks from six reinforcement learning experts tackling different significant facets of RL. It will also offer the opportunity for contributed material (see the call for papers and our outstanding program committee below). The authors of each accepted paper will pre-record a 10-minute presentation and will also appear in a poster session. Finally, the workshop will feature a panel discussing important challenges on the road ahead.

Timezone: America/Los_Angeles

Schedule