Tutorial
Bridging Learning and Decision Making
Dylan Foster · Alexander Rakhlin
Moderator: Pulkit Agrawal
Ballroom 1 & 2
This tutorial will give an overview of the theoretical foundations of interactive decision making (high-dimensional/contextual bandits, reinforcement learning, and beyond), a promising paradigm for developing AI systems capable of intelligently exploring unknown environments. The tutorial will focus on connections and parallels between supervised learning/estimation and decision making, and will build on recent research which provides (i) sample complexity measures for interactive decision making that are necessary and sufficient for sample-efficient learning, and (ii) unified algorithm design principles that achieve optimal sample complexity. Using this unified approach as a foundation, the main aim of the tutorial will be to give a bird’s-eye view of the statistical landscape of reinforcement learning (e.g., what modeling assumptions lead to sample-efficient algorithms). Topics covered will range from basic challenges and solutions (exploration in tabular RL, policy gradient methods, contextual bandits) to the current frontier of understanding. We will also highlight practical algorithms.
Schedule
Mon 10:00 a.m. - 10:55 a.m. | Bridging Learning and Decision Making: Part I (Tutorial) | Alexander Rakhlin
Mon 10:55 a.m. - 11:00 a.m. | Q&A (Q&A) | Dylan Foster · Alexander Rakhlin
Mon 11:00 a.m. - 11:55 a.m. | Bridging Learning and Decision Making: Part II (Tutorial) | Dylan Foster
Mon 11:55 a.m. - 12:00 p.m. | Q&A II (Q&A) | Dylan Foster · Alexander Rakhlin