

Workshop

PAC-Bayes Meets Interactive Learning

Hamish Flynn · Maxime Heuillet · Audrey Durand · Melih Kandemir · Benjamin Guedj

Meeting Room 314

Interactive learning encompasses online learning, continual learning, active learning, bandits, reinforcement learning, and other settings where an algorithm must learn while interacting with a continuous stream of data. Such problems often involve exploration-exploitation dilemmas, which can be elegantly handled with probabilistic and Bayesian methods. Deep interactive learning methods leveraging neural networks are typically used when the setting involves rich observations, such as images. As a result, both probabilistic and deep interactive learning methods are growing in popularity. However, acquiring observations through interaction with the environment can be costly. There is therefore great interest in understanding when probabilistic and deep interactive learning methods can be expected, or guaranteed, to be sample-efficient. Within statistical learning theory, PAC-Bayesian theory is designed for the analysis of probabilistic learning methods, and it has recently been shown to be well-suited to the analysis of deep learning methods. This workshop aims to bring together researchers from the broad Bayesian and interactive learning communities in order to foster new ideas that could advance PAC-Bayesian theory, both theoretically and empirically, in interactive learning settings.
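For context, one standard form of the kind of guarantee PAC-Bayesian theory provides is McAllester's bound: for any data-independent prior $P$ over predictors and any posterior $Q$, with probability at least $1 - \delta$ over an i.i.d. sample of size $n$,
$$
L(Q) \;\le\; \widehat{L}_n(Q) + \sqrt{\frac{\mathrm{KL}(Q \,\|\, P) + \ln\frac{2\sqrt{n}}{\delta}}{2n}},
$$
where $L(Q)$ and $\widehat{L}_n(Q)$ denote the expected and empirical risks of a randomized predictor drawn from $Q$, and $\mathrm{KL}$ is the Kullback-Leibler divergence. Bounds of this form tie the gap between training and test performance to the divergence between posterior and prior, which is what makes PAC-Bayesian analysis a natural fit for probabilistic and, via suitable extensions, interactive learning methods.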

Timezone: America/Los_Angeles

Schedule