Contributed Talk
in
Workshop: PAC-Bayes Meets Interactive Learning

Experiment Planning with Function Approximation

Aldo Pacchiano · Jonathan Lee · Emma Brunskill


Abstract:

We study the problem of experiment planning with function approximation in contextual bandit problems. In settings where deploying adaptive algorithms carries significant overhead, for example when the execution of the data collection policies must be distributed or a human in the loop is needed to implement these policies, producing a set of data collection policies in advance is paramount. We study the setting where a large dataset of contexts (but not rewards) is available and may be used by the learner to design an effective data collection strategy. Although this problem has been well studied when rewards are linear, results are still missing for more complex reward models. In this work we propose two experiment planning strategies compatible with function approximation: first, an eluder planning and sampling procedure that recovers optimality guarantees depending on the eluder dimension of the reward function class; second, we show that the uniform sampler achieves competitive rates in the setting where the number of actions is small.
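The non-adaptive baseline discussed above can be illustrated with a minimal sketch: actions are assigned uniformly at random to an offline context dataset in advance, rewards are collected, and a reward model is then fit offline. All names here are illustrative, and the linear reward model is an assumption made for concreteness, not the paper's construction.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative setup: d-dimensional contexts, K actions, linear rewards.
d, K, n = 5, 3, 2000
theta = rng.normal(size=(K, d))          # unknown per-action reward parameters
contexts = rng.normal(size=(n, d))       # large offline dataset of contexts

# Non-adaptive "uniform sampler": the data collection policy is fixed in
# advance, assigning each context an action drawn uniformly at random.
actions = rng.integers(K, size=n)
rewards = np.einsum("nd,nd->n", contexts, theta[actions]) + rng.normal(scale=0.1, size=n)

# Offline step: fit one least-squares reward model per action from the
# uniformly collected data, then act greedily on fresh contexts.
theta_hat = np.zeros((K, d))
for a in range(K):
    mask = actions == a
    theta_hat[a], *_ = np.linalg.lstsq(contexts[mask], rewards[mask], rcond=None)

test_contexts = rng.normal(size=(100, d))
greedy = np.argmax(test_contexts @ theta_hat.T, axis=1)
optimal = np.argmax(test_contexts @ theta.T, axis=1)
accuracy = (greedy == optimal).mean()
```

With a small action set, each action receives roughly n/K uniform samples, which is why this simple scheme can remain competitive in that regime.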
