
PAC-Bayesian Offline Contextual Bandits With Guarantees

Otmane Sakhi · Pierre Alquier · Nicolas Chopin

Exhibit Hall 1 #308
This paper introduces a new principled approach for off-policy learning in contextual bandits. Unlike previous work, our approach does not derive learning principles from intractable or loose bounds. We analyse the problem through the PAC-Bayesian lens, interpreting policies as mixtures of decision rules. This allows us to propose novel generalization bounds and provide tractable algorithms to optimize them. We prove that the derived bounds are tighter than their competitors, and can be optimized directly to confidently improve upon the logging policy offline. Our approach learns policies with guarantees, uses all available data and does not require tuning additional hyperparameters on held-out sets. We demonstrate through extensive experiments the effectiveness of our approach in providing performance guarantees in practical scenarios.
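To make the setting concrete, the sketch below illustrates the generic off-policy evaluation problem the abstract refers to: estimating a target policy's value from data logged by a different policy, and pairing the estimate with a confidence lower bound that can be optimized to guarantee improvement over the logging policy. This is a minimal illustration with a crude Hoeffding-style bound, not the paper's PAC-Bayesian bounds; all function names, the synthetic data, and the clipping constant are assumptions for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic logged bandit feedback: actions drawn from a uniform logging
# policy, with a reward of 1 whenever action 0 is played (illustrative setup).
n, n_actions = 1000, 4
logging_probs = np.full(n, 1.0 / n_actions)   # uniform logging policy
actions = rng.integers(0, n_actions, size=n)
rewards = (actions == 0).astype(float)

def ips_value(target_probs, actions, rewards, logging_probs, clip=10.0):
    """Clipped inverse-propensity-scoring estimate of a target policy's value."""
    w = target_probs[np.arange(len(actions)), actions] / logging_probs
    return np.mean(np.minimum(w, clip) * rewards)

def lower_bound(estimate, n, delta=0.05, clip=10.0):
    """Hoeffding-style lower confidence bound holding with probability 1 - delta.
    A stand-in for the tighter PAC-Bayesian bounds derived in the paper."""
    return estimate - clip * np.sqrt(np.log(1.0 / delta) / (2.0 * n))

# A candidate target policy concentrating mass on the rewarding action.
target = np.full((n, n_actions), 0.1 / (n_actions - 1))
target[:, 0] = 0.9

est = ips_value(target, actions, rewards, logging_probs)
lb = lower_bound(est, n)
print(f"IPS estimate: {est:.3f}, lower bound: {lb:.3f}")
```

In the paper's approach, an objective of this lower-bound form is made tight enough to optimize directly over (mixtures of) policies, so that the learned policy comes with a certified value exceeding that of the logging policy.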