Biological agents interact meaningfully with their environment even in the absence of a reward signal. In such settings, an agent can learn preferred modes of behaviour that lead to predictable states, a prerequisite for survival. In this paper, we pursue the notion that this learnt behaviour can be a consequence of reward-free preference learning that ensures an appropriate trade-off between exploration and preference satisfaction. To this end, we introduce a model-based Bayesian agent equipped with a preference learning mechanism (Pepper) based on conjugate priors. These conjugate priors augment the expected free energy planner, allowing the agent to learn preferences over states (or outcomes) across time. Importantly, our approach enables the agent to learn preferences that encourage adaptive behaviour at test time. We illustrate this in the OpenAI Gym FrozenLake and 3D mini-world environments, with and without volatility. In a constant environment, these agents learn confident (i.e., precise) preferences and act to satisfy them. Conversely, in a volatile setting, perpetual preference uncertainty maintains exploratory behaviour. Our experiments suggest that learnable (reward-free) preferences entail a trade-off between exploration and preference satisfaction. Pepper thus offers a straightforward framework for designing adaptive agents when reward functions cannot be predefined, as is often the case in real-world environments.
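To make the mechanism concrete, the sketch below illustrates one way conjugate (Dirichlet) priors can implement learnable, reward-free preferences: visited states accrue pseudo-counts, the expected log-preferences supply the pragmatic (preference-satisfaction) term of an expected free energy planner, and the total count acts as preference precision. This is a minimal illustration under our own assumptions, not the authors' implementation; the names `DirichletPreferences` and `pragmatic_value` and the forgetting factor `rho` are hypothetical.

```python
import numpy as np
from scipy.special import digamma


class DirichletPreferences:
    """Learnable (reward-free) prior preferences over discrete states."""

    def __init__(self, n_states: int, alpha0: float = 1.0):
        # alpha0 = 1 gives flat, maximally uncertain initial preferences.
        self.alpha = np.full(n_states, alpha0)

    def update(self, state: int) -> None:
        # Conjugate (Dirichlet-categorical) update: each visit adds a
        # pseudo-count, so reliably reachable states become preferred.
        self.alpha[state] += 1.0

    def decay(self, rho: float = 0.99) -> None:
        # One simple way to handle volatility (an assumption of this
        # sketch): forgetting keeps counts low, so preferences remain
        # uncertain and behaviour remains exploratory.
        self.alpha *= rho

    def log_preferences(self) -> np.ndarray:
        # Expected log-probability under the Dirichlet:
        # E[log p(s)] = digamma(alpha_s) - digamma(sum(alpha)).
        return digamma(self.alpha) - digamma(self.alpha.sum())

    def precision(self) -> float:
        # Total pseudo-count, a proxy for preference confidence.
        return float(self.alpha.sum())


def pragmatic_value(q_states: np.ndarray, log_C: np.ndarray) -> float:
    # Preference-satisfaction term that enters an expected free energy
    # planner: E_q[log p(s | C)] for a predicted state occupancy q.
    return float(q_states @ log_C)


if __name__ == "__main__":
    prefs = DirichletPreferences(n_states=4)
    for s in [2, 2, 2, 1]:          # trajectory in a constant environment
        prefs.update(s)
    print(prefs.log_preferences())   # state 2 is now preferred
    print(prefs.precision())         # grows as preferences sharpen
```

Under this scheme, a constant environment concentrates pseudo-counts on predictable states (high precision, preference satisfaction), whereas decayed counts under volatility keep the log-preferences near uniform, so the epistemic term of expected free energy continues to dominate and the agent keeps exploring.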
Author Information
Noor Sajid (WCHN, University College London)
Panagiotis Tigas (Oxford University)
Alexey Zakharov (Imperial College London)
Zafeirios Fountas (Huawei Technologies)
Karl Friston (University College London)