Poster in Workshop: Next Generation of AI Safety

AI Alignment with Changing and Influenceable Reward Functions

Micah Carroll · Davis Foote · Anand Siththaranjan · Stuart Russell · Anca Dragan

Keywords: preference changes; influence


Abstract:

Existing AI alignment approaches assume that preferences are static, which is unrealistic: our preferences change, and may even be influenced by our interactions with AI systems themselves. To clarify the consequences of incorrectly assuming static preferences, we introduce Dynamic Reward Markov Decision Processes (DR-MDPs), which explicitly model preference changes and AI influence. We show that despite its convenience, the static-preference assumption may undermine the soundness of existing alignment techniques, leading them to implicitly reward AI systems for influencing user preferences in ways users may not truly want. We then explore potential solutions by formalizing different notions of AI alignment which account for preference change from the get-go. Comparing the strengths and limitations of 8 such notions of alignment, we find that they all either err towards causing undesirable AI influence or are overly risk-averse, suggesting that there may not exist a straightforward solution to problems of changing preferences. Since changing preferences cannot be avoided in real-world settings, it is all the more important to handle these issues with care, balancing risks and capabilities. We hope our work can provide conceptual clarity and constitute a first step towards AI alignment practices which explicitly account for (and contend with) the changing and influenceable nature of human preferences.
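For a concrete picture of the kind of object the abstract describes, the following is a minimal LaTeX sketch of a DR-MDP-style formalism: a standard MDP whose reward function is indexed by a preference parameter that evolves over time and can be influenced by the agent's actions. The specific notation (the parameter $\theta_t$, its dynamics $D$, and the tuple components) is assumed here for illustration and may differ from the paper's exact definition.

% Illustrative sketch only: a DR-MDP-style tuple in which the reward
% parameter \theta_t changes over time and can be influenced by actions.
% Notation is assumed and may differ from the paper's formal definition.
\[
\mathcal{M} \;=\; \langle S,\, A,\, T,\, \Theta,\, D,\, \{r_\theta\}_{\theta \in \Theta},\, \gamma \rangle
\]
\[
s_{t+1} \sim T(\cdot \mid s_t, a_t), \qquad
\theta_{t+1} \sim D(\cdot \mid \theta_t, s_t, a_t), \qquad
\text{reward at time } t:\ r_{\theta_t}(s_t, a_t).
\]
% The static-preference assumption corresponds to the special case
% \theta_{t+1} = \theta_t for all t; optimizing return under that assumption
% can implicitly reward the agent for steering \theta_t through its actions.

Under this reading, the paper's central tension is visible directly in the sketch: any objective that aggregates $r_{\theta_t}$ over time must take a stance on which preference parameter "counts", which is where the different notions of alignment compared in the paper diverge.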
