

Oral in Workshop: Models of Human Feedback for AI Alignment

AI Alignment with Changing and Influenceable Reward Functions

Micah Carroll · Davis Foote · Anand Siththaranjan · Stuart Russell · Anca Dragan

[ Project Page ]
Fri 26 Jul 12:50 a.m. PDT — 1 a.m. PDT
 
Presentation: Models of Human Feedback for AI Alignment
Fri 26 Jul midnight PDT — 8 a.m. PDT

Abstract:

Existing AI alignment approaches assume that preferences are static, which is unrealistic: our preferences change, and may even be influenced by our interactions with AI systems themselves. To clarify the consequences of incorrectly assuming static preferences, we introduce Dynamic Reward Markov Decision Processes (DR-MDPs), which explicitly model preference changes and AI influence. We show that despite its convenience, the static-preference assumption may undermine the soundness of existing alignment techniques, leading them to implicitly reward AI systems for influencing user preferences in ways users may not truly want. We then explore potential solutions. First, we offer a unifying perspective on how an agent's optimization horizon may partially help reduce undesirable AI influence. Then, we formalize different notions of AI alignment which account for preference change from the get-go. Comparing the strengths and limitations of 8 such notions of alignment, we find that they all either err towards causing undesirable AI influence or are overly risk-averse, suggesting that there may not exist a straightforward solution to problems of changing preferences. As grappling with changing preferences is unavoidable in real-world settings, it is all the more important to handle these issues with care, balancing risks and capabilities. We hope our work can provide conceptual clarity and constitute a first step towards AI alignment practices which explicitly account for (and contend with) the changing and influenceable nature of human preferences.
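
For concreteness, here is a minimal sketch of what a DR-MDP could look like; the notation below is chosen for exposition and is not necessarily the paper's own. The idea is to augment the standard MDP tuple with a reward parameter that evolves over time and can be influenced by the agent:

\[ \mathcal{M} = \langle S, \Theta, A, T, \{R_\theta\}_{\theta \in \Theta}, T_\Theta, \gamma \rangle \]

Here $S$ and $A$ are the state and action spaces, $T(s' \mid s, a)$ is the environment dynamics, $\Theta$ indexes possible reward functions (the user's preferences at a given time), $R_\theta(s, a)$ is the reward under preferences $\theta$, $T_\Theta(\theta' \mid \theta, s, a)$ describes how preferences change (potentially as a function of the agent's action $a$, i.e., AI influence), and $\gamma$ is a discount factor. Under this reading, the static-preference assumption made by standard alignment methods corresponds to the degenerate case $T_\Theta(\theta' \mid \theta, s, a) = \mathbb{1}[\theta' = \theta]$.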
