

Invited Talk in Workshop: 2nd Workshop on Models of Human Feedback for AI Alignment (MoFA)
Fri, Jul 18, 2025 • 1:30 PM – 2:05 PM PDT

Personalization and pluralistic alignment of LLMs via reinforcement learning fine-tuning

Natasha Jaques

Abstract

Video
