

Oral presentation
Workshop: 2nd Workshop on Models of Human Feedback for AI Alignment (MoFA)
Fri, Jul 18, 2025 • 10:10 AM – 10:25 AM PDT

Few-shot Steerable Alignment: Adapting Rewards and LLM Policies with Neural Processes

Katarzyna Kobalczyk · Claudio Fanconi · Hao Sun · Mihaela van der Schaar

Abstract
