Workshop

2nd Workshop on Models of Human Feedback for AI Alignment (MoFA)

Belen Martin Urcelay · Micah Carroll · Maria Teresa Parreira · Thomas Kleine Buening · Andreas Krause · Anca Dragan

Our workshop brings together experts in machine learning, cognitive science, behavioral psychology, and economics to explore human-AI alignment through the lens of human (and AI) feedback mechanisms, their mathematical models, and their practical implications. By fostering collaboration between the technical and behavioral science communities, we aim to develop more realistic models of human feedback that can better inform the development of aligned AI systems.
