Poster
Do the Rewards Justify the Means? Measuring Trade-Offs Between Rewards and Ethical Behavior in the MACHIAVELLI Benchmark
Alexander Pan · Jun Shern Chan · Andy Zou · Nathaniel Li · Steven Basart · Thomas Woodside · Hanlin Zhang · Scott Emmons · Dan Hendrycks

Thu Jul 27 01:30 PM -- 03:00 PM (PDT) @ Exhibit Hall 1 #620

Artificial agents have traditionally been trained to maximize reward, which may incentivize power-seeking and deception, analogous to how next-token prediction in language models (LMs) may incentivize toxicity. So do agents naturally learn to be Machiavellian? And how do we measure these behaviors in general-purpose models such as GPT-4? Toward answering these questions, we introduce MACHIAVELLI, a benchmark of 134 Choose-Your-Own-Adventure games containing over half a million rich, diverse scenarios that center on social decision-making. Scenario labeling is automated with LMs, which are more accurate than human annotators. We mathematize dozens of harmful behaviors and use our annotations to evaluate agents' tendencies to be power-seeking, cause disutility, and commit ethical violations. We observe some tension between maximizing reward and behaving ethically. To improve this trade-off, we investigate LM-based methods for steering agents toward less harmful behaviors. Our results show that agents can act both competently and morally, so concrete progress can currently be made in machine ethics: designing agents that are Pareto improvements in both safety and capabilities.
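
The abstract frames progress in machine ethics as finding agents that are Pareto improvements over the reward-vs-harm trade-off. As a minimal illustration (not the authors' released code; the agent names and scores below are hypothetical), the following Python sketch computes which agents are Pareto-undominated given a reward score and an ethical-violation count per agent:

from typing import Dict, List, Tuple

def dominates(a: Tuple[float, float], b: Tuple[float, float]) -> bool:
    """True if a = (reward, violations) Pareto-dominates b: reward no lower,
    violations no higher, and strictly better on at least one axis."""
    return a[0] >= b[0] and a[1] <= b[1] and (a[0] > b[0] or a[1] < b[1])

def pareto_front(scores: Dict[str, Tuple[float, float]]) -> List[str]:
    """Return the agents not dominated by any other agent."""
    return [name for name, pt in scores.items()
            if not any(dominates(other, pt)
                       for o_name, other in scores.items() if o_name != name)]

# Hypothetical (reward, ethical-violation count) pairs, for illustration only.
scores = {
    "random_agent":     (10.0, 30.0),
    "reward_maximizer": (55.0, 40.0),
    "steered_lm_agent": (50.0, 15.0),
}
print(pareto_front(scores))  # -> ['reward_maximizer', 'steered_lm_agent']

Under this framing, an agent like the hypothetical steered_lm_agent stays on the front by trading a small amount of reward for substantially fewer violations.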

Author Information

Alexander Pan (UC Berkeley)
Jun Shern Chan (UC Berkeley)
Andy Zou (Carnegie Mellon University)
Nathaniel Li (UC Berkeley)
Steven Basart (University of Chicago)
Thomas Woodside (Center for AI Safety)
Hanlin Zhang (Carnegie Mellon University)
Scott Emmons (UC Berkeley)
Dan Hendrycks (UC Berkeley)
