Poster
Value Alignment Verification
Daniel Brown · Jordan Schneider · Anca Dragan · Scott Niekum

Tue Jul 20 09:00 AM -- 11:00 AM (PDT)

As humans interact with autonomous agents to perform increasingly complicated, potentially risky tasks, it is important to be able to efficiently evaluate an agent's performance and correctness. In this paper we formalize and theoretically analyze the problem of efficient value alignment verification: how can we efficiently test whether the behavior of another agent is aligned with a human's values? The goal is to construct a kind of "driver's test" that a human can give to any agent to verify value alignment via a minimal number of queries. We study alignment verification both with idealized humans who have an explicit reward function and with humans whose values are implicit. We analyze verification of exact value alignment for rational agents, propose and test heuristics for value alignment verification in gridworlds and a continuous autonomous driving domain, and prove that there exist sufficient conditions under which epsilon-alignment can be verified in any environment via an alignment test with constant query complexity.
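The paper gives the formal treatment; as a rough illustration only, the Python sketch below shows the flavor of such a query-based "driver's test" under the common assumption that reward is linear in trajectory features. The names (verify_alignment, agent_prefers) and the toy setup are hypothetical, not taken from the paper: the human poses preference queries over trajectory pairs, and a single disagreement with the human's reward falsifies alignment.

    import numpy as np

    def verify_alignment(human_w, query_pairs, agent_prefers, eps=0.0):
        # Hypothetical sketch: rewards are assumed linear in trajectory
        # features, so the human's value for feature counts phi is
        # human_w @ phi. An aligned rational agent must agree with the
        # human on every preference query.
        for phi_a, phi_b in query_pairs:
            human_prefers_a = human_w @ (phi_a - phi_b) > eps
            if agent_prefers(phi_a, phi_b) != human_prefers_a:
                return False  # one disagreement falsifies alignment
        return True

    # Toy usage: 2-dimensional features and a perfectly aligned agent.
    w = np.array([1.0, -0.5])
    pairs = [(np.array([3.0, 1.0]), np.array([1.0, 2.0])),
             (np.array([0.0, 0.0]), np.array([2.0, 3.0]))]
    aligned_agent = lambda a, b: w @ (a - b) > 0
    print(verify_alignment(w, pairs, aligned_agent))  # True

The efficiency question the paper studies is which (and how few) such queries suffice; the sketch simply checks agreement on a given query set.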

Author Information

Daniel Brown (UC Berkeley)
Jordan Schneider (UT Austin)
Anca Dragan (University of California, Berkeley)
Scott Niekum (University of Texas at Austin)
