Strategic Representation
Vineet Nair · Ganesh Ghalme · Inbal Talgam-Cohen · Nir Rosenfeld

Thu Jul 21 08:55 AM -- 09:00 AM (PDT) @ Room 307

Humans have come to rely on machines for reducing excessive information to manageable representations. But this reliance can be abused -- strategic machines might craft representations that manipulate their users. How can a user make good choices based on strategic representations? We formalize this as a learning problem, and pursue algorithms for decision-making that are robust to manipulation. In our main setting of interest, the system represents attributes of an item to the user, who then decides whether or not to consume. We model this interaction through the lens of strategic classification (Hardt et al. 2016), but reversed: the user, who learns, plays first; and the system, which responds, plays second. The system must respond with representations that reveal "nothing but the truth", but need not reveal the entire truth; thus, the user faces the problem of learning set functions under a strategic constraint. This presents distinct algorithmic and statistical challenges. Our main result is a learning algorithm that minimizes error despite strategic representations, and our analysis sheds light on the trade-off between learning effort and susceptibility to manipulation.
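The interaction described above can be illustrated with a toy simulation. This is a minimal sketch, not the paper's algorithm: the attribute names, decision rules, and the brute-force best-response search are all illustrative assumptions. The user commits to a rule over revealed attribute sets; the system then responds with any subset of the item's true attributes ("nothing but the truth", though not the whole truth) that induces consumption.

```python
from itertools import combinations

def naive_rule(revealed):
    # Hypothetical user rule: accept whenever no red flag is visible.
    # Exploitable, since the system can simply omit the red flag.
    return "cheap" in revealed and "defective" not in revealed

def robust_rule(revealed):
    # Hypothetical robust rule: accept only on positive evidence that
    # omission cannot fake (the system may hide attributes but not
    # invent them).
    return "warranty" in revealed

def best_response(true_attrs, rule):
    # The system searches over truthful representations (subsets of the
    # item's true attributes) for one the user accepts; if none exists,
    # it reveals everything.
    attrs = sorted(true_attrs)
    for r in range(len(attrs) + 1):
        for subset in combinations(attrs, r):
            if rule(set(subset)):
                return set(subset)
    return set(true_attrs)

bad_item = {"cheap", "defective"}  # no warranty, has a hidden defect
print(best_response(bad_item, naive_rule))   # the defect is omitted; the naive user consumes
print(best_response(bad_item, robust_rule))  # 'warranty' cannot be fabricated; the user rejects
```

Against the naive rule, the system reveals only `{"cheap"}` and the user is manipulated into consuming a defective item, even though the naive rule would have rejected the full truth; the robust rule cannot be gamed by omission, sketching the kind of robustness the paper's learning algorithm targets.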

Author Information

Vineet Nair (Technion)
Ganesh Ghalme (Indian Institute of Technology, Hyderabad)
Inbal Talgam-Cohen (Technion)
Nir Rosenfeld (Technion)
