

Spotlight

How to Learn when Data Reacts to Your Model: Performative Gradient Descent

Zachary Izzo · Lexing Ying · James Zou

[ Livestream: Visit Learning Theory 5 ] [ Paper ]

Abstract:

Performative distribution shift captures the setting where the choice of which ML model is deployed changes the data distribution. For example, a bank that uses the number of open credit lines to determine a customer's risk of default on a loan may induce customers to open more credit lines in order to improve their chances of being approved. Because of the interactions between the model and the data distribution, finding the optimal model parameters is challenging. Prior work in this area has focused on finding stable points, which can be far from optimal. Here we introduce \emph{performative gradient descent} (PerfGD), an algorithm for computing performatively optimal points. Under regularity assumptions on the performative loss, PerfGD is the first algorithm that provably converges to an optimal point. PerfGD explicitly captures how changes in the model affect the data distribution and is simple to use. We support our findings with theory and experiments.
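To make the two-term gradient idea concrete, here is a minimal Python/NumPy sketch of a PerfGD-style loop in a toy environment. The setup (a Gaussian whose mean shifts linearly with the deployed parameter, a ridge-regularized squared loss, and a history-based least-squares estimate of the shift) is an illustrative assumption chosen for tractability, not the paper's exact algorithm or experimental setting; all names (`sample`, `MU0`, `EPS`, `LAM`, `perfgd`) are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical toy environment (illustrative assumptions, not the paper's setup):
# deploying theta shifts the data mean, D(theta) = N(MU0 + EPS*theta, SIGMA^2),
# and the point loss is ridge-regularized squared error.
MU0, EPS, SIGMA, LAM = 1.0, 0.5, 1.0, 0.1

def sample(theta, n=2000):
    """Draw data from the distribution induced by deploying theta."""
    return rng.normal(MU0 + EPS * theta, SIGMA, size=n)

def point_loss(z, theta):
    return (z - theta) ** 2 + LAM * theta ** 2

def perfgd(theta0=0.0, lr=0.1, steps=150):
    """PerfGD-style loop: the performative gradient is the usual gradient
    (distribution held fixed) plus a term capturing how the distribution
    itself moves with theta."""
    thetas = [theta0]
    z = sample(theta0)
    mus = [z.mean()]
    for _ in range(steps):
        theta = thetas[-1]
        # Estimate dmu/dtheta from the deployment history (least-squares slope;
        # a finite-difference estimator over recent deployments also works).
        dmu = np.polyfit(thetas, mus, 1)[0] if len(thetas) >= 2 else 0.0
        mu_hat = mus[-1]
        # Term 1: ordinary gradient E[dl/dtheta] with the distribution held fixed.
        grad_fixed = np.mean(-2.0 * (z - theta)) + 2.0 * LAM * theta
        # Term 2: distribution-shift term, (dmu/dtheta) * E[l(z,theta) * score],
        # using the Gaussian mean score (z - mu)/sigma^2.
        score = (z - mu_hat) / SIGMA ** 2
        grad_shift = dmu * np.mean(point_loss(z, theta) * score)
        theta = theta - lr * (grad_fixed + grad_shift)
        z = sample(theta)  # redeploy: the data reacts to the new model
        thetas.append(theta)
        mus.append(z.mean())
    return thetas[-1]

# Closed-form performative optimum for this toy: MU0*(1-EPS) / ((1-EPS)^2 + LAM).
theta_opt = MU0 * (1 - EPS) / ((1 - EPS) ** 2 + LAM)
print(f"PerfGD: {perfgd():.3f}, optimum: {theta_opt:.3f}")  # both ~ 1.43
```

In this toy, dropping `grad_shift` reduces the loop to ordinary gradient descent on the observed data, which converges to the performatively stable point MU0/(1 + LAM - EPS) ≈ 1.67 rather than the optimum ≈ 1.43; the shift term is what closes that gap.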
