

Poster in Workshop on Socially Responsible Machine Learning

Stateful Performative Gradient Descent

Zachary Izzo · James Zou · Lexing Ying


Abstract:

A recent line of work has focused on training machine learning (ML) models in the performative setting, i.e., when the data distribution reacts to the deployed model. The goal in this setting is to compute a model that both induces a favorable distribution and performs well on that induced distribution, thereby minimizing the test loss. Previous work on finding an optimal model assumes that the data distribution adapts immediately to the deployed model. In practice, however, this may not be the case, as the population may take time to adapt to the model. In this work, we propose an algorithm for minimizing the performative loss even in the presence of these stateful effects.
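To make the setting concrete, the sketch below simulates stateful performative feedback: the deployed parameter shifts the population, but the population only moves a fraction of the way toward its long-run response each round. This is an illustrative toy model with a naive repeated-gradient-descent baseline, not the authors' algorithm; the Gaussian response map, the adaptation rate `delta`, and all function names are assumptions made for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_distribution(mean, n=1000):
    # Sample a 1-D Gaussian; the mean encodes the current population state.
    return rng.normal(loc=mean, scale=1.0, size=n)

def induced_mean(theta, eps=0.5):
    # Long-run mean the population would settle at under model theta
    # (a simple location-shift response; purely illustrative).
    return eps * theta

delta = 0.2   # adaptation rate: fraction of the gap closed each round (assumed)
theta = 1.0   # model parameter
lr = 0.1      # step size
mean_t = 0.0  # current population state

for t in range(200):
    data = sample_distribution(mean_t)
    # Gradient of the squared loss E[(theta - z)^2] on the *current*,
    # lagged distribution (the naive baseline ignores the feedback).
    grad = np.mean(2 * (theta - data))
    theta -= lr * grad
    # Stateful dynamics: the population drifts only partway toward the
    # distribution induced by the newly deployed model.
    mean_t = (1 - delta) * mean_t + delta * induced_mean(theta)

print(f"theta = {theta:.3f}, population mean = {mean_t:.3f}")
```

When `delta < 1`, the distribution observed at each step lags behind the one the current model will eventually induce, which is exactly the gap between the stateful setting studied here and prior work that assumes instantaneous adaptation.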
