On the Computational Complexity of Performative Prediction
Ioannis Anagnostides ⋅ Rohan Chauhan ⋅ Ioannis Panageas ⋅ Tuomas Sandholm ⋅ Jingming Yan
Abstract
Performative prediction captures the phenomenon where deploying a predictive model shifts the underlying data distribution. While simple retraining dynamics are known to converge linearly when the performative effects are weak (sensitivity $\rho < 1$), the complexity of the regime $\rho \geq 1$ had remained open. In this paper, we establish a sharp phase transition: computing an $\epsilon$-performatively stable point is PPAD-complete---and thus polynomial-time equivalent to computing Nash equilibria in general-sum games---even when $\rho = 1 + O(\epsilon)$. This intractability persists even in the ostensibly simple setting of a quadratic loss function and linear distribution shifts. One of our key technical contributions is to extend this PPAD-hardness result to general convex domains, which is of broader interest in the complexity of variational inequalities. Finally, we address the special case of strategic classification, showing that computing a strategic local optimum is PLS-hard.
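To illustrate the phase transition the abstract describes, the sketch below simulates repeated retraining in a one-dimensional instance of the "quadratic loss with linear distribution shift" setting. The specific parameterization (loss $\ell(z;\theta) = \tfrac{1}{2}(z-\theta)^2$, deployed model $\theta$ shifting the data mean to $\mu + \rho\theta$) is an assumed toy instance chosen for illustration, not the paper's construction; the names `retrain`, `mu`, and `rho` are hypothetical.

```python
# Toy sketch (assumed instance): repeated risk minimization under a
# linear mean shift. Loss l(z; theta) = (z - theta)^2 / 2, and deploying
# theta moves the data mean to mu + rho * theta, where rho plays the role
# of the performative sensitivity.
# The exact minimizer of the risk against the shifted distribution is its
# mean, so retraining reduces to the fixed-point iteration below.

def retrain(mu: float, rho: float, theta0: float = 0.0, steps: int = 50) -> float:
    """Iterate theta_{t+1} = mu + rho * theta_t (exact retraining update)."""
    theta = theta0
    for _ in range(steps):
        theta = mu + rho * theta
    return theta

# rho < 1: linear convergence to the performatively stable point mu / (1 - rho)
print(abs(retrain(mu=1.0, rho=0.5) - 2.0) < 1e-9)   # True

# rho > 1: the same dynamics diverge, the regime the paper shows is hard
print(abs(retrain(mu=1.0, rho=1.5, steps=20)) > 1e3)  # True
```

In this toy case the retraining error contracts by a factor of exactly $\rho$ per step when $\rho < 1$, matching the linear convergence rate mentioned in the abstract, while for $\rho > 1$ the iterates blow up.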