
Agreement-on-the-Line: Predicting the Performance of Neural Networks under Distribution Shift
Christina Baek · Yiding Jiang · Aditi Raghunathan · Zico Kolter

Recently, Miller et al. showed that a model's in-distribution (ID) accuracy has a strong linear correlation with its out-of-distribution (OOD) accuracy on several OOD benchmarks, a phenomenon they dubbed "accuracy-on-the-line". While a useful tool for model selection, this fact does not help estimate the actual OOD performance of models without access to a labeled OOD validation set. In this paper, we show that a similarly surprising phenomenon also holds for the agreement between pairs of neural network classifiers: whenever accuracy-on-the-line holds, the OOD agreement between the predictions of any two neural networks is linearly correlated with their ID agreement. Furthermore, we observe that the slope and bias of OOD vs. ID agreement closely match those of OOD vs. ID accuracy. This phenomenon, which we call agreement-on-the-line, has important practical applications: without any labeled data, we can predict the OOD accuracy of classifiers, since OOD agreement can be estimated with just unlabeled data. Our prediction algorithm outperforms previous methods both in shifts where agreement-on-the-line holds and, surprisingly, when accuracy is not on the line.
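The abstract's core idea can be sketched as follows. This is a minimal illustration, not the paper's full algorithm (which additionally applies a probit scaling to accuracies and agreements): agreement between two classifiers requires no labels, so the linear trend of OOD vs. ID agreement can be fit from unlabeled OOD data, and its slope and bias reused to map a model's ID accuracy to a predicted OOD accuracy. All function names here are illustrative assumptions.

```python
import numpy as np

def agreement(preds_a, preds_b):
    """Fraction of examples on which two classifiers predict the same label.
    Computable on unlabeled data -- no ground-truth labels needed."""
    return float(np.mean(np.asarray(preds_a) == np.asarray(preds_b)))

def predict_ood_accuracy(id_agreements, ood_agreements, id_accuracy):
    """Fit the OOD-vs-ID agreement line over many model pairs, then reuse
    its slope/bias to predict OOD accuracy from ID accuracy, assuming the
    agreement line matches the accuracy line (the paper's observation)."""
    slope, bias = np.polyfit(id_agreements, ood_agreements, deg=1)
    return slope * id_accuracy + bias

# Illustrative usage with synthetic agreement statistics for several model pairs:
id_ag = np.array([0.80, 0.85, 0.90, 0.95])   # agreement on ID validation data
ood_ag = np.array([0.67, 0.72, 0.76, 0.81])  # agreement on unlabeled OOD data
estimated_ood_acc = predict_ood_accuracy(id_ag, ood_ag, id_accuracy=0.92)
```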

Author Information

Christina Baek (Carnegie Mellon University)
Yiding Jiang
Aditi Raghunathan (Stanford University)
Zico Kolter (Carnegie Mellon University / Bosch Center for AI)
