ICML 2022


Principles of Distribution Shift (PODS)

Elan Rosenfeld · Saurabh Garg · Shibani Santurkar · Jamie Morgenstern · Hossein Mobahi · Zachary Lipton · Andrej Risteski

Ballroom 3

The importance of robust predictions continues to grow as machine learning models are increasingly relied upon in high-stakes settings. Ensuring reliability in real-world applications remains an enormous challenge, particularly because data in the wild frequently differs substantially from the data on which models were trained. This phenomenon, broadly known as “distribution shift”, has become a major recent focus of the research community.

With the growing interest in addressing this problem has come growing awareness of the many possible meanings of “distribution shift” and the importance of understanding the distinctions between them: which types of shift occur in the real world, and under which of these is generalization feasible? Negative results seem just as common as positive ones; where provable generalization is possible, it often depends on strong structural assumptions whose likelihood of holding in reality is questionable. Existing approaches often lack rigor and clarity with regard to the precise problem they are trying to solve. Some work has been done to precisely define distribution shift and to produce benchmarks that properly reflect real-world distribution shift, but overall there seems to be little communication between the communities tackling foundations and applications, respectively. Recent strides have been made to move beyond tinkering, bringing much-needed rigor to the field, and we hope to encourage this effort by opening a dialogue to share ideas between these communities.
