Evaluating Self-Supervised Learning via Risk Decomposition
Yann Dubois · Tatsunori Hashimoto · Percy Liang

Wed Jul 26 05:00 PM -- 06:30 PM (PDT) @ Exhibit Hall 1 #220

Self-supervised learning (SSL) is typically evaluated using a single metric (linear probing on ImageNet), which neither provides insight into tradeoffs between models nor highlights how to improve them. To address this, we propose an SSL risk decomposition, which generalizes the classical approximation-estimation decomposition. Our decomposition consists of four error terms: approximation, representation usability, probe generalization, and encoder generalization. We provide efficient estimators for each term and use them to analyze the effect of 30 design choices on 169 SSL vision models evaluated on ImageNet. Our analysis gives valuable insights for designing and using SSL models. For example, it highlights the main source of errors and shows how to improve SSL in specific settings (full- vs few-shot) by trading off error components.
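The four terms form an additive decomposition of the total risk. As a minimal illustrative sketch (the risk values and the helper below are hypothetical, not the paper's estimators): if one measures the risk under four progressively more realistic conditions, each error term is the successive difference, and the terms sum to the final test risk.

```python
def decompose(r_approx, r_usability, r_probe, r_encoder):
    """Split total risk into four additive SSL error terms.

    Each argument is the risk measured under a progressively more
    realistic condition; each term is a successive difference, so
    the four terms sum to r_encoder (the total test risk).
    """
    return {
        "approximation": r_approx,
        "representation_usability": r_usability - r_approx,
        "probe_generalization": r_probe - r_usability,
        "encoder_generalization": r_encoder - r_probe,
    }

# Hypothetical risk values for illustration only.
terms = decompose(r_approx=0.05, r_usability=0.12, r_probe=0.18, r_encoder=0.25)
total = sum(terms.values())  # equals the final risk, 0.25
```

This additivity is what lets the analysis attribute a model's total error to a specific stage and trade components off against each other (e.g. in full- vs few-shot settings).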

Author Information

Yann Dubois (Stanford University)
Tatsunori Hashimoto (Stanford University)
Percy Liang (Stanford University)
