

Invited Talk (Livestreamed) in Workshop: Principles of Distribution Shift (PODS)

Can Fairness be Retained Over Distribution Shifts?

Shai Ben-David


Abstract:

Given the inherent difficulty of learning a model that is robust to data distribution shifts, much research focus has 'shifted' to learning data representations that are useful for learning good models for downstream, yet unknown, data distributions.

The primary aim of such representations is accuracy generalization.

In this talk I wish to address an additional desideratum—model fairness.

At a high level, the question I am interested in is: to what extent, and under what assumptions, can one come up with data representations that are both "fair" and allow accurate predictions when applied to downstream tasks about which one has only limited information?

I will address different possible fairness requirements and provide some initial insights on what can, and more often what cannot, be achieved along these lines.
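To make the question concrete, here is one commonly studied fairness requirement, stated as an illustrative sketch rather than the talk's own formulation: demographic parity imposed at the representation level. For a representation $Z = r(X)$ of the data and a protected attribute $A \in \{0,1\}$, one may require
\[
\Pr\big[r(X) \in S \mid A = 0\big] \;=\; \Pr\big[r(X) \in S \mid A = 1\big] \quad \text{for every measurable set } S,
\]
i.e., the distribution of the representation carries no information about $A$. Any downstream predictor $h$ applied to such a representation then satisfies demographic parity, $\Pr[h(r(X)) = 1 \mid A = 0] = \Pr[h(r(X)) = 1 \mid A = 1]$, no matter which task it is trained for; the tension the talk examines is how much accuracy such constraints permit on downstream tasks about which only limited information is available.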
