
Towards Environment-Invariant Representation Learning for Robust Task Transfer
Benjamin Eyre · Richard Zemel · Elliot Creager

To train a classification model that is robust to distribution shifts upon deployment, auxiliary labels indicating the various "environments" of data collection can be leveraged to mitigate reliance on environment-specific features. This paper investigates how to evaluate whether a model has formed environment-invariant representations, and proposes an objective that encourages learning such representations, as opposed to merely an invariant classifier. We also introduce a novel paradigm for evaluating environment-invariant performance, which determines whether learned representations can robustly transfer to a new task.
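The abstract does not specify the proposed objective, but one common way to encourage environment-invariant representations (rather than just an invariant classifier) is to penalize statistical differences between representations across environments. The sketch below is purely illustrative and is not the authors' method: it implements a simple mean-alignment penalty, where `representation_alignment_penalty` and all variable names are hypothetical.

```python
import numpy as np

def representation_alignment_penalty(features, env_ids):
    """Illustrative penalty that grows as per-environment mean
    representations diverge (not the paper's actual objective).

    features: (n, d) array of learned representations.
    env_ids:  (n,) array of integer environment labels.
    Returns the average squared distance between each environment's
    mean representation and the mean across environments.
    """
    envs = np.unique(env_ids)
    # Per-environment mean representation, stacked into (num_envs, d).
    means = np.stack([features[env_ids == e].mean(axis=0) for e in envs])
    grand_mean = means.mean(axis=0)
    # Average squared Euclidean distance of each env mean from the grand mean.
    return float(((means - grand_mean) ** 2).sum(axis=1).mean())

rng = np.random.default_rng(0)
x = rng.normal(size=(100, 4))
env = np.array([0] * 100 + [1] * 100)

# Identical statistics in both environments -> zero penalty.
same = representation_alignment_penalty(np.vstack([x, x]), env)

# A constant shift in environment 1 -> positive penalty.
shifted = representation_alignment_penalty(np.vstack([x, x + 2.0]), env)
```

In practice such a penalty would be added to the task loss so that the encoder is trained to produce features whose environment-conditional statistics match; richer variants align higher-order statistics or full distributions rather than only the means.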

#### Author Information

##### Benjamin Eyre (University of Toronto, Vector Institute)

I am a master's student at the University of Toronto, where I am fortunate to be supervised by Professors Richard Zemel and Vardan Papyan. I am interested in researching techniques for creating learned representations that are robust, explainable, and fair, as well as the training dynamics at play when producing these representations.