Updatable Machine Learning
Ayush Sekhari · Gautam Kamath · Jayadev Acharya

Sat Jul 23 05:55 AM -- 02:30 PM (PDT) @ Ballroom 2
Event URL: https://upml2022.github.io/

In modern ML domains, state-of-the-art performance is attained by highly overparameterized models that are expensive to train, costing weeks of time and millions of dollars. At the same time, after deploying a model, the learner may discover issues such as leakage of private data or vulnerability to adversarial examples. The learner may also wish to impose additional constraints post-deployment, for example, to ensure fairness across different subgroups. Retraining the model from scratch to incorporate such additional desiderata would be expensive. Consequently, one would instead prefer to update the model, which can yield significant savings in time, computation, and memory compared to retraining from scratch. Instances of this principle in action include the emerging field of machine unlearning and the celebrated paradigm of fine-tuning pretrained models. The goal of our workshop is to provide a platform that stimulates discussion about both the state of the art in updatable ML and future challenges in the field.

Author Information

Ayush Sekhari (Cornell University)
Gautam Kamath (University of Waterloo)
Jayadev Acharya (Cornell University)