In modern ML domains, state-of-the-art performance is attained by highly overparameterized models that are expensive to train, costing weeks of time and millions of dollars. At the same time, after deploying a model, the learner may discover issues such as leakage of private data or vulnerability to adversarial examples. The learner may also wish to impose additional constraints post-deployment, for example, to ensure fairness for different subgroups. Retraining the model from scratch to incorporate such additional desiderata would be expensive. Instead, one would prefer to update the model, which can yield significant savings in time, computation, and memory over retraining from scratch. Instances of this principle in action include the emerging field of machine unlearning and the celebrated paradigm of fine-tuning pretrained models. The goal of our workshop is to provide a platform to stimulate discussion about both the state of the art in updatable ML and future challenges in the field.
Sat 5:55 a.m. - 2:30 p.m. | Please visit the workshop website for the full program.
Sat 5:55 a.m. - 6:00 a.m. | Opening Remarks (Talk)
Sat 6:00 a.m. - 6:30 a.m. | Or Zamir: Planting undetectable backdoors in Machine Learning models (Invited Speaker)
Sat 6:30 a.m. - 7:00 a.m. | Aaron Roth: An Algorithmic Framework for Bias Bounties (Invited Speaker)
Sat 7:00 a.m. - 7:30 a.m. | Short Break
Sat 7:30 a.m. - 8:30 a.m. | Spotlight talks 1 (Short talks)
- From Adaptive Query Release to Machine Unlearning. Enayat Ullah, Raman Arora
- Geometric Alignment Improves Fully Test Time Adaptation. Kowshik Thopalli, Pavan K. Turaga, Jayaraman J. Thiagarajan
- Modeling the Right to Be Forgotten. Aloni Cohen, Adam Smith, Marika Swanberg, Prashant Nalini Vasudevan
- Revisiting the Updates of a Pre-trained Model for Few-shot Learning. Yujin Kim, Jaehoon Oh, Sungnyun Kim, Se-Young Yun
- Super Seeds: extreme model compression by trading off storage with computation. Nayoung Lee, Shashank Rajput, Jy-yong Sohn, Hongyi Wang, Alliot Nagle, Eric Xing, Kangwook Lee, Dimitris Papailiopoulos (*: equal contribution)
- Beyond Tabula Rasa: Reincarnating Reinforcement Learning. Rishabh Agarwal, Max Schwarzer, Pablo Samuel Castro, Aaron Courville, Marc G Bellemare
Sat 8:30 a.m. - 9:00 a.m. | Chelsea Finn: Adapting Deep Networks to Distribution Shift with Minimal Assumptions (Invited Speaker)
Sat 9:00 a.m. - 10:30 a.m. | Lunch Break (lunch on your own)
Sat 10:30 a.m. - 11:00 a.m. | Nicolas Papernot: What does it mean to unlearn? (Invited Speaker)
Sat 11:00 a.m. - 11:30 a.m. | Zico Kolter: Test-time adaptation via the convex conjugate (Invited Speaker)
Sat 11:30 a.m. - 12:00 p.m. | Spotlight talks 2 (Short talks)
- Simulating Bandit Learning from User Feedback for Extractive Question Answering. Ge Gao, Eunsol Choi, Yoav Artzi
- How Adaptive are Adaptive Test-time Defenses? Francesco Croce, Sven Gowal, Thomas Brunner, Evan Shelhamer, Matthias Hein, Ali Taylan Cemgil
- Comparing Model and Input Updates for Test-Time Adaptation to Corruption. Jin Gao, Jialing Zhang, Xihui Liu, Trevor Darrell, Evan Shelhamer, Dequan Wang
Sat 12:00 p.m. - 12:30 p.m. | Break
Sat 12:30 p.m. - 2:30 p.m. | Poster session