In real-world scenarios, machine learning systems may continually experience distributional shifts caused by many different factors in the test environment, which makes their predictions unreliable. For this reason, it is important to learn a model that can robustly adapt to its environment in an online manner. In this work, we propose to meta-learn how to guide unsupervised online adaptation by taking the uncertainty of the predictions into account. Typically, all unlabeled test samples are incorporated equally during online test-time adaptation; however, highly uncertain samples can negatively affect adaptation performance. We therefore enable the model to adaptively learn from test samples by quantifying their uncertainty during test-time online adaptation. We experimentally show that our uncertainty-guided online adaptation improves robustness and adaptation performance at test time on image classification tasks under distributional shift.
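The abstract does not spell out the adaptation objective, so the following is only a minimal sketch of the general idea of uncertainty-guided test-time adaptation: it assumes predictive entropy as the uncertainty measure and a simple exponential down-weighting of uncertain samples within an unsupervised entropy-minimization update. The function and parameter names (e.g. `uncertainty_weighted_adaptation_step`, `entropy_threshold`) are hypothetical and do not correspond to the authors' meta-learned guidance mechanism.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


def prediction_entropy(logits: torch.Tensor) -> torch.Tensor:
    """Per-sample entropy of the softmax predictions (higher = more uncertain)."""
    probs = F.softmax(logits, dim=-1)
    return -(probs * F.log_softmax(logits, dim=-1)).sum(dim=-1)


def uncertainty_weighted_adaptation_step(model: nn.Module,
                                         optimizer: torch.optim.Optimizer,
                                         x: torch.Tensor,
                                         entropy_threshold: float = 2.0) -> torch.Tensor:
    """One online test-time adaptation step on an unlabeled test batch `x`.

    Samples are weighted by their predictive uncertainty: confident (low-entropy)
    samples contribute more to the update, highly uncertain ones are down-weighted.
    """
    logits = model(x)
    entropy = prediction_entropy(logits)

    # Assumed weighting rule: exponentially down-weight samples whose entropy
    # exceeds the threshold; weights are detached so they act as fixed coefficients.
    weights = torch.exp(-(entropy - entropy_threshold).clamp(min=0.0)).detach()

    # Unsupervised objective: uncertainty-weighted entropy minimization.
    loss = (weights * entropy).sum() / weights.sum().clamp(min=1e-8)

    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return logits.detach()  # predictions returned for the current test batch
```

In practice, online test-time adaptation methods of this kind often restrict the optimizer to a small subset of parameters (e.g. normalization-layer affine parameters) to keep adaptation stable; that choice is a common convention, not something stated in this abstract.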
Author Information
Kyubyung Chae (Seoul National University)
Taesup Kim (Seoul National University)
More from the Same Authors
- 2022: Adaptive Interest for Emphatic Reinforcement Learning
  Martin Klissarov · Rasool Fakoor · Jonas Mueller · Kavosh Asadi · Taesup Kim · Alex Smola
- 2022: Efficient Task Adaptation by Mixing Discovered Skills
  Eunseok Yang · Jungsub Rhim · Taesup Kim
- 2023: UOTA: Unsupervised Open-Set Task Adaptation Using a Vision-Language Foundation Model
  Youngjo Min · Kwangrok Ryoo · Bumsoo Kim · Taesup Kim
- 2023 Poster: Flexible Model Aggregation for Quantile Regression
  Rasool Fakoor · Taesup Kim · Jonas Mueller · Alexander Smola · Ryan Tibshirani