

Poster in Workshop: Dynamic Neural Networks

Back to the Source: Test-Time Diffusion-Driven Adaptation

Jin Gao · Jialing Zhang · Xihui Liu · Trevor Darrell · Evan Shelhamer · Dequan Wang


Abstract:

Test-time adaptation harnesses test inputs to improve the accuracy of a model trained on source data when tested on shifted target data. Existing methods update the source model by (re-)training on each target domain. While effective, re-training is sensitive to the amount and order of the data and the hyperparameters for optimization. We instead update the target data, by projecting all test inputs toward the source domain with a generative diffusion model. Our diffusion-driven adaptation method, DDA, shares its models for classification and generation across all domains. Both models are trained on the source domain, then fixed during testing. We augment diffusion with image guidance and self-ensembling to automatically decide how much to adapt. Input adaptation by DDA is more robust than prior model adaptation approaches across a variety of corruptions, architectures, and data regimes on the ImageNet-C benchmark. With its input-wise updates, DDA succeeds where model adaptation degrades on too little data (small batches), on dependent data (non-random order), or on mixed data (multiple corruptions).
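To make the input-adaptation idea concrete, here is a minimal sketch of the two pieces the abstract describes: partially diffusing a shifted test input, denoising it back with a source-trained diffusion model under a simple pull toward the observed input, and then aggregating predictions on the original and adapted images. This is not the released DDA implementation; the function names, the eps-prediction interface, the linear image guidance (DDA uses low-frequency guidance), and the confidence-based ensembling rule are all assumptions for illustration.

```python
import torch
import torch.nn.functional as F


@torch.no_grad()
def adapt_input(x, denoiser, alphas_cumprod, t_start=500, guidance=0.1):
    """Project a (possibly corrupted) test input toward the source domain.

    x: test images, shape (B, C, H, W), scaled to [-1, 1]
    denoiser: hypothetical eps-prediction network eps_theta(x_t, t) trained on source data
    alphas_cumprod: cumulative noise schedule, shape (T,)
    t_start: how far to diffuse forward before denoising back
    guidance: strength of the pull toward the observed input (simplified stand-in
              for DDA's image guidance)
    """
    # Forward diffusion: add noise up to step t_start.
    a_bar = alphas_cumprod[t_start]
    x_t = a_bar.sqrt() * x + (1 - a_bar).sqrt() * torch.randn_like(x)

    # Reverse diffusion (deterministic DDIM-style steps) with image guidance.
    for t in range(t_start, 0, -1):
        a_bar_t = alphas_cumprod[t]
        eps = denoiser(x_t, torch.full((x.size(0),), t, device=x.device))
        # Predicted clean image at this step.
        x0_hat = (x_t - (1 - a_bar_t).sqrt() * eps) / a_bar_t.sqrt()
        # Nudge the estimate toward the observed input so content is kept
        # while the corruption is projected away.
        x0_hat = x0_hat + guidance * (x - x0_hat)
        # Step to t-1 using the predicted clean image and noise.
        a_bar_prev = alphas_cumprod[t - 1]
        x_t = a_bar_prev.sqrt() * x0_hat + (1 - a_bar_prev).sqrt() * eps
    return x_t


@torch.no_grad()
def self_ensemble_predict(classifier, x, x_adapted):
    """Aggregate predictions on the original and adapted inputs.

    One plausible reading of self-ensembling: average the two softmax outputs
    when the adapted input is at least as confident, otherwise keep the
    original prediction.
    """
    p_orig = F.softmax(classifier(x), dim=-1)
    p_adapt = F.softmax(classifier(x_adapted), dim=-1)
    use_adapted = p_adapt.max(dim=-1).values >= p_orig.max(dim=-1).values
    return torch.where(use_adapted.unsqueeze(-1), (p_orig + p_adapt) / 2, p_orig)
```

Because both the classifier and the denoiser stay frozen and each test image is adapted independently, this style of update is insensitive to batch size, data order, and the mixture of corruptions, which is the property the abstract highlights.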
