Poster
in
Workshop: The Second Workshop on Spurious Correlations, Invariance and Stability

Pruning for Better Domain Generalizability

Xinglong Sun


Abstract:

In this paper, we investigate whether pruning can serve as a reliable method to boost the generalization ability of a model. We find that existing pruning methods such as L2 pruning already offer a small improvement in target-domain performance. We further propose a novel pruning scoring method, called DSS, designed not to maintain source accuracy as typical pruning work does, but to directly enhance the robustness of the model. We conduct empirical experiments to validate our method and demonstrate that it can even be combined with state-of-the-art generalization work such as MIRO (Cha et al., 2022) to further boost performance. On MNIST to MNIST-M, we improve baseline performance by over 5 points by introducing 60% channel sparsity into the model. On the DomainBed benchmark with state-of-the-art MIRO, we further boost its performance by 1 point by introducing only 10% sparsity into the model.
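The abstract mentions the L2 pruning baseline and channel sparsity. As an illustration only (the paper's actual DSS scoring is not described here, and the function names below are ours), a minimal sketch of L2-norm channel scoring and selection at a given sparsity level:

```python
import math

def l2_channel_scores(weights):
    """Score each output channel by the L2 norm of its weights.

    `weights` is a list of per-channel weight lists, a simplified
    stand-in for a conv layer's [out_channels, ...] weight tensor.
    """
    return [math.sqrt(sum(w * w for w in ws)) for ws in weights]

def channels_to_prune(weights, sparsity):
    """Return indices of the lowest-scoring channels at the given sparsity."""
    scores = l2_channel_scores(weights)
    k = int(len(scores) * sparsity)  # number of channels to remove
    order = sorted(range(len(scores)), key=lambda i: scores[i])
    return sorted(order[:k])

# Example: 5 channels at 60% sparsity -> the 3 smallest-norm channels.
layer = [[0.1, 0.1], [2.0, 1.0], [0.0, 0.05], [1.5, 1.5], [0.2, 0.3]]
print(channels_to_prune(layer, 0.6))  # -> [0, 2, 4]
```

The paper's contribution is replacing this source-accuracy-preserving score with one aimed at target-domain robustness; the selection machinery above would stay the same.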
