Poster in Workshop: The Second Workshop on Spurious Correlations, Invariance and Stability
Tackling Shortcut Learning in Deep Neural Networks: An Iterative Approach with Interpretable Models
Shantanu Ghosh · Ke Yu · Forough Arabshahi · Kayhan Batmanghelich
Abstract:
We use concept-based interpretable models to mitigate shortcut learning; existing shortcut-mitigation methods lack interpretability. Beginning with a Blackbox (BB), we iteratively carve out a mixture of interpretable experts (MoIE) and a residual network. Each expert explains a subset of the data using First-Order Logic (FOL). When explaining a sample, the FOL from the MoIE derived from the biased BB detects the shortcut effectively. Fine-tuning the BB with Metadata Normalization (MDN) eliminates the shortcut, and the FOLs from the MoIE derived from the fine-tuned BB verify its elimination. Our experiments show that MoIE eliminates shortcuts effectively without hurting the accuracy of the original BB.
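The iterative carve-out described above can be sketched roughly as follows. This is a minimal, hypothetical illustration (the function and field names are ours, not the authors' API): in the real method, a neural selector routes each sample to an expert or the residual network, and experts are trained models whose decisions are summarized as FOL rules over concepts. Here, each "expert" is simply a conjunctive FOL-style rule over boolean concepts, and each iteration carves off the samples that rule explains correctly, leaving the rest to the residual.

```python
# Hypothetical sketch of the iterative MoIE carve-out loop.
# Names ("carve_moie", "fol_rule_expert", the concept fields) are illustrative
# assumptions, not the paper's actual implementation.

def fol_rule_expert(concepts, rule):
    """An 'expert': predicts True iff every concept named in the rule is active."""
    return all(concepts[c] for c in rule)

def carve_moie(samples, candidate_rules):
    """Iteratively carve experts out of the data; leftovers go to the residual."""
    experts, residual = [], list(samples)
    for rule in candidate_rules:              # one carving iteration per candidate rule
        covered = [s for s in residual
                   if fol_rule_expert(s["concepts"], rule) == s["label"]]
        if covered:                           # keep an expert only if it explains something
            experts.append({"rule": rule, "covers": covered})
            residual = [s for s in residual if s not in covered]
    return experts, residual

# Toy data: boolean concept vectors with binary labels (e.g., "is it a zebra?").
samples = [
    {"concepts": {"stripes": True,  "water": False}, "label": True},
    {"concepts": {"stripes": True,  "water": True},  "label": True},
    {"concepts": {"stripes": False, "water": True},  "label": False},
    {"concepts": {"stripes": False, "water": False}, "label": False},
]
experts, residual = carve_moie(samples, candidate_rules=[("stripes",)])
print(len(experts), len(residual))
```

In this toy case the single rule explains every sample, so the residual is empty after one iteration. In the paper's setting, a spurious concept appearing in the carved FOL rules (e.g., a background attribute rather than a class-relevant one) is what exposes the shortcut in the biased BB.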