

Poster in Workshop: The Second Workshop on Spurious Correlations, Invariance and Stability

Exploring new ways: Enforcing representational dissimilarity to learn new features and reduce error consistency

Tassilo Wald · Constantin Ulrich · Fabian Isensee · David Zimmerer · Gregor Koehler · Michael Baumgartner · Klaus Maier-Hein


Abstract:

Independently trained machine learning models tend to learn similar features. Given an ensemble of independently trained models, this results in correlated predictions and common failure modes. Previous attempts focusing on the decorrelation of output predictions or logits have yielded mixed results, particularly due to the reduction in model accuracy caused by conflicting optimization objectives. In this paper we propose the novel idea of utilizing methods from the representational similarity field to promote dissimilarity during training, instead of merely measuring the similarity of trained models. To this end, we promote intermediate representations at different depths to be dissimilar between architectures, with the goal of learning robust ensembles with disjoint failure modes. We show that highly dissimilar intermediate representations result in less correlated output predictions and slightly lower error consistency, resulting in higher ensemble accuracy. With this, we shed a first light on the connection between intermediate representations and their impact on the output representations.
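The abstract describes the approach only at a high level. As an illustrative sketch (not the authors' implementation), the snippet below shows how a dissimilarity penalty on paired intermediate representations could be attached to a joint training loss. It assumes linear CKA as the similarity measure and a scalar weight `lam`; the choice of measure, the layer pairing, and the weighting are all assumptions, since the abstract does not specify them.

```python
import torch

def linear_cka(x: torch.Tensor, y: torch.Tensor) -> torch.Tensor:
    """Linear CKA between two (batch, features) activation matrices.

    Returns a scalar in [0, 1]: 1 means the representations are
    identical up to rotation/scaling, 0 means maximally dissimilar.
    """
    # Center each feature dimension over the batch.
    x = x - x.mean(dim=0, keepdim=True)
    y = y - y.mean(dim=0, keepdim=True)
    # CKA(X, Y) = ||X^T Y||_F^2 / (||X^T X||_F * ||Y^T Y||_F)
    xty = (x.T @ y).pow(2).sum()           # ||X^T Y||_F^2
    xtx = (x.T @ x).pow(2).sum().sqrt()    # ||X^T X||_F
    yty = (y.T @ y).pow(2).sum().sqrt()    # ||Y^T Y||_F
    return xty / (xtx * yty + 1e-12)

def dissimilarity_loss(feats_a, feats_b, lam=0.1):
    """Penalize similarity of paired intermediate representations.

    feats_a, feats_b: lists of activation tensors collected at
    matching depths of two models (e.g., via forward hooks).
    `lam` is a hypothetical weight, not taken from the paper.
    """
    return lam * sum(linear_cka(fa.flatten(1), fb.flatten(1))
                     for fa, fb in zip(feats_a, feats_b))

# Hypothetical joint training objective: each model keeps its own
# task loss, and the CKA penalty pushes their features apart.
# loss = ce_loss_a + ce_loss_b + dissimilarity_loss(feats_a, feats_b)
```

Minimizing the CKA term alongside the two task losses is what creates the trade-off the abstract alludes to: the penalty pushes the networks toward disjoint features, while the cross-entropy terms keep each network accurate on its own.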
