

Spotlight

What's in the Box? Exploring the Inner Life of Neural Networks with Robust Rules

Jonas Fischer · Anna Olah · Jilles Vreeken

[ Livestream: Visit Deep Learning 1 ] [ Paper ]

Abstract:

We propose a novel method for exploring how neurons within neural networks interact. In particular, we consider the activation values of a network for given data, and propose to mine noise-robust rules of the form X → Y, where X and Y are sets of neurons in different layers. We identify the best set of rules via the Minimum Description Length principle as the rules that together are most descriptive of the activation data. To learn good rule sets in practice, we propose the unsupervised ExplaiNN algorithm. Extensive evaluation shows that the patterns it discovers give clear insight into how networks perceive the world: they identify shared as well as class-specific traits, compositionality within the network, and locality in convolutional layers. Moreover, these patterns are not only easily interpretable, but also supercharge prototyping, as they identify which groups of neurons to consider in unison.
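To make the idea of rules over activation data concrete, here is a minimal illustrative sketch. It is not the ExplaiNN algorithm from the paper (which selects rule sets jointly via MDL); instead it shows the simpler building block the abstract describes: binarizing activations and finding rules x → y where the firing of a neuron in one layer predicts the firing of a neuron in a later layer. The function names, thresholds, and the support/confidence criteria are assumptions for illustration only.

```python
import numpy as np

def binarize(acts, threshold=0.0):
    """Binarize activations: a neuron 'fires' if its activation
    exceeds the threshold (hypothetical choice for illustration)."""
    return acts > threshold

def mine_pair_rules(layer_x, layer_y, min_support=0.2, min_conf=0.8):
    """Mine simple single-neuron rules x -> y between two layers.

    layer_x, layer_y: (n_samples, n_neurons) boolean activation matrices.
    Returns a list of (i, j, support, confidence) tuples, meaning that
    when neuron i in layer_x fires, neuron j in layer_y tends to fire.
    Note: the actual method mines *sets* of neurons and scores whole
    rule sets with MDL; this pairwise filter is only a toy stand-in.
    """
    n = layer_x.shape[0]
    rules = []
    for i in range(layer_x.shape[1]):
        fires_i = layer_x[:, i]
        supp_i = fires_i.sum()
        if supp_i / n < min_support:
            continue  # antecedent fires too rarely
        for j in range(layer_y.shape[1]):
            both = np.logical_and(fires_i, layer_y[:, j]).sum()
            conf = both / supp_i
            if both / n >= min_support and conf >= min_conf:
                rules.append((i, j, both / n, conf))
    return rules
```

A noise-robust, MDL-based approach differs from this sketch in that it tolerates samples where a rule's consequent neurons only partially fire, and it keeps a rule only when doing so shortens the overall description of the activation data.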
