Talk
Input Switched Affine Networks: An RNN Architecture Designed for Interpretability
Jakob Foerster · Justin Gilmer · Jan Chorowski · Jascha Sohl-Dickstein · David Sussillo

Mon Aug 07 06:42 PM -- 07:00 PM (PDT) @ Parkside 1

There exist many problem domains where the interpretability of neural network models is essential for deployment. Here we introduce a recurrent architecture composed of input-switched affine transformations: in other words, an RNN without any explicit nonlinearities but with input-dependent recurrent weights. This simple form allows the RNN to be analyzed via straightforward linear methods: we can exactly characterize the linear contribution of each input to the model predictions; we can use a change of basis to disentangle input, output, and computational hidden unit subspaces; and we can fully reverse-engineer the architecture's solution to a simple task. Despite this ease of interpretation, the input-switched affine network achieves reasonable performance on a text-modeling task and allows greater computational efficiency than networks with standard nonlinearities.
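The recurrence described in the abstract is simple enough to sketch directly. The following NumPy snippet is a minimal illustration, not the authors' implementation: parameter names and sizes are assumptions, and the decomposition function shows how the fully affine form lets the final hidden state be split exactly into per-input contributions, the property behind the interpretability claim.

```python
import numpy as np

# Minimal sketch of an input-switched affine network (ISAN) over a small
# vocabulary of discrete symbols. Each symbol s owns its own affine update
# (W[s], b[s]); the recurrence has no explicit nonlinearity:
#     h_t = W[x_t] @ h_{t-1} + b[x_t]
# All sizes and names below are illustrative assumptions, not from the paper.

rng = np.random.default_rng(0)
vocab_size, hidden_size, output_size = 5, 16, 5

W = rng.normal(scale=1.0 / np.sqrt(hidden_size),
               size=(vocab_size, hidden_size, hidden_size))
b = rng.normal(scale=0.1, size=(vocab_size, hidden_size))
W_out = rng.normal(scale=0.1, size=(output_size, hidden_size))  # linear readout


def run_isan(tokens, h0):
    """Run the switched-affine recurrence over a sequence of integer tokens."""
    h = h0
    for t in tokens:
        h = W[t] @ h + b[t]  # input-dependent affine update, no nonlinearity
    return h


def input_contributions(tokens, h0):
    """Exact per-input contributions to the final state.

    Because the network is affine, the final state decomposes exactly as
        h_T = S[0] @ W[x_0] @ h_0 + sum_s S[s] @ b[x_s],
    where S[s] is the product of update matrices applied after step s.
    """
    T = len(tokens)
    S = [np.eye(hidden_size) for _ in range(T)]  # S[s] = W[x_{T-1}] ... W[x_{s+1}]
    for s in range(T - 1, 0, -1):
        S[s - 1] = S[s] @ W[tokens[s]]
    parts = [S[s] @ b[tokens[s]] for s in range(T)]
    init_part = S[0] @ W[tokens[0]] @ h0
    return init_part, parts


tokens = [0, 3, 1, 4, 2]
h0 = np.zeros(hidden_size)
h_final = run_isan(tokens, h0)
init_part, parts = input_contributions(tokens, h0)
assert np.allclose(h_final, init_part + sum(parts))  # decomposition is exact
print(W_out @ h_final)  # linear readout of the final state
```

Each element of `parts` is the exact linear contribution of one input symbol to the final hidden state, which is what allows the per-input analysis described in the abstract.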

Author Information

Jakob Foerster (University of Oxford)
Justin Gilmer (Google Brain)
Jan Chorowski (Google Brain)
Jascha Sohl-Dickstein (Google Brain)
David Sussillo (Google Brain, Google Inc.)

Broadly speaking, I am interested in the intersection of deep learning and computational neuroscience. More narrowly, this decomposes into an interest in dynamics, intelligible machine learning, recurrent neural networks, and generative models.
