Oral
Learning to Explain: An Information-Theoretic Perspective on Model Interpretation
Jianbo Chen · Le Song · Martin Wainwright · Michael Jordan

Wed Jul 11 02:00 AM -- 02:20 AM (PDT) @ K1 + K2

We introduce instancewise feature selection as a methodology for model interpretation. Our method is based on learning a function to extract a subset of features that are most informative for each given example. This feature selector is trained to maximize the mutual information between the selected features and the response variable, where the conditional distribution of the response variable given the input is the model to be explained. We develop an efficient variational approximation to the mutual information, and show the effectiveness of our method on a variety of synthetic and real data sets using both quantitative metrics and human evaluation.
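As a rough illustration of the training objective, the sketch below shows one way such a feature selector could be trained: the explainer is optimized to maximize E[log q(Y | X_S)], a variational lower bound on the mutual information I(X_S; Y) between the selected features X_S and the response Y, with q learned jointly. This is a minimal PyTorch sketch, not the authors' code; the network sizes, the subset size k, the temperature tau, and the Gumbel-softmax subset relaxation are assumptions filled in for the example.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

d, k, tau = 20, 5, 0.5  # number of features, subset size, relaxation temperature

# Explainer: maps an input to per-feature selection logits.
explainer = nn.Sequential(nn.Linear(d, 64), nn.ReLU(), nn.Linear(64, d))
# Variational approximator q(Y | X_S): predicts the response from masked inputs.
approximator = nn.Sequential(nn.Linear(d, 64), nn.ReLU(), nn.Linear(64, 2))
opt = torch.optim.Adam(
    list(explainer.parameters()) + list(approximator.parameters()), lr=1e-3
)

def sample_subset(logits, k, tau):
    # Relaxed k-hot sample: draw k Gumbel-softmax vectors over the features
    # and take the elementwise maximum, a differentiable surrogate for
    # selecting k features.
    gumbel = -torch.log(-torch.log(torch.rand(k, *logits.shape)))
    samples = F.softmax((logits.unsqueeze(0) + gumbel) / tau, dim=-1)
    return samples.max(dim=0).values

def train_step(x, p_y_given_x):
    # p_y_given_x: class probabilities from the black-box model being explained.
    mask = sample_subset(explainer(x), k, tau)
    log_q = F.log_softmax(approximator(x * mask), dim=-1)
    # Minimize the negative variational lower bound on I(X_S; Y).
    loss = -(p_y_given_x * log_q).sum(dim=-1).mean()
    opt.zero_grad()
    loss.backward()
    opt.step()
    return loss.item()

# Usage with synthetic data and a stand-in for the black-box model's output:
x = torch.randn(32, d)
with torch.no_grad():
    p = F.softmax(torch.randn(32, 2), dim=-1)
print(train_step(x, p))
```

At explanation time, the trained explainer alone suffices: for a given example, the k features with the largest selection logits are reported as the explanation.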

Author Information

Jianbo Chen (University of California, Berkeley)

I am a Ph.D. candidate in Statistics at the University of California, Berkeley, working with Michael I. Jordan. I am also a member of the Berkeley Artificial Intelligence Research (BAIR) Lab. My current research interests lie in machine learning, optimization, and high-dimensional statistics.

Le Song (Georgia Institute of Technology)
Martin Wainwright (University of California at Berkeley)
Michael Jordan (UC Berkeley)
