Workshop on Human Interpretability in Machine Learning (WHI)
Kush Varshney · Adrian Weller · Been Kim · Dmitry Malioutov

Thu Aug 10th 08:30 AM -- 05:30 PM @ C4.8
Event URL: https://sites.google.com/view/whi2017/home

This workshop will bring together researchers who study the interpretability of predictive models, develop interpretable machine learning algorithms, and develop methodology to interpret black-box machine learning models (e.g., post-hoc interpretations). This is a very exciting time to study interpretable machine learning, as the advances in large-scale optimization and Bayesian inference that have enabled the rise of black-box machine learning are now also starting to be exploited to develop principled approaches to large-scale interpretable machine learning. Participants in the workshop will exchange ideas on these and allied topics, including:

● Quantifying and axiomatizing interpretability
● Psychology of human concept learning
● Rule learning, symbolic regression, and case-based reasoning
● Generalized additive models, sparsity, and interpretability
● Visual analytics
● Interpretable unsupervised models (clustering, topic models, etc.)
● Interpretation of black-box models (including deep neural networks)
● Causality of predictive models
● Verifying, diagnosing and debugging machine learning systems
● Interpretability in reinforcement learning

Doctors, judges, business executives, and many other people are faced with making critical decisions that can have profound consequences. For example, doctors decide which treatment to administer to patients, judges decide on prison sentences for convicts, and business executives decide to enter new markets and acquire other companies. Such decisions are increasingly being supported by predictive models learned by algorithms from historical data.

The latest trend in machine learning is to use highly nonlinear complex systems such as deep neural networks, kernel methods, and large ensembles of diverse classifiers. While such approaches often produce impressive, state-of-the-art prediction accuracies, their black-box nature offers little comfort to decision makers. Therefore, for predictions to be adopted, trusted, and safely used by decision makers in mission-critical applications, it is imperative to develop machine learning methods that produce interpretable models with excellent predictive accuracy. Only in this way can machine learning methods have an impact on consequential real-world applications.

08:30 AM A. Dhurandhar, V. Iyengar, R. Luss, and K. Shanmugam, "A Formal Framework to Characterize Interpretability of Procedures" (Contributed Talk) Karthikeyan Shanmugam
08:45 AM A. Henelius, K. Puolamäki, and A. Ukkonen, "Interpreting Classifiers through Attribute Interactions in Datasets" (Contributed Talk) Andreas Henelius
09:00 AM S. Lundberg and S.-I. Lee, "Consistent Feature Attribution for Tree Ensembles" (Contributed Talk) Nao Hiranuma
09:15 AM Invited Talk: D. Sontag (Invited Talk)
10:30 AM S. Penkov and S. Ramamoorthy, "Program Induction to Interpret Transition Systems" (Contributed Talk) Svet Penkov
10:45 AM R. L. Phillips, K. H. Chang, and S. Friedler, "Interpretable Active Learning" (Contributed Talk) Richard L. Phillips
11:00 AM C. Rosenbaum, T. Gao, and T. Klinger, "e-QRAQ: A Multi-turn Reasoning Dataset and Simulator with Explanations" (Contributed Talk)
11:15 AM Invited Talk: T. Jebara, "Interpretability Through Causality" (Invited Talk)
02:00 PM W. Tansey, J. Thomason, and J. G. Scott, "Interpretable Low-Dimensional Regression via Data-Adaptive Smoothing" (Contributed Talk) Wesley Tansey
02:15 PM Invited Talk: P. W. Koh (Invited Talk)
03:30 PM I. Valera, M. F. Pradier, and Z. Ghahramani, "General Latent Feature Modeling for Data Exploration Tasks" (Contributed Talk)
03:45 PM A. Weller, "Challenges for Transparency" (Contributed Talk) Adrian Weller
04:00 PM ICML WHI 2017 Awards Ceremony (Awards Ceremony)
04:05 PM Panel Discussion: Human Interpretability in Machine Learning (Panel Discussion)

Author Information

Kush Varshney (IBM Research AI)
Adrian Weller (University of Cambridge, Alan Turing Institute)

Adrian Weller is a Senior Research Fellow in the Machine Learning Group at the University of Cambridge, a Faculty Fellow at the Alan Turing Institute, the UK national institute for data science, and an Executive Fellow at the Leverhulme Centre for the Future of Intelligence (CFI). He is very interested in all aspects of artificial intelligence, its commercial applications, and how it may be used to benefit society. At the CFI, he leads the project on Trust and Transparency. Previously, Adrian held senior roles in finance. He received a PhD in computer science from Columbia University and an undergraduate degree in mathematics from Trinity College, Cambridge.

Been Kim (Google Brain)
Dmitry Malioutov (The D. E. Shaw Group)

Dmitry Malioutov is a research staff member at the IBM T. J. Watson Research Center.
