

Workshop

Workshop on Human Interpretability in Machine Learning (WHI)

Kush Varshney · Adrian Weller · Been Kim · Dmitry Malioutov

C4.8

Wed 9 Aug, 3:30 p.m. PDT

This workshop will bring together researchers who study the interpretability of predictive models, develop interpretable machine learning algorithms, and develop methodology to interpret black-box machine learning models (e.g., post-hoc interpretations). This is a very exciting time to study interpretable machine learning, as the advances in large-scale optimization and Bayesian inference that have enabled the rise of black-box machine learning are now also starting to be exploited to develop principled approaches to large-scale interpretable machine learning. Participants in the workshop will exchange ideas on these and allied topics, including:

● Quantifying and axiomatizing interpretability
● Psychology of human concept learning
● Rule learning, symbolic regression and case-based reasoning
● Generalized additive models, sparsity and interpretability
● Visual analytics
● Interpretable unsupervised models (clustering, topic models, etc.)
● Interpretation of black-box models (including deep neural networks)
● Causality of predictive models
● Verifying, diagnosing and debugging machine learning systems
● Interpretability in reinforcement learning
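As an illustrative sketch (not drawn from the workshop itself) of how sparsity yields interpretability, one of the listed topics: an L1-penalized logistic regression drives most coefficients to exactly zero, leaving a short, human-readable list of the features that drive each prediction. The data and parameter choices below are hypothetical.

```python
# Hypothetical example: sparsity as a route to interpretability.
# An L1 penalty zeroes out most coefficients, so the fitted model
# can be read as "only these few features matter".
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 20))                # 20 candidate features
y = (X[:, 0] - 2 * X[:, 3] > 0).astype(int)   # only features 0 and 3 matter

model = LogisticRegression(penalty="l1", solver="liblinear", C=0.1)
model.fit(X, y)

nonzero = np.flatnonzero(model.coef_[0])
print("features kept by the sparse model:", nonzero)
```

In contrast to a deep network or a large ensemble, the surviving coefficients can be inspected and explained directly to a decision maker.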

Doctors, judges, business executives, and many other people are faced with making critical decisions that can have profound consequences. For example, doctors decide which treatment to administer to patients, judges decide on prison sentences for convicts, and business executives decide to enter new markets and acquire other companies. Such decisions are increasingly being supported by predictive models learned by algorithms from historical data.

The latest trend in machine learning is to use highly nonlinear complex systems such as deep neural networks, kernel methods, and large ensembles of diverse classifiers. While such approaches often produce impressive, state-of-the-art prediction accuracies, their black-box nature offers little comfort to decision makers. Therefore, in order for predictions to be adopted, trusted, and safely used by decision makers in mission-critical applications, it is imperative to develop machine learning methods that produce interpretable models with excellent predictive accuracy. It is in this way that machine learning methods can have impact on consequential real-world applications.
