

Workshop

XXAI: Extending Explainable AI Beyond Deep Models and Classifiers

Wojciech Samek · Andreas Holzinger · Ruth Fong · Taesup Moon · Klaus-Robert Müller

Keywords: Deep Learning · Interpretability · Explainable AI

Over the years, ML models have steadily grown in complexity, gaining predictive power often at the expense of interpretability. An active research area called explainable AI (XAI) has emerged with the goal of producing models that are both predictive and understandable. XAI has achieved important successes, such as robust heatmap-based explanations of DNN classifiers (a minimal sketch of such a heatmap follows the topic list below). From an application perspective, there is now a need to engage with new scenarios, such as explaining unsupervised and reinforcement learning, and to produce explanations that are optimally structured for the human recipient. In particular, our planned workshop will cover the following topics:
- Explaining beyond DNN classifiers: random forests, unsupervised learning, reinforcement learning
- Explaining beyond heatmaps: structured explanations, Q/A and dialog systems, human-in-the-loop
- Explaining beyond explaining: improving ML models and algorithms, verifying ML, getting insights
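
For concreteness, the following is a minimal sketch of what a gradient-based heatmap explanation of a classifier can look like. The toy model, the random input, and the plain-gradient saliency recipe are illustrative assumptions for this sketch, not a specific method proposed by the workshop organizers:

```python
import torch
import torch.nn as nn

# Illustrative sketch: a gradient-based saliency "heatmap" for a
# classifier, the kind of explanation the abstract above alludes to.
# The tiny model and random input are placeholders only.

model = nn.Sequential(nn.Linear(4, 8), nn.ReLU(), nn.Linear(8, 3))
model.eval()

x = torch.randn(1, 4, requires_grad=True)   # placeholder input
logits = model(x)
predicted = logits.argmax(dim=1).item()     # class being explained

# Backpropagate the predicted class score to the input; the absolute
# gradient marks the input features the prediction is most sensitive to.
logits[0, predicted].backward()
saliency = x.grad.abs().squeeze(0)
print(saliency)   # one relevance score per input feature
```

For image classifiers, the same idea applied per pixel yields the familiar heatmaps; more robust variants (e.g. layer-wise relevance propagation or integrated gradients) refine how the relevance is propagated.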

XAI has received exponentially growing interest in the research community, and awareness of the need to explain ML models has grown in similar proportion in industry and in the sciences. With the sizable XAI research community that has now formed, there is a key opportunity to push towards successful applications. Our hope is that the proposed XXAI workshop can accelerate this process, foster a more systematic use of XAI to improve models in applications, and also help identify in which ways current XAI methods need to be improved and what kind of theory of XAI is needed.


Schedule