
XXAI: Extending Explainable AI Beyond Deep Models and Classifiers
Wojciech Samek · Andreas Holzinger · Ruth Fong · Taesup Moon · Klaus-Robert Mueller

Thu Jul 16 11:45 PM -- 08:10 AM (PDT)
Event URL: http://interpretable-ml.org/icml2020workshop

Over the years, ML models have steadily grown in complexity, gaining predictivity often at the expense of interpretability. An active research area called explainable AI (or XAI) has emerged with the goal of producing models that are both predictive and understandable. XAI has achieved important successes, such as robust heatmap-based explanations of DNN classifiers. From an application perspective, there is now a need to engage with new scenarios, such as explaining unsupervised and reinforcement learning, and to produce explanations that are optimally structured for human users. In particular, our planned workshop will cover the following topics:
- Explaining beyond DNN classifiers: random forests, unsupervised learning, reinforcement learning
- Explaining beyond heatmaps: structured explanations, Q/A and dialog systems, human-in-the-loop
- Explaining beyond explaining: Improving ML models and algorithms, verifying ML, getting insights

XAI has received exponential interest in the research community, and awareness of the need to explain ML models has grown in similar proportions in industry and in the sciences. With the sizable XAI research community that has formed, there is now a key opportunity to push towards successful applications. Our hope is that our proposed XXAI workshop can accelerate this process, foster a more systematic use of XAI to improve models in applications, and also serve to better identify in which ways current XAI methods need to be improved and what kind of theory of XAI is needed.

Author Information

Wojciech Samek (Fraunhofer Heinrich Hertz Institute)
Andreas Holzinger (Medical University Graz)

Andreas Holzinger works on Human-Centered AI (HCAI), motivated by efforts to improve human health. Andreas pioneered interactive machine learning with the human-in-the-loop. For his achievements, he was elected a member of Academia Europaea in 2019. Since 2021 he has been a member of the European Laboratory for Learning and Intelligent Systems. Andreas is paving the way towards multimodal causability, promoting robust, interpretable, and trustworthy medical AI, and advocating for a synergistic approach that keeps the human in control of AI and aligns AI with human values, privacy, security, and safety.

Ruth Fong (University of Oxford)
Taesup Moon (Sungkyunkwan University)
Klaus-Robert Mueller (Technische Universität Berlin)
