Workshop
TabCBM: Concept-based Interpretable Neural Networks for Tabular Data
Mateo Espinosa Zarlenga · Zohreh Shams · Michael Nelson · Been Kim · Mateja Jamnik

Workshop
Why do universal adversarial attacks work on large language models?: Geometry might be the answer
Varshini Subhash · Anna Bialas · Siddharth Swaroop · Weiwei Pan · Finale Doshi-Velez

Workshop
HateXplain2.0: An Explainable Hate Speech Detection Framework Utilizing Subjective Projection from Contextual Knowledge Space to Disjoint Concept Space
Md Fahim · Md Shihab Shahriar · Sabik Irbaz · Syed Ishtiaque Ahmed · Mohammad Ruhul Amin

Workshop
Eye-tracking of clinician behaviour with explainable AI decision support: a high-fidelity simulation study
Myura Nagendran · Paul Festor · Matthieu Komorowski · Anthony Gordon · Aldo Faisal

Workshop
Describe, Explain, Plan and Select: Interactive Planning with LLMs Enables Open-World Multi-Task Agents
Zihao Wang · Shaofei Cai · Guanzhou Chen · Anji Liu · Xiaojian Ma · Yitao Liang

Workshop
Is Task-Agnostic Explainable AI a Myth?
Alicja Chaszczewicz

Workshop
A Unifying Framework to the Analysis of Interaction Methods using Synergy Functions
Daniel Lundstrom · Ali Ghafelebashi · Meisam Razaviyayn

Workshop
Are Good Explainers Secretly Human-in-the-Loop Active Learners?
Emma Thuong Nguyen · Abhishek Ghose

Workshop · Fri 14:10
Learning to Explain Hypergraph Neural Networks
Sepideh Maleki · Ehsan Hajiramezanali · Gabriele Scalia · Tommaso Biancalani · Kangway Chuang

Workshop · Fri 18:20
Invited talk: Dr. Judy Gichoya - Title: Harnessing the ability of AI models to detect hidden signals - how can we explain these findings?

Workshop
Verifiable Feature Attributions: A Bridge between Post Hoc Explainability and Inherent Interpretability
Usha Bhalla · Suraj Srinivas · Himabindu Lakkaraju