Workshop: Generating Global Factual and Counterfactual Explainer for Molecule under Domain Constraints
  Danqing Wang · Antonis Antoniades · Ambuj Singh · Lei Li
Workshop: Accurate, Explainable, and Private Models: Providing Recourse While Minimizing Training Data Leakage
Workshop: Why do universal adversarial attacks work on large language models?: Geometry might be the answer
Workshop: Scoring Black-Box Models for Adversarial Robustness
  Jian Vora · Pranay Reddy Samala
Workshop: Don't trust your eyes: on the (un)reliability of feature visualizations
Workshop: State trajectory abstraction and visualization method for explainability in reinforcement learning
  Yoshiki Takagi · Roderick Tabalba · Jason Leigh
Workshop (Fri 14:10): Invited talk: Rajesh Ranganath - Have we learned to explain?
Workshop (Sat 13:30): Himabindu Lakkaraju - Regulating Explainable AI: Technical Challenges and Opportunities
  Hima Lakkaraju
Workshop: Adversarial Attacks and Defenses in Explainable Artificial Intelligence: A Survey
  Hubert Baniecki · Przemyslaw Biecek
Workshop (Fri 14:10): Explaining Graph Neural Networks Using Interpretable Local Surrogates
  Farzaneh Heidari · Guillaume Rabusseau · Perouz Taslakian
Workshop: Toward Practical Automatic Speech Recognition and Post-Processing: a Call for Explainable Error Benchmark Guideline
  Seonmin Koo · Chanjun Park · Jinsung Kim · Jaehyung Seo · Sugyeong Eo · Hyeonseok Moon · Heuiseok Lim