Oral
Leveraging Sparse Linear Layers for Debuggable Deep Networks
Eric Wong · Shibani Santurkar · Aleksander Madry
We show how fitting sparse linear models over learned deep feature representations can lead to more debuggable neural networks. These networks remain highly accurate while also being more amenable to human interpretation, as we demonstrate quantitatively and via human experiments. We further illustrate how the resulting sparse explanations can help to identify spurious correlations, explain misclassifications, and diagnose model biases in vision and language tasks.
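The core idea in the abstract is to replace a network's dense final layer with a sparse linear "decision layer" fit over frozen deep features, so that each prediction depends on only a few interpretable features. A minimal sketch of that idea, using an L1-regularized logistic regression and synthetic stand-in features (the paper's actual backbone and regularization path are not specified here), might look like:

```python
# Hypothetical sketch: fit a sparse (L1-regularized) linear classifier on
# top of fixed feature representations. The features below are synthetic
# stand-ins for deep features phi(x) from a frozen backbone.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Synthetic "deep features": only a few dimensions carry the label signal.
n_samples, n_features, n_informative = 500, 64, 4
X = rng.normal(size=(n_samples, n_features))
w_true = np.zeros(n_features)
w_true[:n_informative] = 3.0
y = (X @ w_true + 0.1 * rng.normal(size=n_samples) > 0).astype(int)

# Sparse linear decision layer: a strong L1 penalty drives most weights to
# zero, so the class decision is explained by a handful of features.
clf = LogisticRegression(penalty="l1", solver="liblinear", C=0.05)
clf.fit(X, y)

n_nonzero = np.count_nonzero(clf.coef_)
print(f"accuracy: {clf.score(X, y):.2f}, "
      f"nonzero weights: {n_nonzero}/{n_features}")
```

Inspecting the surviving nonzero weights (and visualizing the corresponding features) is what makes the resulting model more debuggable: misclassifications and spurious correlations can be traced back to a small set of feature coefficients.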
Author Information
Eric Wong (MIT)
Shibani Santurkar (MIT)
Aleksander Madry (MIT)
Related Events (a corresponding poster, oral, or spotlight)

- 2021 Poster: Leveraging Sparse Linear Layers for Debuggable Deep Networks
  Tue. Jul 20th 04:00 -- 06:00 PM
More from the Same Authors

- 2022: A Game-Theoretic Perspective on Trust in Recommendation
  Sarah Cen · Andrew Ilyas · Aleksander Madry
- 2023 Poster: Rethinking Backdoor Attacks
  Alaa Khaddaj · Guillaume Leclerc · Aleksandar Makelov · Kristian Georgiev · Andrew Ilyas · Hadi Salman · Aleksander Madry
- 2023 Poster: Raising the Cost of Malicious AI-Powered Image Editing
  Hadi Salman · Alaa Khaddaj · Guillaume Leclerc · Andrew Ilyas · Aleksander Madry
- 2023 Poster: Whose Opinions Do Language Models Reflect?
  Shibani Santurkar · Cinoo Lee · Esin Durmus · Faisal Ladhak · Tatsunori Hashimoto · Percy Liang
- 2023 Poster: TRAK: Understanding Model Predictions at Scale
  Sung Min (Sam) Park · Kristian Georgiev · Andrew Ilyas · Guillaume Leclerc · Aleksander Madry
- 2023 Poster: ModelDiff: A Framework for Comparing Learning Algorithms
  Harshay Shah · Sung Min (Sam) Park · Andrew Ilyas · Aleksander Madry
- 2023 Oral: Raising the Cost of Malicious AI-Powered Image Editing
  Hadi Salman · Alaa Khaddaj · Guillaume Leclerc · Andrew Ilyas · Aleksander Madry
- 2023 Oral: Whose Opinions Do Language Models Reflect?
  Shibani Santurkar · Cinoo Lee · Esin Durmus · Faisal Ladhak · Tatsunori Hashimoto · Percy Liang
- 2023 Oral: TRAK: Understanding Model Predictions at Scale
  Sung Min (Sam) Park · Kristian Georgiev · Andrew Ilyas · Guillaume Leclerc · Aleksander Madry
- 2022 Workshop: Principles of Distribution Shift (PODS)
  Elan Rosenfeld · Saurabh Garg · Shibani Santurkar · Jamie Morgenstern · Hossein Mobahi · Zachary Lipton · Andrej Risteski
- 2022: Panel discussion
  Steffen Schneider · Aleksander Madry · Alexei Efros · Chelsea Finn · Soheil Feizi
- 2022: Dr. Aleksander Madry's Talk
  Aleksander Madry
- 2022: Invited Talk 1: Aleksander Mądry
  Aleksander Madry
- 2022 Workshop: New Frontiers in Adversarial Machine Learning
  Sijia Liu · Pin-Yu Chen · Dongxiao Zhu · Eric Wong · Kathrin Grosse · Hima Lakkaraju · Sanmi Koyejo
- 2022 Poster: Datamodels: Understanding Predictions with Data and Data with Predictions
  Andrew Ilyas · Sung Min (Sam) Park · Logan Engstrom · Guillaume Leclerc · Aleksander Madry
- 2022 Poster: Adversarially trained neural representations are already as robust as biological neural representations
  Chong Guo · Michael Lee · Guillaume Leclerc · Joel Dapello · Yug Rao · Aleksander Madry · James DiCarlo
- 2022 Oral: Adversarially trained neural representations are already as robust as biological neural representations
  Chong Guo · Michael Lee · Guillaume Leclerc · Joel Dapello · Yug Rao · Aleksander Madry · James DiCarlo
- 2022 Spotlight: Datamodels: Understanding Predictions with Data and Data with Predictions
  Andrew Ilyas · Sung Min (Sam) Park · Logan Engstrom · Guillaume Leclerc · Aleksander Madry
- 2022 Poster: Combining Diverse Feature Priors
  Saachi Jain · Dimitris Tsipras · Aleksander Madry
- 2022 Spotlight: Combining Diverse Feature Priors
  Saachi Jain · Dimitris Tsipras · Aleksander Madry
- 2021: Invited Talk #4
  Aleksander Madry
- 2021 Workshop: A Blessing in Disguise: The Prospects and Perils of Adversarial Machine Learning
  Hang Su · Yinpeng Dong · Tianyu Pang · Eric Wong · Zico Kolter · Shuo Feng · Bo Li · Henry Liu · Dan Hendrycks · Francesco Croce · Leslie Rice · Tian Tian
- 2020 Poster: From ImageNet to Image Classification: Contextualizing Progress on Benchmarks
  Dimitris Tsipras · Shibani Santurkar · Logan Engstrom · Andrew Ilyas · Aleksander Madry
- 2020 Poster: Identifying Statistical Bias in Dataset Replication
  Logan Engstrom · Andrew Ilyas · Shibani Santurkar · Dimitris Tsipras · Jacob Steinhardt · Aleksander Madry
- 2019 Workshop: Identifying and Understanding Deep Learning Phenomena
  Hanie Sedghi · Samy Bengio · Kenji Hata · Aleksander Madry · Ari Morcos · Behnam Neyshabur · Maithra Raghu · Ali Rahimi · Ludwig Schmidt · Ying Xiao
- 2019: Panel Discussion (Nati Srebro, Dan Roy, Chelsea Finn, Mikhail Belkin, Aleksander Mądry, Jason Lee)
  Nati Srebro · Daniel Roy · Chelsea Finn · Mikhail Belkin · Aleksander Madry · Jason Lee
- 2019: Keynote by Aleksander Mądry: Are All Features Created Equal?
  Aleksander Madry
- 2019 Poster: Exploring the Landscape of Spatial Robustness
  Logan Engstrom · Brandon Tran · Dimitris Tsipras · Ludwig Schmidt · Aleksander Madry
- 2019 Oral: Exploring the Landscape of Spatial Robustness
  Logan Engstrom · Brandon Tran · Dimitris Tsipras · Ludwig Schmidt · Aleksander Madry
- 2018 Poster: On the Limitations of First-Order Approximation in GAN Dynamics
  Jerry Li · Aleksander Madry · John Peebles · Ludwig Schmidt
- 2018 Oral: On the Limitations of First-Order Approximation in GAN Dynamics
  Jerry Li · Aleksander Madry · John Peebles · Ludwig Schmidt
- 2018 Poster: A Classification-Based Study of Covariate Shift in GAN Distributions
  Shibani Santurkar · Ludwig Schmidt · Aleksander Madry
- 2018 Oral: A Classification-Based Study of Covariate Shift in GAN Distributions
  Shibani Santurkar · Ludwig Schmidt · Aleksander Madry
- 2017 Poster: Deep Tensor Convolution on Multicores
  David Budden · Alexander Matveev · Shibani Santurkar · Shraman Ray Chaudhuri · Nir Shavit
- 2017 Talk: Deep Tensor Convolution on Multicores
  David Budden · Alexander Matveev · Shibani Santurkar · Shraman Ray Chaudhuri · Nir Shavit