A discriminatively trained neural net classifier can fit the training data perfectly if all information about its input other than class membership has been discarded prior to the output layer. Surprisingly, past research has discovered that some extraneous visual detail remains in the unnormalized logits. This finding is based on inversion techniques that map deep embeddings back to images. We explore this phenomenon further using a novel synthesis of methods, yielding a feedforward inversion model that produces remarkably high-fidelity reconstructions, qualitatively superior to those of past efforts. When applied to an adversarially robust classifier, the reconstructions contain sufficient local detail and global structure that they might be confused with the original image at a quick glance, and the object category can clearly be gleaned from the reconstruction. Our approach is based on BigGAN (Brock et al., 2019), with conditioning on logits instead of one-hot class labels. We use our reconstruction model as a tool for exploring the nature of representations, including the influence of model architecture and training objectives (specifically robust losses), the forms of invariance that networks achieve, representational differences between correctly and incorrectly classified images, and the effects of manipulating logits and images. We believe that our method can inspire future investigations into the nature of information flow in a neural net and can provide diagnostics for improving discriminative models. We provide pre-trained models and visualizations at https://sites.google.com/view/understanding-invariance/home.
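To make the logit-conditioning idea concrete, below is a minimal PyTorch sketch, not the authors' implementation: the class name LogitConditionedGenerator, the layer sizes, and the toy convolutional stack are illustrative assumptions. A real BigGAN generator is a deep residual network with class-conditional batch normalization; the only point illustrated here is the swap from a one-hot class embedding to a projection of the full logit vector as the conditioning signal.

# Minimal sketch (assumed, not the authors' code): a conditional generator
# whose conditioning input is a classifier's logit vector rather than a
# one-hot class embedding, in the spirit of a BigGAN-style conditional GAN.
import torch
import torch.nn as nn

class LogitConditionedGenerator(nn.Module):
    """Toy generator conditioned on logits; all sizes are illustrative."""
    def __init__(self, num_classes=1000, z_dim=128, cond_dim=128, img_ch=3):
        super().__init__()
        # A standard class-conditional BigGAN conditions on an embedding of a
        # one-hot label, e.g. nn.Embedding(num_classes, cond_dim). Here we
        # instead project the full logit vector, so any extraneous non-class
        # information carried by the logits can reach the generator.
        self.cond_proj = nn.Linear(num_classes, cond_dim)
        self.fc = nn.Linear(z_dim + cond_dim, 256 * 4 * 4)
        self.net = nn.Sequential(
            nn.ConvTranspose2d(256, 128, 4, stride=2, padding=1),  # 4x4 -> 8x8
            nn.BatchNorm2d(128), nn.ReLU(),
            nn.ConvTranspose2d(128, 64, 4, stride=2, padding=1),   # 8x8 -> 16x16
            nn.BatchNorm2d(64), nn.ReLU(),
            nn.ConvTranspose2d(64, img_ch, 4, stride=2, padding=1),  # 16x16 -> 32x32
            nn.Tanh(),  # images scaled to [-1, 1]
        )

    def forward(self, z, logits):
        cond = self.cond_proj(logits)            # logits -> conditioning vector
        h = self.fc(torch.cat([z, cond], dim=1))
        return self.net(h.view(-1, 256, 4, 4))

# Usage: condition on the frozen classifier's unnormalized logits.
g = LogitConditionedGenerator(num_classes=1000)
z = torch.randn(2, 128)
logits = torch.randn(2, 1000)  # stand-in for classifier(x)
recon = g(z, logits)           # (2, 3, 32, 32) reconstruction

Because the condition is the full logit vector rather than a single class index, whatever residual input information the logits encode remains available to the generator, which is what makes reconstructions more faithful than class-conditional samples.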
Author Information
Piotr Teterwak (Boston University)
Chiyuan Zhang (Google Research)
Dilip Krishnan (Google Research)
Michael Mozer (Google Research)
Related Events (a corresponding poster, oral, or spotlight)
- 2021 Poster: Understanding Invariance via Feedforward Inversion of Discriminatively Trained Classifiers
  Thu. Jul 22nd, 04:00 -- 06:00 AM, Room: Virtual
More from the Same Authors
- 2021: Randomized Response with Prior and Applications to Learning with Label Differential Privacy
  Badih Ghazi · Noah Golowich · Ravi Kumar · Pasin Manurangsi · Chiyuan Zhang
- 2022: Learning to induce causal structure
  Rosemary Nan Ke · Silvia Chiappa · Jane Wang · Jorg Bornschein · Anirudh Goyal · Melanie Rey · Matthew Botvinick · Theophane Weber · Michael Mozer · Danilo J. Rezende
- 2023: Counterfactual Memorization in Neural Language Models
  Chiyuan Zhang · Daphne Ippolito · Katherine Lee · Matthew Jagielski · Florian Tramer · Nicholas Carlini
- 2023: On the Reproducibility of Data Valuation under Learning Stochasticity
  Jiachen Wang · Feiyang Kang · Chiyuan Zhang · Ruoxi Jia · Prateek Mittal
- 2023 Poster: Can Neural Network Memorization Be Localized?
  Pratyush Maini · Michael Mozer · Hanie Sedghi · Zachary Lipton · Zico Kolter · Chiyuan Zhang
- 2023 Poster: Discrete Key-Value Bottleneck
  Frederik Träuble · Anirudh Goyal · Nasim Rahaman · Michael Mozer · Kenji Kawaguchi · Yoshua Bengio · Bernhard Schölkopf
- 2023 Poster: On User-Level Private Convex Optimization
  Badih Ghazi · Pritish Kamath · Ravi Kumar · Pasin Manurangsi · Raghu Meka · Chiyuan Zhang
- 2022 Workshop: Knowledge Retrieval and Language Models
  Maithra Raghu · Urvashi Khandelwal · Chiyuan Zhang · Matei Zaharia · Alexander Rush
- 2022 Poster: Head2Toe: Utilizing Intermediate Representations for Better Transfer Learning
  Utku Evci · Vincent Dumoulin · Hugo Larochelle · Michael Mozer
- 2022 Oral: Head2Toe: Utilizing Intermediate Representations for Better Transfer Learning
  Utku Evci · Vincent Dumoulin · Hugo Larochelle · Michael Mozer
- 2021 Poster: Characterizing Structural Regularities of Labeled Data in Overparameterized Models
  Ziheng Jiang · Chiyuan Zhang · Kunal Talwar · Michael Mozer
- 2021 Oral: Characterizing Structural Regularities of Labeled Data in Overparameterized Models
  Ziheng Jiang · Chiyuan Zhang · Kunal Talwar · Michael Mozer
- 2020 Poster: Learning to Combine Top-Down and Bottom-Up Signals in Recurrent Neural Networks with Attention over Modules
  Sarthak Mittal · Alex Lamb · Anirudh Goyal · Vikram Voleti · Murray Shanahan · Guillaume Lajoie · Michael Mozer · Yoshua Bengio
- 2019 Poster: State-Reification Networks: Improving Generalization by Modeling the Distribution of Hidden Representations
  Alex Lamb · Jonathan Binas · Anirudh Goyal · Sandeep Subramanian · Ioannis Mitliagkas · Yoshua Bengio · Michael Mozer
- 2019 Oral: State-Reification Networks: Improving Generalization by Modeling the Distribution of Hidden Representations
  Alex Lamb · Jonathan Binas · Anirudh Goyal · Sandeep Subramanian · Ioannis Mitliagkas · Yoshua Bengio · Michael Mozer
- 2018 Poster: Machine Theory of Mind
  Neil Rabinowitz · Frank Perbet · Francis Song · Chiyuan Zhang · S. M. Ali Eslami · Matthew Botvinick
- 2018 Oral: Machine Theory of Mind
  Neil Rabinowitz · Frank Perbet · Francis Song · Chiyuan Zhang · S. M. Ali Eslami · Matthew Botvinick