We observe that the mapping from an image's representation in one model to its representation in another can be learned surprisingly well with just a linear layer, even across diverse models. Building on this observation, we propose text-to-concept, where features from a fixed pretrained model are linearly aligned to the CLIP space, so that text embeddings from CLIP's text encoder become directly comparable to the aligned features. With text-to-concept, we convert fixed off-the-shelf vision encoders into surprisingly strong zero-shot classifiers for free, with accuracy at times even surpassing that of CLIP, despite these encoders being much smaller and trained on a small fraction of CLIP's data. We show other immediate use-cases of text-to-concept, like building concept bottleneck models with no concept supervision, diagnosing distribution shifts in terms of human concepts, and retrieving images satisfying a set of text-based constraints. Lastly, we demonstrate the feasibility of concept-to-text, where vectors in a model's feature space are decoded by first aligning them to the CLIP space and then feeding them to a GPT-based generative model. Our work suggests that existing deep models, despite diverse architectures and training, represent input samples relatively similarly, and that two-way communication across model representation spaces and to humans (through language) is viable.
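For concreteness, below is a minimal, illustrative sketch of the linear alignment and zero-shot classification idea described in the abstract, not the authors' released code. It assumes a fixed feature extractor `backbone` with `d_backbone`-dimensional outputs, the OpenAI `clip` package, and a data loader that yields each image preprocessed once for the backbone and once for CLIP; names such as `aligner`, `train_aligner`, and `zero_shot_logits` are placeholders.

import torch
import torch.nn.functional as F
import clip  # pip install git+https://github.com/openai/CLIP.git

device = "cuda" if torch.cuda.is_available() else "cpu"
clip_model, _ = clip.load("ViT-B/32", device=device)  # 512-d image/text embeddings

d_backbone = 2048  # assumption: e.g., ResNet-50 penultimate features; set to match your backbone
aligner = torch.nn.Linear(d_backbone, 512).to(device)  # linear map into CLIP's image space
optimizer = torch.optim.Adam(aligner.parameters(), lr=1e-3)

def train_aligner(backbone, loader, epochs=1):
    """Fit the linear map by regressing backbone features onto CLIP image embeddings."""
    for _ in range(epochs):
        for images_backbone, images_clip in loader:  # same images, two preprocessings
            with torch.no_grad():
                feats = backbone(images_backbone.to(device)).float()              # (B, d_backbone)
                target = clip_model.encode_image(images_clip.to(device)).float()  # (B, 512)
            loss = F.mse_loss(aligner(feats), target)
            optimizer.zero_grad()
            loss.backward()
            optimizer.step()

def zero_shot_logits(backbone, images, class_names):
    """Zero-shot classification: compare aligned backbone features to CLIP text embeddings."""
    prompts = clip.tokenize([f"a photo of a {c}" for c in class_names]).to(device)
    with torch.no_grad():
        text = F.normalize(clip_model.encode_text(prompts).float(), dim=-1)        # (C, 512)
        aligned = F.normalize(aligner(backbone(images.to(device)).float()), dim=-1)  # (B, 512)
    return aligned @ text.T  # (B, C) cosine similarities; argmax over C gives the prediction

In this sketch the backbone and CLIP stay frozen; only the single linear layer is trained, which is what makes the zero-shot capability essentially free for an off-the-shelf encoder.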
Author Information
Mazda Moayeri (University of Maryland)
Keivan Rezaei (University of Maryland)
Maziar Sanjabi (Meta AI)
Soheil Feizi (University of Maryland)
More from the Same Authors
- 2022 : Towards Better Understanding of Self-Supervised Representations »
  Neha Mukund Kalibhat · Kanika Narang · Hamed Firooz · Maziar Sanjabi · Soheil Feizi
- 2022 : BARACK: Partially Supervised Group Robustness With Guarantees »
  Nimit Sohoni · Maziar Sanjabi · Nicolas Ballas · Aditya Grover · Shaoliang Nie · Hamed Firooz · Christopher Re
- 2022 : Improved Certified Defenses against Data Poisoning with (Deterministic) Finite Aggregation »
  Wenxiao Wang · Alexander Levine · Soheil Feizi
- 2022 : Certifiably Robust Multi-Agent Reinforcement Learning against Adversarial Communication »
  Yanchao Sun · Ruijie Zheng · Parisa Hassanzadeh · Yongyuan Liang · Soheil Feizi · Sumitra Ganesh · Furong Huang
- 2023 Poster: Run-off Election: Improved Provable Defense against Data Poisoning Attacks »
  Keivan Rezaei · Kiarash Banihashem · Atoosa Malemir Chegini · Soheil Feizi
- 2023 Poster: Analyzing Privacy Leakage in Machine Learning via Multiple Hypothesis Testing: A Lesson From Fano »
  Chuan Guo · Alexandre Sablayrolles · Maziar Sanjabi
- 2023 Poster: Identifying Interpretable Subspaces in Image Representations »
  Neha Mukund Kalibhat · Shweta Bhardwaj · C. Bayan Bruss · Hamed Firooz · Maziar Sanjabi · Soheil Feizi
- 2022 : Panel discussion »
  Steffen Schneider · Aleksander Madry · Alexei Efros · Chelsea Finn · Soheil Feizi
- 2022 : Toward Efficient Robust Training against Union of Lp Threat Models »
  Gaurang Sriramanan · Maharshi Gor · Soheil Feizi
- 2022 Poster: Federated Learning with Partial Model Personalization »
  Krishna Pillutla · Kshitiz Malik · Abdel-rahman Mohamed · Michael Rabbat · Maziar Sanjabi · Lin Xiao
- 2022 Spotlight: Federated Learning with Partial Model Personalization »
  Krishna Pillutla · Kshitiz Malik · Abdel-rahman Mohamed · Michael Rabbat · Maziar Sanjabi · Lin Xiao
- 2022 Poster: Improved Certified Defenses against Data Poisoning with (Deterministic) Finite Aggregation »
  Wenxiao Wang · Alexander Levine · Soheil Feizi
- 2022 Poster: FOCUS: Familiar Objects in Common and Uncommon Settings »
  Priyatham Kattakinda · Soheil Feizi
- 2022 Spotlight: Improved Certified Defenses against Data Poisoning with (Deterministic) Finite Aggregation »
  Wenxiao Wang · Alexander Levine · Soheil Feizi
- 2022 Spotlight: FOCUS: Familiar Objects in Common and Uncommon Settings »
  Priyatham Kattakinda · Soheil Feizi
- 2022 Poster: UNIREX: A Unified Learning Framework for Language Model Rationale Extraction »
  Aaron Chan · Maziar Sanjabi · Lambert Mathias · Liang Tan · Shaoliang Nie · Xiaochang Peng · Xiang Ren · Hamed Firooz
- 2022 Spotlight: UNIREX: A Unified Learning Framework for Language Model Rationale Extraction »
  Aaron Chan · Maziar Sanjabi · Lambert Mathias · Liang Tan · Shaoliang Nie · Xiaochang Peng · Xiang Ren · Hamed Firooz
- 2021 : Invited Talk 6: Towards Understanding Foundations of Robust Learning »
  Soheil Feizi
- 2021 Poster: Improved, Deterministic Smoothing for L_1 Certified Robustness »
  Alexander Levine · Soheil Feizi
- 2021 Poster: Skew Orthogonal Convolutions »
  Sahil Singla · Soheil Feizi
- 2021 Spotlight: Skew Orthogonal Convolutions »
  Sahil Singla · Soheil Feizi
- 2021 Oral: Improved, Deterministic Smoothing for L_1 Certified Robustness »
  Alexander Levine · Soheil Feizi
- 2020 Poster: Curse of Dimensionality on Randomized Smoothing for Certifiable Robustness »
  Aounon Kumar · Alexander Levine · Tom Goldstein · Soheil Feizi
- 2020 Poster: Second-Order Provable Defenses against Adversarial Attacks »
  Sahil Singla · Soheil Feizi
- 2020 Poster: On Second-Order Group Influence Functions for Black-Box Predictions »
  Samyadeep Basu · Xuchen You · Soheil Feizi
- 2019 Poster: Understanding Impacts of High-Order Loss Approximations and Features in Deep Learning Interpretation »
  Sahil Singla · Eric Wallace · Shi Feng · Soheil Feizi
- 2019 Oral: Understanding Impacts of High-Order Loss Approximations and Features in Deep Learning Interpretation »
  Sahil Singla · Eric Wallace · Shi Feng · Soheil Feizi
- 2019 Poster: Entropic GANs meet VAEs: A Statistical Approach to Compute Sample Likelihoods in GANs »
  Yogesh Balaji · Hamed Hassani · Rama Chellappa · Soheil Feizi
- 2019 Oral: Entropic GANs meet VAEs: A Statistical Approach to Compute Sample Likelihoods in GANs »
  Yogesh Balaji · Hamed Hassani · Rama Chellappa · Soheil Feizi