Why do universal adversarial attacks work on large language models? Geometry might be the answer
Transformer-based large language models with emergent capabilities are becoming increasingly ubiquitous in society. However, understanding and interpreting their internal workings, particularly under adversarial attack, remains largely unsolved. Gradient-based universal adversarial attacks have been shown to be highly effective on large language models and potentially dangerous due to their input-agnostic nature. This work presents a novel geometric perspective explaining universal adversarial attacks on large language models. By attacking the 117M-parameter GPT-2 model, we find evidence indicating that universal adversarial triggers could be embedding vectors which merely approximate the semantic information in their adversarial training region. This hypothesis is supported by white-box model analysis comprising dimensionality reduction and similarity measurement of hidden representations. We believe this new geometric perspective on the underlying mechanism driving universal attacks could give deeper insight into the internal workings and failure modes of LLMs, thus enabling the mitigation of such attacks.
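As a rough sketch of the kind of white-box analysis the abstract describes, the code below extracts layer-wise hidden representations from the 117M-parameter GPT-2 for a clean prompt and a trigger-prefixed prompt, then compares them with cosine similarity and a 2-D PCA projection. The trigger string, prompts, and token-averaging choice are illustrative assumptions, not the authors' exact pipeline; a real universal trigger would come from gradient-based search.

```python
# Minimal sketch (assumed setup, not the paper's exact pipeline): compare
# GPT-2 hidden representations of a clean vs. trigger-prefixed prompt.
import torch
from sklearn.decomposition import PCA
from transformers import GPT2Model, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")  # 117M-parameter GPT-2
model = GPT2Model.from_pretrained("gpt2")
model.eval()

def hidden_states(text: str) -> torch.Tensor:
    """Return per-layer hidden representations, averaged over tokens."""
    ids = tokenizer(text, return_tensors="pt")
    with torch.no_grad():
        out = model(**ids, output_hidden_states=True)
    # out.hidden_states: tuple of 13 tensors (embeddings + 12 blocks),
    # each of shape (1, seq_len, 768); average over the token dimension.
    return torch.stack([h.mean(dim=1).squeeze(0) for h in out.hidden_states])

# Placeholder trigger; substitute one found by gradient-based search.
trigger = "static unread token salad"
clean = "The weather today is"
attacked = trigger + " " + clean

h_clean, h_adv = hidden_states(clean), hidden_states(attacked)

# Layer-wise cosine similarity between clean and trigger-prefixed runs.
sims = torch.nn.functional.cosine_similarity(h_clean, h_adv, dim=-1)
print("per-layer cosine similarity:", sims.tolist())

# 2-D PCA projection of all layer representations for visual inspection.
proj = PCA(n_components=2).fit_transform(torch.cat([h_clean, h_adv]).numpy())
print("PCA-projected shape:", proj.shape)  # (26, 2): 13 layers x 2 prompts
```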
Author Information
Varshini Subhash (Harvard University)
Anna Bialas (Harvard University)
Siddharth Swaroop (Harvard University)
Weiwei Pan (Harvard University)
Finale Doshi-Velez (Harvard University)

Finale Doshi-Velez is a Gordon McKay Professor in Computer Science at the Harvard Paulson School of Engineering and Applied Sciences. She earned her MSc from the University of Cambridge as a Marshall Scholar and her PhD from MIT, and completed her postdoc at Harvard Medical School. Her interests lie at the intersection of machine learning, healthcare, and interpretability. Selected additional honors: BECA recipient; AFOSR YIP and NSF CAREER recipient; Sloan Fellow; IEEE AI's 10 to Watch.
Related Events (a corresponding poster, oral, or spotlight)
- 2023: Why do universal adversarial attacks work on large language models? Geometry might be the answer
More from the Same Authors
- 2021: Promises and Pitfalls of Black-Box Concept Learning Models
  Anita Mahinpei · Justin Clark · Isaac Lage · Finale Doshi-Velez · Weiwei Pan
- 2021: Prediction-focused Mixture Models
  Abhishek Sharma · Sanjana Narayanan · Catherine Zeng · Finale Doshi-Velez
- 2021: Online structural kernel selection for mobile health
  Eura Shin · Predrag Klasnja · Susan Murphy · Finale Doshi-Velez
- 2021: Interpretable learning-to-defer for sequential decision-making
  Shalmali Joshi · Sonali Parbhoo · Finale Doshi-Velez
- 2021: On formalizing causal off-policy sequential decision-making
  Sonali Parbhoo · Shalmali Joshi · Finale Doshi-Velez
- 2022: Leveraging Factored Action Spaces for Efficient Offline Reinforcement Learning in Healthcare
  Shengpu Tang · Maggie Makar · Michael Sjoding · Finale Doshi-Velez · Jenna Wiens
- 2022: From Soft Trees to Hard Trees: Gains and Losses
  Xin Zeng · Jiayu Yao · Finale Doshi-Velez · Weiwei Pan
- 2022: Success of Uncertainty-Aware Deep Models Depends on Data Manifold Geometry
  Mark Penrod · Harrison Termotto · Varshini Reddy · Jiayu Yao · Finale Doshi-Velez · Weiwei Pan
- 2023: Implications of Gaussian process kernel mismatch for out-of-distribution data
  Beau Coker · Finale Doshi-Velez
- 2023: Inverse Transition Learning for Characterizing Near-Optimal Dynamics in Offline Reinforcement Learning
  Leo Benac · Sonali Parbhoo · Finale Doshi-Velez
- 2023: Discovering User Types: Characterization of User Traits by Task-Specific Behaviors in Reinforcement Learning
  Lars L. Ankile · Brian Ham · Kevin Mao · Eura Shin · Siddharth Swaroop · Finale Doshi-Velez · Weiwei Pan
- 2023: Adaptive interventions for both accuracy and time in AI-assisted human decision making
  Siddharth Swaroop · Zana Buçinca · Krzysztof Gajos · Finale Doshi-Velez
- 2023: SAP-sLDA: An Interpretable Interface for Exploring Unstructured Text
  Charumathi Badrinath · Weiwei Pan · Finale Doshi-Velez
- 2023: Signature Activation: A Sparse Signal View for Holistic Saliency
  Jose Tello Ayala · Akl Fahed · Weiwei Pan · Eugene Pomerantsev · Patrick Ellinor · Anthony Philippakis · Finale Doshi-Velez
- 2023: Implications of kernel mismatch for OOD data
  Beau Coker · Finale Doshi-Velez
- 2023: Soft prompting might be a bug, not a feature
  Luke Bailey · Gustaf Ahdritz · Anat Kleiman · Siddharth Swaroop · Finale Doshi-Velez · Weiwei Pan
- 2023: Memory Maps to Understand Models
  Dharmesh Tailor · Paul Chang · Siddharth Swaroop · Eric Nalisnick · Arno Solin · Emtiyaz Khan
- 2023: Bayesian Inverse Transition Learning for Offline Settings
  Leo Benac · Sonali Parbhoo · Finale Doshi-Velez
- 2023 Poster: The Unintended Consequences of Discount Regularization: Improving Regularization in Certainty Equivalence Reinforcement Learning
  Sarah Rathnam · Sonali Parbhoo · Weiwei Pan · Susan Murphy · Finale Doshi-Velez
- 2023 Poster: Mitigating the Effects of Non-Identifiability on Inference for Bayesian Neural Networks with Latent Variables
  Yaniv Yacoby · Weiwei Pan · Finale Doshi-Velez
- 2022: Responsible Decision-Making in Batch RL Settings
  Finale Doshi-Velez
- 2021: RL Explainability & Interpretability Panel
  Ofra Amir · Finale Doshi-Velez · Alan Fern · Zachary Lipton · Omer Gottesman · Niranjani Prasad
- 2021: Invited Talk 3: Interpretability in High Dimensions: Concept Bottlenecks and Beyond
  Finale Doshi-Velez
- 2021: Invited Oral 1: Q&A
  Siddharth Swaroop
- 2021 Poster: Benchmarks, Algorithms, and Metrics for Hierarchical Disentanglement
  Andrew Ross · Finale Doshi-Velez
- 2021 Oral: Benchmarks, Algorithms, and Metrics for Hierarchical Disentanglement
  Andrew Ross · Finale Doshi-Velez
- 2021 Poster: State Relevance for Off-Policy Evaluation
  Simon Shen · Yecheng Jason Ma · Omer Gottesman · Finale Doshi-Velez
- 2021 Spotlight: State Relevance for Off-Policy Evaluation
  Simon Shen · Yecheng Jason Ma · Omer Gottesman · Finale Doshi-Velez
- 2020: Contributed Talk: Continual Deep Learning by Functional Regularisation of Memorable Past
  Siddharth Swaroop
- 2020: Keynote #2: Finale Doshi-Velez
  Finale Doshi-Velez
- 2020 Poster: Interpretable Off-Policy Evaluation in Reinforcement Learning by Highlighting Influential Transitions
  Omer Gottesman · Joseph Futoma · Yao Liu · Sonali Parbhoo · Leo Celi · Emma Brunskill · Finale Doshi-Velez
- 2019 Poster: Combining parametric and nonparametric models for off-policy evaluation
  Omer Gottesman · Yao Liu · Scott Sussex · Emma Brunskill · Finale Doshi-Velez
- 2019 Oral: Combining parametric and nonparametric models for off-policy evaluation
  Omer Gottesman · Yao Liu · Scott Sussex · Emma Brunskill · Finale Doshi-Velez
- 2018 Poster: Decomposition of Uncertainty in Bayesian Deep Learning for Efficient and Risk-sensitive Learning
  Stefan Depeweg · Jose Miguel Hernandez-Lobato · Finale Doshi-Velez · Steffen Udluft
- 2018 Poster: Structured Variational Learning of Bayesian Neural Networks with Horseshoe Priors
  Soumya Ghosh · Jiayu Yao · Finale Doshi-Velez
- 2018 Oral: Structured Variational Learning of Bayesian Neural Networks with Horseshoe Priors
  Soumya Ghosh · Jiayu Yao · Finale Doshi-Velez
- 2018 Oral: Decomposition of Uncertainty in Bayesian Deep Learning for Efficient and Risk-sensitive Learning
  Stefan Depeweg · Jose Miguel Hernandez-Lobato · Finale Doshi-Velez · Steffen Udluft
- 2017 Tutorial: Interpretable Machine Learning
  Been Kim · Finale Doshi-Velez