

Search All 2023 Events
 

42 Results (Page 2 of 4)
Workshop
Unsupervised Adversarial Detection without Extra Model: Training Loss Should Change
Poster
Thu 13:30 UMD: Unsupervised Model Detection for X2X Backdoor Attacks
Zhen Xiang · Zidi Xiong · Bo Li
Workshop
Why do universal adversarial attacks work on large language models?: Geometry might be the answer
Workshop
Like Oil and Water: Group Robustness and Poisoning Defenses Don’t Mix
Workshop
Ignore the Law: The Legal Risks of Prompt Injection Attacks on Large Language Models
Ram Shankar Siva Kumar · Jonathon Penney
Workshop
Backdoor Attacks for In-Context Learning with Language Models
Workshop
Generative Autoencoders as Watermark Attackers: Analyses of Vulnerabilities and Threats
Xuandong Zhao · Kexun Zhang · Yu-Xiang Wang · Lei Li
Workshop
Introducing Vision into Large Language Models Expands Attack Surfaces and Failure Implications
Workshop
Transferable Adversarial Perturbations between Self-Supervised Speech Recognition Models
Raphaël Olivier · Hadi Abdullah · Bhiksha Raj
Workshop
Black Box Adversarial Prompting for Foundation Models
Workshop
Rethinking Label Poisoning for GNNs: Pitfalls and Attacks
Vijay Lingam · Mohammad Sadegh Akhondzadeh · Aleksandar Bojchevski