Workshop
Introducing Vision into Large Language Models Expands Attack Surfaces and Failure Implications

Workshop | Fri 17:00
Visual Adversarial Examples Jailbreak Aligned Large Language Models
Xiangyu Qi · Kaixuan Huang · Ashwinee Panda · Mengdi Wang · Prateek Mittal

Poster | Wed 17:00
Identification of the Adversary from a Single Adversarial Example
Minhao Cheng · Rui Min · Haochen Sun · Pin-Yu Chen

Workshop
On feasibility of intent obfuscating attacks
ZhaoBin Li · Patrick Shafto

Poster | Tue 17:00
Adversarial Example Does Good: Preventing Painting Imitation from Diffusion Models via Adversarial Examples
Chumeng Liang · Xiaoyu Wu · Yang Hua · Jiaru Zhang · Yiming Xue · Tao Song · Zhengui XUE · Ruhui Ma · Haibing Guan

Oral | Tue 20:38
Adversarial Example Does Good: Preventing Painting Imitation from Diffusion Models via Adversarial Examples
Chumeng Liang · Xiaoyu Wu · Yang Hua · Jiaru Zhang · Yiming Xue · Tao Song · Zhengui XUE · Ruhui Ma · Haibing Guan

Workshop | Fri 19:00
On the Relationship Between Data Manifolds and Adversarial Examples
Michael Geyer · Brian Bell · Amanda Fernandez · Juston Moore

Workshop | Fri 13:10
Evading Black-box Classifiers Without Breaking Eggs
Edoardo Debenedetti · Nicholas Carlini · Florian Tramer