

Search All 2023 Events

50 Results

Page 1 of 5
Workshop
When Can Linear Learners be Robust to Indiscriminate Poisoning Attacks?
Fnu Suya · Xiao Zhang · Yuan Tian · David Evans
Workshop
CertViT: Certified Robustness of Pre-Trained Vision Transformers
Kavya Gupta · Sagar Verma
Workshop
A physics-oriented method for attacking SAR images using salient regions
Poster
Tue 17:00 Understanding and Defending Patched-based Adversarial Attacks for Vision Transformer
Liang Liu · Yanan Guo · Youtao Zhang · Jun Yang
Workshop
Fri 13:10 Evading Black-box Classifiers Without Breaking Eggs
Edoardo Debenedetti · Nicholas Carlini · Florian Tramèr
Poster
Wed 17:00 Adversarial Parameter Attack on Deep Neural Networks
Lijia Yu · Yihan Wang · Xiao-Shan Gao
Workshop
Characterizing the Optimal 0-1 Loss for Multi-class Classification with a Test-time Attacker
Workshop
Like Oil and Water: Group Robustness and Poisoning Defenses Don’t Mix
Workshop
Shrink & Cert: Bi-level Optimization for Certified Robustness
Kavya Gupta · Sagar Verma
Workshop
Fri 18:20 One Pixel Adversarial Attacks via Sketched Programs
Tom Yuviler · Dana Drachsler-Cohen