Poster | Thu 13:30 | Surrogate Model Extension (SME): A Fast and Accurate Weight Update Attack on Federated Learning | Junyi Zhu · Ruicong Yao · Matthew B Blaschko
Workshop | Robust Semantic Segmentation: Strong Adversarial Attacks and Fast Training of Robust Models | Francesco Croce · Naman Singh · Matthias Hein
Workshop | Feature Partition Aggregation: A Fast Certified Defense Against a Union of ℓ0 Attacks
Workshop | When Can Linear Learners be Robust to Indiscriminate Poisoning Attacks? | Fnu Suya · Xiao Zhang · Yuan Tian · David Evans
Workshop | Physics-oriented adversarial attacks on SAR image target recognition | Jiahao Cui · Wang Guo · Run Shao · Tiandong Shi · Haifeng Li
Poster | Tue 17:00 | Are Diffusion Models Vulnerable to Membership Inference Attacks? | Jinhao Duan · Fei Kong · Shiqi Wang · Xiaoshuang Shi · Kaidi Xu
Workshop | Unsupervised Adversarial Detection without Extra Model: Training Loss Should Change | Chien Cheng Chyou · Hung-Ting Su · Winston Hsu
Workshop | Evading Black-box Classifiers Without Breaking Eggs
Poster | Thu 13:30 | Run-off Election: Improved Provable Defense against Data Poisoning Attacks | Keivan Rezaei · Kiarash Banihashem · Atoosa Malemir Chegini · Soheil Feizi
Workshop | Why do universal adversarial attacks work on large language models?: Geometry might be the answer | Varshini Subhash · Anna Bialas · Siddharth Swaroop · Weiwei Pan · Finale Doshi-Velez
Workshop | Creating a Bias-Free Dataset of Food Delivery App Reviews with Data Poisoning Attacks | Hyunmin Lee · SeungYoung Oh · JinHyun Han · Hyunggu Jung
Workshop | Black Box Adversarial Prompting for Foundation Models | Natalie Maus · Patrick Chao · Eric Wong · Jacob Gardner