Poster | Tue 17:00 | Exploring the Limits of Model-Targeted Indiscriminate Data Poisoning Attacks
Yiwei Lu · Gautam Kamath · Yaoliang Yu

Poster | Thu 16:30 | Poisoning Language Models During Instruction Tuning
Alexander Wan · Eric Wallace · Sheng Shen · Dan Klein

Poster | Thu 16:30 | LeadFL: Client Self-Defense against Model Poisoning in Federated Learning
Chaoyi Zhu · Stefanie Roos · Lydia Y. Chen

Poster | Wed 14:00 | Exploring Model Dynamics for Accumulative Poisoning Discovery
Jianing Zhu · Xiawei Guo · Jiangchao Yao · Chao Du · LI He · Shuo Yuan · Tongliang Liu · Liang Wang · Bo Han

Workshop | Teach GPT To Phish
Ashwinee Panda · Zhengming Zhang · Yaoqing Yang · Prateek Mittal

Workshop | Localizing Partial Model for Personalized Federated Learning
Heewon Park · Miru Kim · Minhae Kwon

Workshop | Robust Semantic Segmentation: Strong Adversarial Attacks and Fast Training of Robust Models

Workshop | A physics-oriented method for attacking SAR images using salient regions

Workshop | Establishing a Benchmark for Adversarial Robustness of Compressed Deep Learning Models after Pruning

Workshop | Transferable Adversarial Perturbations between Self-Supervised Speech Recognition Models

Workshop | When Can Linear Learners be Robust to Indiscriminate Poisoning Attacks?