Search All 2023 Events
 

13 Results

Page 1 of 2
Poster
Tue 17:00 Exploring the Limits of Model-Targeted Indiscriminate Data Poisoning Attacks
Yiwei Lu · Gautam Kamath · Yaoliang Yu
Workshop
Rethinking Label Poisoning for GNNs: Pitfalls and Attacks
Vijay Lingam · Mohammad Sadegh Akhondzadeh · Aleksandar Bojchevski
Workshop
When Can Linear Learners be Robust to Indiscriminate Poisoning Attacks?
Fnu Suya · Xiao Zhang · Yuan Tian · David Evans
Workshop
Localizing Partial Model for Personalized Federated Learning
Heewon Park · Miru Kim · Minhae Kwon
Poster
Thu 13:30 Run-off Election: Improved Provable Defense against Data Poisoning Attacks
Keivan Rezaei · Kiarash Banihashem · Atoosa Malemir Chegini · Soheil Feizi
Workshop
Creating a Bias-Free Dataset of Food Delivery App Reviews with Data Poisoning Attacks
Hyunmin Lee · SeungYoung Oh · JinHyun Han · Hyunggu Jung
Workshop
Feature Partition Aggregation: A Fast Certified Defense Against a Union of ℓ0 Attacks
Zayd S Hammoudeh · Daniel Lowd
Workshop
Like Oil and Water: Group Robustness and Poisoning Defenses Don’t Mix
Michael-Andrei Panaitescu-Liess · Yigitcan Kaya · Tudor Dumitras