All 2023 Events: 74 Results (Page 2 of 7)
Workshop
When Can Linear Learners be Robust to Indiscriminate Poisoning Attacks?
Workshop
Sharpness-Aware Minimization Alone can Improve Adversarial Robustness
Zeming Wei · Jingyu Zhu · Yihao Zhang
Workshop
A Theoretical Perspective on the Robustness of Feature Extractors
Workshop
Scoring Black-Box Models for Adversarial Robustness
Workshop
R-LPIPS: An Adversarially Robust Perceptual Similarity Metric
Workshop
Characterizing the Optimal 0-1 Loss for Multi-class Classification with a Test-time Attacker
Workshop
Certified Calibration: Bounding Worst-Case Calibration under Adversarial Attacks
Workshop
Expressivity of Graph Neural Networks Through the Lens of Adversarial Robustness
Workshop
Like Oil and Water: Group Robustness and Poisoning Defenses Don’t Mix
Poster
Thu 16:30 Adversarial robustness of amortized Bayesian inference
Manuel Gloeckler · Michael Deistler · Jakob Macke
Affinity Workshop
Mon 19:15 Is ReLU Adversarially Robust?
Korn Sooksatra · Greg Hamerly · Pablo Rivas
Poster
Thu 16:30 Understanding the Impact of Adversarial Robustness on Accuracy Disparity
Yuzheng Hu · Fan Wu · Hongyang Zhang · Han Zhao