Oral
Tue Jun 11 11:00 AM -- 11:20 AM (PDT) @ Grand Ballroom
Adversarial Attacks on Node Embeddings via Graph Poisoning
Oral
Tue Jun 11 11:20 AM -- 11:25 AM (PDT) @ Grand Ballroom
First-Order Adversarial Vulnerability of Neural Networks and Input Dimension
Oral
Tue Jun 11 11:25 AM -- 11:30 AM (PDT) @ Grand Ballroom
On Certifying Non-Uniform Bounds against Adversarial Attacks
Oral
Tue Jun 11 11:30 AM -- 11:35 AM (PDT) @ Grand Ballroom
Improving Adversarial Robustness via Promoting Ensemble Diversity
Oral
Tue Jun 11 11:35 AM -- 11:40 AM (PDT) @ Grand Ballroom
Adversarial camera stickers: A physical camera-based attack on deep learning systems
Oral
Tue Jun 11 11:40 AM -- 12:00 PM (PDT) @ Grand Ballroom
Adversarial examples from computational constraints
Oral
Tue Jun 11 12:00 PM -- 12:05 PM (PDT) @ Grand Ballroom
POPQORN: Quantifying Robustness of Recurrent Neural Networks
Oral
Tue Jun 11 12:05 PM -- 12:10 PM (PDT) @ Grand Ballroom
Using Pre-Training Can Improve Model Robustness and Uncertainty
Oral
Tue Jun 11 12:10 PM -- 12:15 PM (PDT) @ Grand Ballroom
Generalized No Free Lunch Theorem for Adversarial Robustness
Oral
Tue Jun 11 12:15 PM -- 12:20 PM (PDT) @ Grand Ballroom
PROVEN: Verifying Robustness of Neural Networks with a Probabilistic Approach