104 Results

Spotlight
Tue 5:45 SPADE: A Spectral Method for Black-Box Adversarial Robustness Evaluation
Wuxinlin Cheng, Chenhui Deng, Zhiqiang Zhao, Yaohui Cai, Zhiru Zhang, Zhuo Feng
Poster
Tue 9:00 SPADE: A Spectral Method for Black-Box Adversarial Robustness Evaluation
Wuxinlin Cheng, Chenhui Deng, Zhiqiang Zhao, Yaohui Cai, Zhiru Zhang, Zhuo Feng
Spotlight
Wed 5:40 Evaluating Robustness of Predictive Uncertainty Estimation: Are Dirichlet-based Models Reliable?
Anna-Kathrin Kopetzki, Bertrand Charpentier, Daniel Zügner, Sandhya Giri, Stephan Günnemann
Spotlight
Wed 7:25 Generalised Lipschitz Regularisation Equals Distributional Robustness
Zac Cranko, Zhan Shi, Xinhua Zhang, Richard Nock, Simon Kornblith
Affinity Workshop
Wed 8:30 On the (Un-)Avoidability of Adversarial Examples
Sadia Chowdhury
Poster
Wed 9:00 Evaluating Robustness of Predictive Uncertainty Estimation: Are Dirichlet-based Models Reliable?
Anna-Kathrin Kopetzki, Bertrand Charpentier, Daniel Zügner, Sandhya Giri, Stephan Günnemann
Poster
Wed 9:00 Generalised Lipschitz Regularisation Equals Distributional Robustness
Zac Cranko, Zhan Shi, Xinhua Zhang, Richard Nock, Simon Kornblith
Spotlight
Thu 5:20 Adversarial Robustness Guarantees for Random Deep Neural Networks
Giacomo De Palma, Bobak T Kiani, Seth Lloyd
Oral
Thu 6:00 Improved, Deterministic Smoothing for L_1 Certified Robustness
Alexander Levine, Soheil Feizi
Spotlight
Thu 6:20 Mixed Nash Equilibria in the Adversarial Examples Game
Laurent Meunier, Meyer Scetbon, Rafael Pinot, Jamal Atif, Yann Chevaleyre
Spotlight
Thu 6:25 Learning to Generate Noise for Multi-Attack Robustness
Divyam Madaan, Jinwoo Shin, Sung Ju Hwang
Spotlight
Thu 6:30 Query Complexity of Adversarial Attacks
Grzegorz Gluch, Rüdiger Urbanke
Spotlight
Thu 6:35 Training Adversarially Robust Sparse Networks via Bayesian Connectivity Sampling
Ozan Özdenizci, Robert Legenstein
Spotlight
Thu 6:40 Efficient Training of Robust Decision Trees Against Adversarial Examples
Daniël Vos, Sicco Verwer
Spotlight
Thu 6:45 Expressive 1-Lipschitz Neural Networks for Robust Multiple Graph Learning against Adversarial Attacks
Xin Zhao, Zeru Zhang, Zijie Zhang, Lingfei Wu, Jiayin Jin, Yang Zhou, Ruoming Jin, Dejing Dou, Da Yan
Oral
Thu 7:00 CARTL: Cooperative Adversarially-Robust Transfer Learning
Dian Chen, Hongxin Hu, Qian Wang, Li Yinli, Cong Wang, Chao Shen, Qi Li
Spotlight
Thu 7:20 Skew Orthogonal Convolutions
Sahil Singla, Soheil Feizi
Spotlight
Thu 7:25 Lower Bounds on Cross-Entropy Loss in the Presence of Test-time Adversaries
Arjun Nitin Bhagoji, Daniel Cullina, Vikash Sehwag, Prateek Mittal
Spotlight
Thu 7:30 Defense against backdoor attacks via robust covariance estimation
Jonathan Hayase, Weihao Kong, Raghav Somani, Sewoong Oh
Spotlight
Thu 7:35 Adversarial Purification with Score-based Generative Models
Jongmin Yoon, Sung Ju Hwang, Juho Lee
Spotlight
Thu 7:40 Knowledge Enhanced Machine Learning Pipeline against Diverse Adversarial Attacks
Nezihe Merve Gürel, Xiangyu Qi, Luka Rimanic, Ce Zhang, Bo Li
Spotlight
Thu 7:45 To be Robust or to be Fair: Towards Fairness in Adversarial Training
Han Xu, Xiaorui Liu, Yaxin Li, Anil Jain, Jiliang Tang
Poster
Thu 9:00 Efficient Training of Robust Decision Trees Against Adversarial Examples
Daniël Vos, Sicco Verwer
Poster
Thu 9:00 Adversarial Robustness Guarantees for Random Deep Neural Networks
Giacomo De Palma, Bobak T Kiani, Seth Lloyd
Poster
Thu 9:00 Adversarial Purification with Score-based Generative Models
Jongmin Yoon, Sung Ju Hwang, Juho Lee
Poster
Thu 9:00 Training Adversarially Robust Sparse Networks via Bayesian Connectivity Sampling
Ozan Özdenizci, Robert Legenstein
Poster
Thu 9:00 Defense against backdoor attacks via robust covariance estimation
Jonathan Hayase, Weihao Kong, Raghav Somani, Sewoong Oh
Poster
Thu 9:00 Query Complexity of Adversarial Attacks
Grzegorz Gluch, Rüdiger Urbanke
Poster
Thu 9:00 Expressive 1-Lipschitz Neural Networks for Robust Multiple Graph Learning against Adversarial Attacks
Xin Zhao, Zeru Zhang, Zijie Zhang, Lingfei Wu, Jiayin Jin, Yang Zhou, Ruoming Jin, Dejing Dou, Da Yan
Poster
Thu 9:00 Improved, Deterministic Smoothing for L_1 Certified Robustness
Alexander Levine, Soheil Feizi
Poster
Thu 9:00 Lower Bounds on Cross-Entropy Loss in the Presence of Test-time Adversaries
Arjun Nitin Bhagoji, Daniel Cullina, Vikash Sehwag, Prateek Mittal
Poster
Thu 9:00 Mixed Nash Equilibria in the Adversarial Examples Game
Laurent Meunier, Meyer Scetbon, Rafael Pinot, Jamal Atif, Yann Chevaleyre
Poster
Thu 9:00 To be Robust or to be Fair: Towards Fairness in Adversarial Training
Han Xu, Xiaorui Liu, Yaxin Li, Anil Jain, Jiliang Tang
Poster
Thu 9:00 CARTL: Cooperative Adversarially-Robust Transfer Learning
Dian Chen, Hongxin Hu, Qian Wang, Li Yinli, Cong Wang, Chao Shen, Qi Li
Poster
Thu 9:00 Skew Orthogonal Convolutions
Sahil Singla, Soheil Feizi
Poster
Thu 9:00 Learning to Generate Noise for Multi-Attack Robustness
Divyam Madaan, Jinwoo Shin, Sung Ju Hwang
Poster
Thu 9:00 Knowledge Enhanced Machine Learning Pipeline against Diverse Adversarial Attacks
Nezihe Merve Gürel, Xiangyu Qi, Luka Rimanic, Ce Zhang, Bo Li
Oral
Thu 17:00 Sparse and Imperceptible Adversarial Attack via a Homotopy Algorithm
Mingkang Zhu, Tianlong Chen, Zhangyang Wang
Spotlight
Thu 17:20 Maximum Mean Discrepancy Test is Aware of Adversarial Attacks
Ruize Gao, Feng Liu, Jingfeng Zhang, Bo Han, Tongliang Liu, Gang Niu, Masashi Sugiyama
Spotlight
Thu 17:25 Learning Diverse-Structured Networks for Adversarial Robustness
Xuefeng Du, Jingfeng Zhang, Bo Han, Tongliang Liu, Yu Rong, Gang Niu, Junzhou Huang, Masashi Sugiyama
Spotlight
Thu 17:30 PopSkipJump: Decision-Based Attack for Probabilistic Classifiers
Carl-Johann Simon-Gabriel, Noman Ahmed Sheikh, Andreas Krause
Spotlight
Thu 17:35 Towards Better Robust Generalization with Shift Consistency Regularization
Shufei Zhang, Zhuang Qian, Kaizhu Huang, Qiufeng Wang, Rui Zhang, Xinping Yi
Spotlight
Thu 17:40 Robust Learning for Data Poisoning Attacks
Yunjuan Wang, Poorya Mianjy, Raman Arora
Spotlight
Thu 17:45 A Zeroth-Order Block Coordinate Descent Algorithm for Huge-Scale Black-Box Optimization
HanQin Cai, Yuchen Lou, Daniel Mckenzie, Wotao Yin
Spotlight
Thu 17:45 Mind the Box: $l_1$-APGD for Sparse Adversarial Attacks on Image Classifiers
Francesco Croce, Matthias Hein
Spotlight
Thu 18:35 Integrated Defense for Resilient Graph Matching
Jiaxiang Ren, Zijie Zhang, Jiayin Jin, Xin Zhao, Sixing Wu, Yang Zhou, Yelong Shen, Tianshi Che, Ruoming Jin, Dejing Dou
Oral
Thu 19:00 A General Framework For Detecting Anomalous Inputs to DNN Classifiers
Jayaram Raghuram, Varun Chandrasekaran, Somesh Jha, Suman Banerjee
Spotlight
Thu 19:05 Neural Tangent Generalization Attacks
Jimmy Yuan, Shan-Hung (Brandon) Wu
Spotlight
Thu 19:20 Towards Defending against Adversarial Examples via Attack-Invariant Features
Dawei Zhou, Tongliang Liu, Bo Han, Nannan Wang, Chunlei Peng, Xinbo Gao
Spotlight
Thu 19:25 Towards Certifying L-infinity Robustness using Neural Networks with L-inf-dist Neurons
Bohang Zhang, Tianle Cai, Zhou Lu, Di He, Liwei Wang
Spotlight
Thu 19:30 Uncovering the Connections Between Adversarial Transferability and Knowledge Transferability
Kaizhao Liang, Jacky Zhang, Boxin Wang, Zhuolin Yang, Sanmi Koyejo, Bo Li
Spotlight
Thu 19:35 Improving Gradient Regularization using Complex-Valued Neural Networks
Eric Yeats, Yiran Chen, Hai Li
Spotlight
Thu 19:40 Double-Win Quant: Aggressively Winning Robustness of Quantized Deep Neural Networks via Random Precision Training and Inference
Yonggan Fu, Qixuan Yu, Meng Li, Vikas Chandra, Yingyan Lin
Spotlight
Thu 19:45 Progressive-Scale Boundary Blackbox Attack via Projective Gradient Estimation
Jiawei Zhang, Linyi Li, Huichen Li, Xiaolu Zhang, Shuang Yang, Bo Li
Poster
Thu 21:00 Improving Gradient Regularization using Complex-Valued Neural Networks
Eric Yeats, Yiran Chen, Hai Li
Poster
Thu 21:00 Mind the Box: $l_1$-APGD for Sparse Adversarial Attacks on Image Classifiers
Francesco Croce, Matthias Hein
Poster
Thu 21:00 Towards Certifying L-infinity Robustness using Neural Networks with L-inf-dist Neurons
Bohang Zhang, Tianle Cai, Zhou Lu, Di He, Liwei Wang
Poster
Thu 21:00 Double-Win Quant: Aggressively Winning Robustness of Quantized Deep Neural Networks via Random Precision Training and Inference
Yonggan Fu, Qixuan Yu, Meng Li, Vikas Chandra, Yingyan Lin
Poster
Thu 21:00 Integrated Defense for Resilient Graph Matching
Jiaxiang Ren, Zijie Zhang, Jiayin Jin, Xin Zhao, Sixing Wu, Yang Zhou, Yelong Shen, Tianshi Che, Ruoming Jin, Dejing Dou
Poster
Thu 21:00 Sparse and Imperceptible Adversarial Attack via a Homotopy Algorithm
Mingkang Zhu, Tianlong Chen, Zhangyang Wang
Poster
Thu 21:00 PopSkipJump: Decision-Based Attack for Probabilistic Classifiers
Carl-Johann Simon-Gabriel, Noman Ahmed Sheikh, Andreas Krause
Poster
Thu 21:00 A General Framework For Detecting Anomalous Inputs to DNN Classifiers
Jayaram Raghuram, Varun Chandrasekaran, Somesh Jha, Suman Banerjee
Poster
Thu 21:00 Robust Learning for Data Poisoning Attacks
Yunjuan Wang, Poorya Mianjy, Raman Arora
Poster
Thu 21:00 A Zeroth-Order Block Coordinate Descent Algorithm for Huge-Scale Black-Box Optimization
HanQin Cai, Yuchen Lou, Daniel Mckenzie, Wotao Yin
Poster
Thu 21:00 Neural Tangent Generalization Attacks
Jimmy Yuan, Shan-Hung (Brandon) Wu
Poster
Thu 21:00 Progressive-Scale Boundary Blackbox Attack via Projective Gradient Estimation
Jiawei Zhang, Linyi Li, Huichen Li, Xiaolu Zhang, Shuang Yang, Bo Li
Poster
Thu 21:00 Learning Diverse-Structured Networks for Adversarial Robustness
Xuefeng Du, Jingfeng Zhang, Bo Han, Tongliang Liu, Yu Rong, Gang Niu, Junzhou Huang, Masashi Sugiyama
Poster
Thu 21:00 Maximum Mean Discrepancy Test is Aware of Adversarial Attacks
Ruize Gao, Feng Liu, Jingfeng Zhang, Bo Han, Tongliang Liu, Gang Niu, Masashi Sugiyama
Poster
Thu 21:00 Towards Better Robust Generalization with Shift Consistency Regularization
Shufei Zhang, Zhuang Qian, Kaizhu Huang, Qiufeng Wang, Rui Zhang, Xinping Yi
Poster
Thu 21:00 Uncovering the Connections Between Adversarial Transferability and Knowledge Transferability
Kaizhao Liang, Jacky Zhang, Boxin Wang, Zhuolin Yang, Sanmi Koyejo, Bo Li
Poster
Thu 21:00 Towards Defending against Adversarial Examples via Attack-Invariant Features
Dawei Zhou, Tongliang Liu, Bo Han, Nannan Wang, Chunlei Peng, Xinbo Gao
Workshop
Fri 9:30 Contributed Talk: Automated Discovery of Adaptive Attacks on Adversarial Defenses
Chengyuan Yao
Workshop
Sat 7:15 Contributed Talk #4
Florian Tramer
Workshop
Sat 9:05 Adversarial Examples in Random Deep Networks
Peter Bartlett
Workshop
Sat 11:10 Understanding the effect of sparsity on neural networks robustness
Lukas Timpl, Rahim Entezari, Hanie Sedghi, Behnam Neyshabur, Olga Saukh
Workshop
Sat 11:10 Invited Talk #9
Kamalika Chaudhuri
Workshop
Sat 11:50 Invited Talk #10
Cihang Xie
Workshop
On the (Un-)Avoidability of Adversarial Examples
Ruth Urner
Workshop
Using Anomaly Feature Vectors for Detecting, Classifying and Warning of Outlier Adversarial Examples
Nelson Manohar-Alers, Ryan Feng, Sahib Singh, Jiguo Song, Atul Prakash
Workshop
Attention-Guided Black-box Adversarial Attacks with Large-Scale Multiobjective Evolutionary Optimization
Jie Wang, Zhaoxia Yin, Jing Jiang, Yang Du
Workshop
Detecting Adversarial Examples Is (Nearly) As Hard As Classifying Them
Florian Tramer
Workshop
AID-Purifier: A Light Auxiliary Network for Boosting Adversarial Defense
Duhun Hwang, Eunjung Lee, Wonjong Rhee
Workshop
Is It Time to Redefine the Classification Task for Deep Learning Systems?
Keji Han, Yun Li, Songcan Chen
Workshop
Boosting Transferability of Targeted Adversarial Examples via Hierarchical Generative Networks
Xiao Yang, Yinpeng Dong, Tianyu Pang
Workshop
Whispering to DNN: A Speech Steganographic Scheme Based on Hidden Adversarial Examples for Speech Recognition Models
Haozhe Chen, Weiming Zhang, Kejiang Chen, Nenghai Yu
Workshop
Improving Visual Quality of Unrestricted Adversarial Examples with Wavelet-VAE
Wenzhao Xiang, Chang Liu, Shibao Zheng
Workshop
Generate More Imperceptible Adversarial Examples for Object Detection
Siyuan Liang, Xingxing Wei, Xiaochun Cao
Workshop
Indicators of Attack Failure: Debugging and Improving Optimization of Adversarial Examples
Maura Pintor, Luca Demetrio, Angelo Sotgiu, Giovanni Manca, Ambra Demontis, Nicholas Carlini, Battista Biggio, Fabio Roli
Workshop
Generalizing Adversarial Training to Composite Semantic Perturbations
Yun-Yun Tsai, Lei Hsiung, Pin-Yu Chen, Tsung-Yi Ho
Workshop
Adversarial Semantic Contour for Object Detection
Yichi Zhang, Zijian Zhu, Xiao Yang, Jun Zhu
Workshop
Detecting AutoAttack Perturbations in the Frequency Domain
Peter Lorenz, Paula Harder, Dominik Straßel, Margret Keuper, Janis Keuper
Workshop
Towards Transferable Adversarial Perturbations with Minimum Norm
Fangcheng Liu, Chao Zhang, Hongyang Zhang
Workshop
Uncovering Universal Features: How Adversarial Training Improves Adversarial Transferability
Jacob M Springer, Melanie Mitchell, Garrett T Kenyon
Workshop
A Primer on Multi-Neuron Relaxation-based Adversarial Robustness Certification
Kevin Roth
Workshop
Robust Recovery of Adversarial Samples
Tejas Bana, Siddhant Kulkarni, Jatan Loya
Workshop
On the Connections between Counterfactual Explanations and Adversarial Examples
Martin Pawelczyk, Shalmali Joshi, Chirag Agarwal, Sohini Upadhyay, Hima Lakkaraju
Workshop
Benign Overfitting in Adversarially Robust Linear Classification
Jinghui Chen, Yuan Cao, Quanquan Gu
Workshop
Classification and Adversarial Examples in an Overparameterized Linear Model: A Signal-Processing Perspective
Adhyyan Narang, Vidya Muthukumar, Anant Sahai
Workshop
On the Connections between Counterfactual Explanations and Adversarial Examples
Martin Pawelczyk, Shalmali Joshi, Chirag Agarwal, Sohini Upadhyay, Hima Lakkaraju
Workshop
Understanding the effect of sparsity on neural networks robustness
Lukas Timpl, Rahim Entezari, Hanie Sedghi, Behnam Neyshabur, Olga Saukh
Workshop
Automated Discovery of Adaptive Attacks on Adversarial Defenses
Chengyuan Yao, Pavol Bielik, Petar Tsankov, Martin Vechev
Workshop
Improving the Transferability of Adversarial Examples with New Iteration Framework and Input Dropout
Pengfei Xie