Adversarial machine learning is an emerging field that studies the vulnerabilities of ML approaches and detects malicious behaviors in adversarial settings. Adversarial agents can deceive an ML classifier, drastically altering its response with perturbations to the inputs that are imperceptible to humans. Without being alarmist, researchers in machine learning have a responsibility to preempt attacks and build safeguards, especially when the task is critical to information security or to human lives. We need to deepen our understanding of machine learning in adversarial environments.
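As a concrete illustration of such imperceptible perturbations, the minimal sketch below implements the fast gradient sign method (FGSM, Goodfellow et al., 2015) in PyTorch. It is only illustrative of the kind of attack studied in this area: the classifier `model`, the inputs `x`/`y`, and the budget `epsilon` are assumed placeholders, not part of the workshop program.

```python
import torch
import torch.nn.functional as F

def fgsm_perturb(model, x, y, epsilon=8 / 255):
    """One-step FGSM: nudge each input pixel by +/- epsilon in the
    direction that increases the classification loss."""
    x_adv = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x_adv), y)
    loss.backward()
    # The sign of the input gradient gives the loss-maximizing direction;
    # clamping keeps the perturbed image a valid [0, 1] tensor.
    x_adv = x_adv + epsilon * x_adv.grad.sign()
    return x_adv.clamp(0.0, 1.0).detach()
```

With a small budget such as epsilon = 8/255, the change is typically invisible to a human observer yet often suffices to flip the classifier's prediction.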
While the negative implications of this nascent technology have been widely discussed, researchers in machine learning have yet to explore its positive opportunities in numerous respects. The positive impacts of adversarial machine learning are not limited to boosting the robustness of ML models; they cut across several other domains.
Since adversarial machine learning has both positive and negative applications, steering its use in the right direction requires a framework that embraces the positives. This workshop aims to bring together researchers and practitioners from various communities (e.g., machine learning, computer security, data privacy, and ethics) to synthesize promising ideas and research directions and to foster and strengthen cross-community collaborations on both theoretical studies and practical applications. Unlike previous workshops on adversarial machine learning, this workshop seeks to explore the prospects of the field in addition to reducing the unintended risks of sophisticated ML models.
This is a one-day workshop, planned with a 10-minute opening, 11 invited keynotes, 9 contributed talks, 2 poster sessions, and 2 special panel-discussion sessions on the prospects and perils of adversarial machine learning.
The workshop is kindly sponsored by RealAI Inc. and Bosch.
Opening Remarks (Demonstration)
Invited Talk #1 (Demonstration)
Invited Talk #2 (Demonstration)
Contributed Talk #1 (Demonstration)
Contributed Talk #2 (Demonstration)
Invited Talk #3 (Demonstration)
Invited Talk #4 (Demonstration)
Contributed Talk #3 (Demonstration)
Contributed Talk #4 (Demonstration)
Invited Talk #5 (Demonstration)
Discussion Panel #1 (Discussion Panel)
Poster Session #1 (Poster)
Invited Talk #6 (Demonstration)
Invited Talk #7 (Demonstration)
Contributed Talk #5 (Demonstration)
Contributed Talk #6 (Demonstration)
Invited Talk #8 (Demonstration)
Invited Talk #9 (Demonstration)
Contributed Talk #7 (Demonstration)
Contributed Talk #8 (Demonstration)
Invited Talk #10 (Demonstration)
Invited Talk #11 (Demonstration)
Discussion Panel #2 (Discussion Panel)
Contributed Talk #9 (Demonstration)
Poster Session #2 (Poster)
On Frank-Wolfe Adversarial Training (Poster)
Fast Minimum-norm Adversarial Attacks through Adaptive Norm Constraints (Poster)
Attacking Graph Classification via Bayesian Optimisation (Poster)
Enhancing Certified Robustness via Smoothed Weighted Ensembling (Poster)
Adversarial Robustness of Streaming Algorithms through Importance Sampling (Poster)
Generalizing Adversarial Training to Composite Semantic Perturbations (Poster)
Certified robustness against adversarial patch attacks via randomized cropping (Poster)
Indicators of Attack Failure: Debugging and Improving Optimization of Adversarial Examples (Poster)
Out of Distribution Detection and Adversarial Attacks on Deep Neural Networks for Robust Medical Image Analysis (Poster)
Generate More Imperceptible Adversarial Examples for Object Detection (Poster)
Improving Visual Quality of Unrestricted Adversarial Examples with Wavelet-VAE (Poster)
Whispering to DNN: A Speech Steganographic Scheme Based on Hidden Adversarial Examples for Speech Recognition Models (Poster)
Strategically-timed State-Observation Attacks on Deep Reinforcement Learning Agents (Poster)
Boosting Transferability of Targeted Adversarial Examples via Hierarchical Generative Networks (Poster)
Is It Time to Redefine the Classification Task for Deep Learning Systems? (Poster)
AID-Purifier: A Light Auxiliary Network for Boosting Adversarial Defense (Poster)
Less is More: Feature Selection for Adversarial Robustness with Compressive Counter-Adversarial Attacks (Poster)
Empirical robustification of pre-trained classifiers (Poster)
Disrupting Model Training with Adversarial Shortcuts (Poster)
Limited Budget Adversarial Attack Against Online Image Stream (Poster)
The Interplay between Distribution Parameters and the Accuracy-Robustness Tradeoff in Classification (Poster)
Membership Inference Attacks on Lottery Ticket Networks (Poster)
Universal Adversarial Head: Practical Protection against Video Data Leakage (Poster)
Towards Safe Reinforcement Learning via Constraining Conditional Value at Risk (Poster)
Self-Supervised Iterative Contextual Smoothing for Efficient Adversarial Defense against Gray- and Black-Box Attack (Poster)
Red Alarm for Pre-trained Models: Universal Vulnerability to Neuron-Level Backdoor Attacks (Poster)
Demystifying Adversarial Training via A Unified Probabilistic Framework (Poster)
Detecting Adversarial Examples Is (Nearly) As Hard As Classifying Them (Poster)
Data Poisoning Won't Save You From Facial Recognition (Poster)
Long-term Cross Adversarial Training: A Robust Meta-learning Method for Few-shot Classification Tasks (Poster)
Attention-Guided Black-box Adversarial Attacks with Large-Scale Multiobjective Evolutionary Optimization (Poster)
Attacking Few-Shot Classifiers with Adversarial Support Poisoning (Poster)
Adversarial for Good? How the Adversarial ML Community's Values Impede Socially Beneficial Uses of Attacks (Poster)
A Primer on Multi-Neuron Relaxation-based Adversarial Robustness Certification (Poster)
Adversarially Trained Neural Policies in the Fourier Domain (Poster)
Maximizing the robust margin provably overfits on noiseless data (Poster)
Using Anomaly Feature Vectors for Detecting, Classifying and Warning of Outlier Adversarial Examples (Poster)
Towards Achieving Adversarial Robustness Beyond Perceptual Limits (Poster)
Adversarial Interaction Attacks: Fooling AI to Misinterpret Human Intentions (Poster)
Helper-based Adversarial Training: Reducing Excessive Margin to Achieve a Better Accuracy vs. Robustness Trade-off (Poster)
On Success and Simplicity: A Second Look at Transferable Targeted Attacks (Poster)
Towards Transferable Adversarial Perturbations with Minimum Norm (Poster)
ROPUST: Improving Robustness through Fine-tuning with Photonic Processors and Synthetic Gradients (Poster)
Adversarial EXEmples: Functionality-preserving Optimization of Adversarial Windows Malware (Poster)
Non-Robust Feature Mapping in Deep Reinforcement Learning (Poster)
BadNL: Backdoor Attacks Against NLP Models (Poster)
Query-based Adversarial Attacks on Graph with Fake Nodes (Poster)
Robust Recovery of Adversarial Samples (Poster)
Beta-CROWN: Efficient Bound Propagation with Per-neuron Split Constraints for Neural Network Robustness Verification (Poster)
Meta Adversarial Training against Universal Patches (Poster)
Adversarial Sample Detection via Channel Pruning (Poster)
Improve Generalization and Robustness of Neural Networks via Weight Scale Shifting Invariant Regularizations (Poster)
Uncovering Universal Features: How Adversarial Training Improves Adversarial Transferability (Poster)
Defending against Model Stealing via Verifying Embedded External Features (Poster)
A Closer Look at the Adversarial Robustness of Information Bottleneck Models (Poster)
Audio Injection Adversarial Example Attack (Poster)
Defending Adversaries Using Unsupervised Feature Clustering VAE (Poster)
Detecting AutoAttack Perturbations in the Frequency Domain (Poster)
On the Effectiveness of Poisoning against Unsupervised Domain Adaptation (Poster)
Examining the Human Perceptibility of Black-Box Adversarial Attacks on Face Recognition (Poster)
Consistency Regularization for Adversarial Robustness (Poster)
Fast Certified Robust Training with Short Warmup (Poster)
Entropy Weighted Adversarial Training (Poster)
Adversarially Robust Learning via Entropic Regularization (Poster)
Poisoning the Search Space in Neural Architecture Search (Poster)
Adversarial Semantic Contour for Object Detection (Poster)
Hidden Patch Attacks for Optical Flow (Poster)
Combating Adversaries with Anti-Adversaries (Poster)
SmoothMix: Training Confidence-calibrated Smoothed Classifiers for Certified Adversarial Robustness (Poster)