Certifiably Robust Multi-Agent Reinforcement Learning against Adversarial Communication
Yanchao Sun · Ruijie Zheng · Parisa Hassanzadeh · Yongyuan Liang · Soheil Feizi · Sumitra Ganesh · Furong Huang
Communication is important in many multi-agent reinforcement learning (MARL) problems for agents to share information and make good decisions. However, when deploying trained communicative agents in a real-world application where noise and potential attackers exist, the safety of communication-based policies becomes a severe yet underexplored issue. Specifically, if communication messages are manipulated by malicious attackers, agents relying on untrustworthy communication may take unsafe actions that lead to catastrophic consequences. Therefore, it is crucial to ensure that agents will not be misled by corrupted communication, while still benefiting from benign communication. In this work, we consider an environment with $N$ agents, where the attacker may arbitrarily change the communication from any $C<\frac{N-1}{2}$ agents to a victim agent. For this strong threat model, we propose a certifiable defense by constructing a message-ensemble policy that aggregates multiple randomly ablated message sets. Theoretical analysis shows that this message-ensemble policy can utilize benign communication while being certifiably robust to adversarial communication, regardless of the attacking algorithm. Experiments in multiple environments verify that our defense significantly improves the robustness of trained policies against various types of attacks.
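The ablation-and-vote idea in the abstract can be illustrated with a minimal sketch. Everything here is assumed for illustration: `base_policy`, `ensemble_vote`, and `is_certified` are hypothetical names, the base policy is a toy stand-in, and the margin condition is a simplified, conservative proxy for the paper's actual certificate, not the authors' implementation.

```python
from itertools import combinations
from collections import Counter
from math import comb

def ensemble_vote(base_policy, obs, messages, k):
    """Majority vote of the base policy over every size-k subset of the
    received messages (hypothetical interface for illustration)."""
    return Counter(base_policy(obs, sub) for sub in combinations(messages, k))

def is_certified(votes, m, k, c):
    """Conservative certificate sketch: with m received messages and at most
    c of them corrupted, only subsets containing a corrupted message can be
    influenced; the majority action is provably stable when the vote margin
    exceeds twice the number of such subsets."""
    affected = comb(m, k) - comb(m - c, k)  # subsets that may see a corrupted message
    ranked = votes.most_common(2)
    n1 = ranked[0][1]
    n2 = ranked[1][1] if len(ranked) > 1 else 0
    return n1 - n2 > 2 * affected

# Toy base policy: choose an action from the sign of the summed message subset.
toy_policy = lambda obs, msgs: 0 if sum(msgs) >= 0 else 1

votes = ensemble_vote(toy_policy, obs=None, messages=[1, 2, 3, 4, 5], k=2)
certified = is_certified(votes, m=5, k=2, c=1)
```

With five benign messages and subsets of size two, all ten subsets agree on action 0, while at most four subsets could contain one corrupted message, so the vote margin clears the conservative bound and the action is certified under this simplified check.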
Author Information
Yanchao Sun (University of Maryland, College Park)
Ruijie Zheng (University of Maryland, College Park)
Parisa Hassanzadeh (JPMorgan AI Research)
Yongyuan Liang (Sun Yat-sen University)
Soheil Feizi (University of Maryland)
Sumitra Ganesh
Furong Huang (University of Maryland)
More from the Same Authors

- 2022: Generative Models with Information-Theoretic Protection Against Membership Inference Attacks (Parisa Hassanzadeh · Robert Tillman)
- 2022: Everyone Matters: Customizing the Dynamics of Decision Boundary for Adversarial Robustness (Yuancheng Xu · Yanchao Sun · Furong Huang)
- 2022: Towards Better Understanding of Self-Supervised Representations (Neha Mukund Kalibhat · Kanika Narang · Hamed Firooz · Maziar Sanjabi · Soheil Feizi)
- 2022: Live in the Moment: Learning Dynamics Model Adapted to Evolving Policy (xiyao wang · Wichayaporn Wongkamjan · Furong Huang)
- 2022: Improved Certified Defenses against Data Poisoning with (Deterministic) Finite Aggregation (Wenxiao Wang · Alexander Levine · Soheil Feizi)
- 2022: Efficient Adversarial Training without Attacking: Worst-Case-Aware Robust Reinforcement Learning (Yongyuan Liang · Yanchao Sun · Ruijie Zheng · Furong Huang)
- 2022: Panel discussion (Steffen Schneider · Aleksander Madry · Alexei Efros · Chelsea Finn · Soheil Feizi)
- 2022: Toward Efficient Robust Training against Union of L_p Threat Models (Gaurang Sriramanan · Maharshi Gor · Soheil Feizi)
- 2022 Poster: Improved Certified Defenses against Data Poisoning with (Deterministic) Finite Aggregation (Wenxiao Wang · Alexander Levine · Soheil Feizi)
- 2022 Poster: Scaling-up Diverse Orthogonal Convolutional Networks by a Paraunitary Framework (Jiahao Su · Wonmin Byeon · Furong Huang)
- 2022 Poster: FOCUS: Familiar Objects in Common and Uncommon Settings (Priyatham Kattakinda · Soheil Feizi)
- 2022 Spotlight: Improved Certified Defenses against Data Poisoning with (Deterministic) Finite Aggregation (Wenxiao Wang · Alexander Levine · Soheil Feizi)
- 2022 Spotlight: FOCUS: Familiar Objects in Common and Uncommon Settings (Priyatham Kattakinda · Soheil Feizi)
- 2022 Spotlight: Scaling-up Diverse Orthogonal Convolutional Networks by a Paraunitary Framework (Jiahao Su · Wonmin Byeon · Furong Huang)
- 2021: Invited Talk 6: Towards Understanding Foundations of Robust Learning (Soheil Feizi)
- 2021 Poster: Improved, Deterministic Smoothing for L_1 Certified Robustness (Alexander Levine · Soheil Feizi)
- 2021 Poster: Skew Orthogonal Convolutions (Sahil Singla · Soheil Feizi)
- 2021 Spotlight: Skew Orthogonal Convolutions (Sahil Singla · Soheil Feizi)
- 2021 Oral: Improved, Deterministic Smoothing for L_1 Certified Robustness (Alexander Levine · Soheil Feizi)
- 2020 Poster: Curse of Dimensionality on Randomized Smoothing for Certifiable Robustness (Aounon Kumar · Alexander Levine · Tom Goldstein · Soheil Feizi)
- 2020 Poster: Second-Order Provable Defenses against Adversarial Attacks (Sahil Singla · Soheil Feizi)
- 2020 Poster: On Second-Order Group Influence Functions for Black-Box Predictions (Samyadeep Basu · Xuchen You · Soheil Feizi)
- 2019 Poster: Understanding Impacts of High-Order Loss Approximations and Features in Deep Learning Interpretation (Sahil Singla · Eric Wallace · Shi Feng · Soheil Feizi)
- 2019 Oral: Understanding Impacts of High-Order Loss Approximations and Features in Deep Learning Interpretation (Sahil Singla · Eric Wallace · Shi Feng · Soheil Feizi)
- 2019 Poster: Entropic GANs meet VAEs: A Statistical Approach to Compute Sample Likelihoods in GANs (Yogesh Balaji · Hamed Hassani · Rama Chellappa · Soheil Feizi)
- 2019 Oral: Entropic GANs meet VAEs: A Statistical Approach to Compute Sample Likelihoods in GANs (Yogesh Balaji · Hamed Hassani · Rama Chellappa · Soheil Feizi)