Certifiably Robust Multi-Agent Reinforcement Learning against Adversarial Communication
Yanchao Sun · Ruijie Zheng · Parisa Hassanzadeh · Yongyuan Liang · Soheil Feizi · Sumitra Ganesh · Furong Huang
Communication is important in many multi-agent reinforcement learning (MARL) problems, allowing agents to share information and make better decisions. However, when trained communicative agents are deployed in real-world applications where noise and potential attackers exist, the safety of communication-based policies becomes a severe yet underexplored issue. Specifically, if communication messages are manipulated by malicious attackers, agents relying on untrustworthy communication may take unsafe actions that lead to catastrophic consequences. Therefore, it is crucial to ensure that agents are not misled by corrupted communication while still benefiting from benign communication. In this work, we consider an environment with $N$ agents, where the attacker may arbitrarily change the communication from any $C<\frac{N-1}{2}$ agents to a victim agent. For this strong threat model, we propose a certifiable defense by constructing a message-ensemble policy that aggregates multiple randomly ablated message sets. Theoretical analysis shows that this message-ensemble policy can utilize benign communication while being certifiably robust to adversarial communication, regardless of the attacking algorithm. Experiments in multiple environments verify that our defense significantly improves the robustness of trained policies against various types of attacks.
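
The abstract only summarizes the defense at a high level. Below is a minimal, illustrative Python sketch of one way a message-ensemble decision over randomly ablated message subsets could be organized; it is not the authors' implementation. The function and parameter names (ensemble_action, victim_policy, k, num_samples) are hypothetical placeholders.

```python
import itertools
import random
from collections import Counter

def ensemble_action(victim_policy, messages, k, num_samples=None):
    """Illustrative sketch of a message-ensemble decision (assumed interface).

    victim_policy: callable mapping a tuple of messages to a discrete action
        (stands in for the victim agent's trained base policy).
    messages: list of the N-1 messages received from the other agents.
    k: size of each ablated message subset fed to the base policy.
    num_samples: optional cap on how many subsets to evaluate.
    """
    # Enumerate (or randomly sample) size-k subsets of the received messages.
    subsets = list(itertools.combinations(range(len(messages)), k))
    if num_samples is not None and num_samples < len(subsets):
        subsets = random.sample(subsets, num_samples)

    # Run the base policy on each ablated message set and collect its votes.
    votes = Counter()
    for idx in subsets:
        subset = tuple(messages[i] for i in idx)
        votes[victim_policy(subset)] += 1

    # Aggregate by majority vote; the paper's certification argument bounds
    # how many of these votes an attacker controlling fewer than (N-1)/2
    # message channels can influence.
    return votes.most_common(1)[0][0]
```

The key design point this sketch illustrates is that each base-policy evaluation sees only a subset of the incoming messages, so any single corrupted message can affect only a bounded fraction of the votes being aggregated.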

Author Information

Yanchao Sun (University of Maryland, College Park)
Ruijie Zheng (University of Maryland, College Park)
Parisa Hassanzadeh (JPMorgan AI Research)
Yongyuan Liang (Sun Yat-sen University)
Soheil Feizi (University of Maryland)
Sumitra Ganesh
Furong Huang (University of Maryland)

Furong Huang is an Assistant Professor in the Department of Computer Science at the University of Maryland. She works on statistical and trustworthy machine learning, reinforcement learning, graph neural networks, deep learning theory, and federated learning, with specialization in domain adaptation, algorithmic robustness, and fairness. Furong is a recipient of the MIT Technology Review Innovators Under 35 Asia Pacific Award, the MLconf Industry Impact Research Award, the NSF CRII Award, the Adobe Faculty Research Award, and three JP Morgan Faculty Research Awards, and was a finalist for AI Researcher of the Year (AI in Research) at the Women in AI Awards North America. She received her Ph.D. in electrical engineering and computer science from UC Irvine in 2016, after which she spent one year as a postdoctoral researcher at Microsoft Research NYC.