A recent study has shown that graph matching models are vulnerable to adversarial manipulation of their inputs intended to cause mismatching. However, a comprehensive solution for further enhancing the robustness of graph matching against adversarial attacks is still lacking. In this paper, we identify and study two types of topology attacks unique to graph matching: inter-graph dispersion attacks and intra-graph assembly attacks. We propose an integrated defense model, IDRGM, for resilient graph matching, with two novel defense techniques that counter these two attacks simultaneously. To tackle inter-graph dispersion attacks, we propose a detection technique based on simplexes inscribed in the hyperspheres spanned by multiple matched nodes, in which the distances among the matched nodes across graphs are maximized to form regular simplexes. To defend against intra-graph assembly attacks, we develop a node separation method based on phase-type distributions and maximum likelihood estimation, which estimates the distribution of perturbed graphs and separates the nodes within the same graph over a wide space, so that interference from the similar neighbors of perturbed nodes is significantly reduced. We evaluate the robustness of our IDRGM model on real datasets against state-of-the-art algorithms.
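As a rough, hypothetical illustration of the simplex geometry the abstract alludes to (matched nodes across graphs lying near a regular simplex inscribed in a hypersphere), the sketch below measures how far a set of matched-node embeddings deviates from an equidistant configuration. The function `regular_simplex_gap` and its embedding inputs are assumptions made for illustration only; this is not the authors' IDRGM implementation.

```python
import numpy as np

def regular_simplex_gap(matched_embeddings, eps=1e-12):
    """Illustrative geometry check (hypothetical, not the IDRGM code).

    Given the embeddings of one node matched across k graphs, estimate the
    hypersphere centered at their mean and compare the observed pairwise
    distances with the edge length of a regular (k-1)-simplex inscribed in
    that hypersphere. A large relative gap suggests the matched nodes have
    been dispersed away from an equidistant configuration.
    """
    X = np.asarray(matched_embeddings, dtype=float)   # shape (k, d)
    k = X.shape[0]

    # Estimate the hypersphere radius from the distances to the centroid.
    center = X.mean(axis=0)
    radius = np.linalg.norm(X - center, axis=1).mean() + eps

    # Pairwise distances among the k matched nodes (upper triangle only).
    diffs = X[:, None, :] - X[None, :, :]
    dists = np.linalg.norm(diffs, axis=-1)
    pairwise = dists[np.triu_indices(k, k=1)]

    # Edge length of a regular simplex with k vertices on a sphere of this radius.
    ideal_side = radius * np.sqrt(2.0 * k / (k - 1))

    # Maximum relative deviation from the ideal equidistant configuration.
    return np.abs(pairwise - ideal_side).max() / ideal_side
```

A value near 0 indicates the matched nodes are close to a regular simplex; in a defense pipeline one might, hypothetically, flag matched sets whose gap exceeds a tuned threshold as candidates for an inter-graph dispersion attack.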
Author Information
Jiaxiang Ren (Auburn University)
Zijie Zhang (Auburn University)
Jiayin Jin (Auburn University)
Xin Zhao (Auburn University)
Sixing Wu (Peking University)
Yang Zhou (Auburn University)
Yelong Shen (Microsoft Dynamics 365 AI)
Tianshi Che (Auburn University)
Ruoming Jin (Kent State University)
Dejing Dou (" University of Oregon, USA")
Related Events (a corresponding poster, oral, or spotlight)
- 2021 Spotlight: Integrated Defense for Resilient Graph Matching
  Fri. Jul 23rd 01:35 -- 01:40 AM
More from the Same Authors
- 2023 Poster: Fast Federated Machine Unlearning with Nonlinear Functional Theory
  Tianshi Che · Yang Zhou · Zijie Zhang · Lingjuan Lyu · Ji Liu · Da Yan · Dejing Dou · Jun Huan
- 2023 Poster: Dimension-independent Certified Neural Network Watermarks via Mollifier Smoothing
  Jiaxiang Ren · Jiayin Jin · Yang Zhou · Lingjuan Lyu · Da Yan
- 2022 Poster: Input-agnostic Certified Group Fairness via Gaussian Parameter Smoothing
  Jiayin Jin · Zeru Zhang · Yang Zhou · Lingfei Wu
- 2022 Spotlight: Input-agnostic Certified Group Fairness via Gaussian Parameter Smoothing
  Jiayin Jin · Zeru Zhang · Yang Zhou · Lingfei Wu
- 2022 Poster: Accelerated Federated Learning with Decoupled Adaptive Optimization
  Jiayin Jin · Jiaxiang Ren · Yang Zhou · Lingjuan Lyu · Ji Liu · Dejing Dou
- 2022 Spotlight: Accelerated Federated Learning with Decoupled Adaptive Optimization
  Jiayin Jin · Jiaxiang Ren · Yang Zhou · Lingjuan Lyu · Ji Liu · Dejing Dou
- 2021 Poster: Expressive 1-Lipschitz Neural Networks for Robust Multiple Graph Learning against Adversarial Attacks
  Xin Zhao · Zeru Zhang · Zijie Zhang · Lingfei Wu · Jiayin Jin · Yang Zhou · Ruoming Jin · Dejing Dou · Da Yan
- 2021 Spotlight: Expressive 1-Lipschitz Neural Networks for Robust Multiple Graph Learning against Adversarial Attacks
  Xin Zhao · Zeru Zhang · Zijie Zhang · Lingfei Wu · Jiayin Jin · Yang Zhou · Ruoming Jin · Dejing Dou · Da Yan
- 2020 Poster: Scalable Differential Privacy with Certified Robustness in Adversarial Learning
  Hai Phan · My T. Thai · Han Hu · Ruoming Jin · Tong Sun · Dejing Dou