Recent findings have shown that multiple graph learning models, such as graph classification and graph matching, are highly vulnerable to adversarial attacks: small perturbations to graph structures and node attributes can cause model failures. Existing defense techniques typically target specific attacks on particular multiple graph learning tasks. This paper proposes an attack-agnostic, graph-adaptive 1-Lipschitz neural network, ERNN, that improves the robustness of deep multiple graph learning while retaining remarkable expressive power. A K_l-Lipschitz Weibull activation function is designed to enforce a gradient norm of K_l at layer l. Nearest matrix orthogonalization and polar decomposition techniques are utilized to constrain the weight norm to 1/K_l while keeping the norm-constrained weight close to the original weight. A theoretical analysis derives lower and upper bounds on the feasible K_l under the 1-Lipschitz constraint. The combination of norm-constrained weights and the activation function yields a 1-Lipschitz neural network for expressive and robust multiple graph learning.
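The weight-constraint idea in the abstract can be illustrated with a minimal NumPy sketch (an illustrative assumption, not the paper's actual ERNN implementation): the orthogonal polar factor of a weight matrix is its nearest orthogonal matrix in Frobenius norm, and rescaling that factor by 1/K_l fixes the spectral norm at 1/K_l, so composing it with a K_l-Lipschitz activation gives a layer whose end-to-end Lipschitz constant is at most 1. The function name and the choice of K_l below are hypothetical.

```python
# Minimal sketch (not the authors' code): project a layer weight onto a
# scaled orthogonal matrix via polar decomposition, so that a K_l-Lipschitz
# activation composed with a (1/K_l)-norm weight is 1-Lipschitz overall.
import numpy as np

def scaled_polar_projection(W, K_l):
    """Return (1/K_l) * U_p, where U_p is the orthogonal polar factor of W.

    From the SVD W = U S Vt, the polar factor U_p = U @ Vt is the nearest
    orthogonal matrix to W in Frobenius norm, so the rescaled weight stays
    close to W while its spectral norm equals exactly 1/K_l.
    """
    U, _, Vt = np.linalg.svd(W, full_matrices=False)
    return (U @ Vt) / K_l

rng = np.random.default_rng(0)
W = rng.normal(size=(64, 64))
K_l = 2.0                                  # hypothetical per-layer Lipschitz constant
W_hat = scaled_polar_projection(W, K_l)
print(np.linalg.norm(W_hat, ord=2))        # spectral norm == 1/K_l = 0.5
```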
Author Information
Xin Zhao (Auburn University)
Zeru Zhang (Auburn University)
Zijie Zhang (Auburn University)
Lingfei Wu (IBM Research AI)
Jiayin Jin (Auburn University)
Yang Zhou (Auburn University)
Ruoming Jin (Kent State University)
Dejing Dou (" University of Oregon, USA")
Da Yan (University of Alabama at Birmingham)
Related Events (a corresponding poster, oral, or spotlight)
- 2021 Poster: Expressive 1-Lipschitz Neural Networks for Robust Multiple Graph Learning against Adversarial Attacks »
  Thu. Jul 22nd 04:00 -- 06:00 PM
More from the Same Authors
- 2022 Poster: Robust Meta-learning with Sampling Noise and Label Noise via Eigen-Reptile »
  Dong Chen · Lingfei Wu · Siliang Tang · Xiao Yun · Bo Long · Yueting Zhuang
- 2022 Poster: Input-agnostic Certified Group Fairness via Gaussian Parameter Smoothing »
  Jiayin Jin · Zeru Zhang · Yang Zhou · Lingfei Wu
- 2022 Spotlight: Input-agnostic Certified Group Fairness via Gaussian Parameter Smoothing »
  Jiayin Jin · Zeru Zhang · Yang Zhou · Lingfei Wu
- 2022 Spotlight: Robust Meta-learning with Sampling Noise and Label Noise via Eigen-Reptile »
  Dong Chen · Lingfei Wu · Siliang Tang · Xiao Yun · Bo Long · Yueting Zhuang
- 2022 Poster: Accelerated Federated Learning with Decoupled Adaptive Optimization »
  Jiayin Jin · Jiaxiang Ren · Yang Zhou · Lingjuan Lyu · Ji Liu · Dejing Dou
- 2022 Spotlight: Accelerated Federated Learning with Decoupled Adaptive Optimization »
  Jiayin Jin · Jiaxiang Ren · Yang Zhou · Lingjuan Lyu · Ji Liu · Dejing Dou
- 2021: Invited Talk 8: Deep Learning on Graphs for Natural Language Processing »
  Lingfei Wu
- 2021 Poster: Integrated Defense for Resilient Graph Matching »
  Jiaxiang Ren · Zijie Zhang · Jiayin Jin · Xin Zhao · Sixing Wu · Yang Zhou · Yelong Shen · Tianshi Che · Ruoming Jin · Dejing Dou
- 2021 Spotlight: Integrated Defense for Resilient Graph Matching »
  Jiaxiang Ren · Zijie Zhang · Jiayin Jin · Xin Zhao · Sixing Wu · Yang Zhou · Yelong Shen · Tianshi Che · Ruoming Jin · Dejing Dou
- 2020 Poster: Scalable Differential Privacy with Certified Robustness in Adversarial Learning »
  Hai Phan · My T. Thai · Han Hu · Ruoming Jin · Tong Sun · Dejing Dou