Due to the over-smoothing issue, most existing graph neural networks can capture only limited dependencies with their inherently finite aggregation layers. To overcome this limitation, we propose a new kind of graph convolution, called Graph Implicit Nonlinear Diffusion (GIND), which implicitly has access to infinite hops of neighbors while adaptively aggregating features with nonlinear diffusion to prevent over-smoothing. Notably, we show that the learned representation can be formalized as the minimizer of an explicit convex optimization objective. With this property, we can theoretically characterize the equilibrium of GIND from an optimization perspective. More interestingly, we can induce new structural variants by modifying the corresponding optimization objective. Specifically, we can embed prior properties into the equilibrium, as well as introduce skip connections to promote training stability. Extensive experiments show that GIND captures long-range dependencies well and performs strongly on both homophilic and heterophilic graphs with nonlinear diffusion. Moreover, the optimization-induced variants of our model boost performance and improve training stability and efficiency as well. As a result, GIND obtains significant improvements on both node-level and graph-level tasks.
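The implicit model described in the abstract computes node representations as the fixed point (equilibrium) of a nonlinear diffusion, rather than by stacking a finite number of aggregation layers. The sketch below is only a minimal illustration of that idea, not the authors' GIND implementation: the specific update Z = (1 − α)·X + α·S·tanh(Z·W), the tanh nonlinearity, the damping factor `alpha`, and the weight matrix `W` are all assumptions chosen so that the iteration is a contraction and converges.

```python
import numpy as np

def normalized_adjacency(A):
    """Symmetric normalization D^{-1/2} A D^{-1/2} of an adjacency matrix."""
    d = A.sum(axis=1)
    d_inv_sqrt = 1.0 / np.sqrt(np.maximum(d, 1e-12))
    return A * d_inv_sqrt[:, None] * d_inv_sqrt[None, :]

def implicit_diffusion(X, A, W, alpha=0.5, tol=1e-8, max_iter=1000):
    """Solve Z = (1 - alpha) * X + alpha * S @ tanh(Z @ W) by fixed-point
    iteration. With tanh (1-Lipschitz), ||S|| <= 1, and alpha * ||W|| < 1,
    the map is contractive, so the equilibrium exists and the loop converges.
    The equilibrium mixes the input features X with arbitrarily many hops of
    diffused neighbor information, without an explicit layer stack."""
    S = normalized_adjacency(A)
    Z = X.copy()
    for _ in range(max_iter):
        Z_new = (1 - alpha) * X + alpha * S @ np.tanh(Z @ W)
        if np.linalg.norm(Z_new - Z) < tol:
            return Z_new
        Z = Z_new
    return Z

# Tiny usage example: a 3-node path graph with 2-d features.
A = np.array([[0., 1., 0.],
              [1., 0., 1.],
              [0., 1., 0.]])
X = np.array([[1., 0.],
              [0., 1.],
              [1., 1.]])
W = 0.5 * np.eye(2)  # small weights keep the iteration contractive
Z = implicit_diffusion(X, A, W)
```

In a trained model, `W` would be learned and the fixed point differentiated implicitly (e.g. via the implicit function theorem), which is what lets such equilibrium models stay memory-efficient regardless of the effective depth.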
Author Information
Qi Chen (Peking University)
Yifei Wang (Peking University)
Yisen Wang (Peking University)
Jiansheng Yang (Peking University)
Zhouchen Lin (Peking University)
Related Events (a corresponding poster, oral, or spotlight)
- 2022 Poster: Optimization-Induced Graph Implicit Nonlinear Diffusion
  Tue. Jul 19th through Wed. Jul 20th, Room Hall E #426
More from the Same Authors
- 2021: Adversarial Interaction Attacks: Fooling AI to Misinterpret Human Intentions
  Nodens Koren · Xingjun Ma · Qiuhong Ke · Yisen Wang · James Bailey
- 2021: Demystifying Adversarial Training via A Unified Probabilistic Framework
  Yisen Wang · Jiansheng Yang · Zhouchen Lin · Yifei Wang
- 2023 Poster: Rethinking Weak Supervision in Helping Contrastive Representation Learning
  Jingyi Cui · Weiran Huang · Yifei Wang · Yisen Wang
- 2023 Poster: On the Generalization of Multi-modal Contrastive Learning
  Qi Zhang · Yifei Wang · Yisen Wang
- 2022 Poster: PDO-s3DCNNs: Partial Differential Operator Based Steerable 3D CNNs
  Zhengyang Shen · Tao Hong · Qi She · Jinwen Ma · Zhouchen Lin
- 2022 Spotlight: PDO-s3DCNNs: Partial Differential Operator Based Steerable 3D CNNs
  Zhengyang Shen · Tao Hong · Qi She · Jinwen Ma · Zhouchen Lin
- 2022 Poster: Certified Adversarial Robustness Under the Bounded Support Set
  Yiwen Kou · Qinyuan Zheng · Yisen Wang
- 2022 Poster: Kill a Bird with Two Stones: Closing the Convergence Gaps in Non-Strongly Convex Optimization by Directly Accelerated SVRG with Double Compensation and Snapshots
  Yuanyuan Liu · Fanhua Shang · Weixin An · Hongying Liu · Zhouchen Lin
- 2022 Spotlight: Certified Adversarial Robustness Under the Bounded Support Set
  Yiwen Kou · Qinyuan Zheng · Yisen Wang
- 2022 Spotlight: Kill a Bird with Two Stones: Closing the Convergence Gaps in Non-Strongly Convex Optimization by Directly Accelerated SVRG with Double Compensation and Snapshots
  Yuanyuan Liu · Fanhua Shang · Weixin An · Hongying Liu · Zhouchen Lin
- 2022 Poster: Restarted Nonconvex Accelerated Gradient Descent: No More Polylogarithmic Factor in the $O(\epsilon^{-7/4})$ Complexity
  Huan Li · Zhouchen Lin
- 2022 Poster: CerDEQ: Certifiable Deep Equilibrium Model
  Mingjie Li · Yisen Wang · Zhouchen Lin
- 2022 Poster: G$^2$CN: Graph Gaussian Convolution Networks with Concentrated Graph Filters
  Mingjie Li · Xiaojun Guo · Yifei Wang · Yisen Wang · Zhouchen Lin
- 2022 Spotlight: Restarted Nonconvex Accelerated Gradient Descent: No More Polylogarithmic Factor in the $O(\epsilon^{-7/4})$ Complexity
  Huan Li · Zhouchen Lin
- 2022 Spotlight: CerDEQ: Certifiable Deep Equilibrium Model
  Mingjie Li · Yisen Wang · Zhouchen Lin
- 2022 Spotlight: G$^2$CN: Graph Gaussian Convolution Networks with Concentrated Graph Filters
  Mingjie Li · Xiaojun Guo · Yifei Wang · Yisen Wang · Zhouchen Lin
- 2021: Discussion Panel #1
  Hang Su · Matthias Hein · Liwei Wang · Sven Gowal · Jan Hendrik Metzen · Henry Liu · Yisen Wang
- 2021 Poster: GBHT: Gradient Boosting Histogram Transform for Density Estimation
  Jingyi Cui · Hanyuan Hang · Yisen Wang · Zhouchen Lin
- 2021 Poster: Leveraged Weighted Loss for Partial Label Learning
  Hongwei Wen · Jingyi Cui · Hanyuan Hang · Jiabin Liu · Yisen Wang · Zhouchen Lin
- 2021 Spotlight: GBHT: Gradient Boosting Histogram Transform for Density Estimation
  Jingyi Cui · Hanyuan Hang · Yisen Wang · Zhouchen Lin
- 2021 Oral: Leveraged Weighted Loss for Partial Label Learning
  Hongwei Wen · Jingyi Cui · Hanyuan Hang · Jiabin Liu · Yisen Wang · Zhouchen Lin
- 2021 Poster: Can Subnetwork Structure Be the Key to Out-of-Distribution Generalization?
  Dinghuai Zhang · Kartik Ahuja · Yilun Xu · Yisen Wang · Aaron Courville
- 2021 Oral: Can Subnetwork Structure Be the Key to Out-of-Distribution Generalization?
  Dinghuai Zhang · Kartik Ahuja · Yilun Xu · Yisen Wang · Aaron Courville
- 2021 Poster: Uncertainty Principles of Encoding GANs
  Ruili Feng · Zhouchen Lin · Jiapeng Zhu · Deli Zhao · Jingren Zhou · Zheng-Jun Zha
- 2021 Spotlight: Uncertainty Principles of Encoding GANs
  Ruili Feng · Zhouchen Lin · Jiapeng Zhu · Deli Zhao · Jingren Zhou · Zheng-Jun Zha
- 2020 Poster: PDO-eConvs: Partial Differential Operator Based Equivariant Convolutions
  Zhengyang Shen · Lingshen He · Zhouchen Lin · Jinwen Ma
- 2020 Poster: Boosted Histogram Transform for Regression
  Yuchao Cai · Hanyuan Hang · Hanfang Yang · Zhouchen Lin
- 2020 Poster: Implicit Euler Skip Connections: Enhancing Adversarial Robustness via Numerical Stability
  Mingjie Li · Lingshen He · Zhouchen Lin
- 2020 Poster: Maximum-and-Concatenation Networks
  Xingyu Xie · Hao Kong · Jianlong Wu · Wayne Zhang · Guangcan Liu · Zhouchen Lin
- 2019 Poster: Differentiable Linearized ADMM
  Xingyu Xie · Jianlong Wu · Guangcan Liu · Zhisheng Zhong · Zhouchen Lin
- 2019 Oral: Differentiable Linearized ADMM
  Xingyu Xie · Jianlong Wu · Guangcan Liu · Zhisheng Zhong · Zhouchen Lin