Recent deep learning approaches to representation learning on graphs follow a neighborhood aggregation procedure. We analyze some important properties of these models and propose a strategy to overcome their limitations. In particular, the range of "neighboring" nodes that a node's representation draws from strongly depends on the graph structure, analogous to the spread of a random walk. To adapt to local neighborhood properties and tasks, we explore jumping knowledge (JK) networks, an architecture that flexibly leverages, for each node, different neighborhood ranges to enable better structure-aware representation. In a number of experiments on social, bioinformatics, and citation networks, we demonstrate that our model achieves state-of-the-art performance. Furthermore, combining the JK framework with models like Graph Convolutional Networks, GraphSAGE, and Graph Attention Networks consistently improves those models' performance.
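As a rough illustration of the JK idea, the sketch below stacks a few neighborhood-aggregation layers and then combines each node's per-layer representations at the output, here via concatenation or element-wise max-pooling (the paper also studies an LSTM-attention aggregator). The `SimpleGraphLayer`, the dense-adjacency mean aggregation, and all dimensions are illustrative assumptions for this sketch, not the paper's exact models.

```python
# Minimal sketch of a jumping knowledge (JK) network in PyTorch.
# Assumes a dense 0/1 adjacency matrix and simple mean-aggregation layers;
# the paper combines JK with GCN, GraphSAGE, and GAT layers instead.
import torch
import torch.nn as nn


class SimpleGraphLayer(nn.Module):
    """One neighborhood-aggregation step: average neighbor features, then a linear map."""

    def __init__(self, in_dim, out_dim):
        super().__init__()
        self.linear = nn.Linear(in_dim, out_dim)

    def forward(self, x, adj):
        # adj: (N, N) adjacency with self-loops; row-normalize to mean-aggregate.
        deg = adj.sum(dim=1, keepdim=True).clamp(min=1.0)
        return torch.relu(self.linear((adj @ x) / deg))


class JKNet(nn.Module):
    """Stacks graph layers and "jumps" every intermediate representation to the
    output, combining them per node by concatenation or element-wise max."""

    def __init__(self, in_dim, hidden_dim, num_layers, mode="cat"):
        super().__init__()
        dims = [in_dim] + [hidden_dim] * num_layers
        self.layers = nn.ModuleList(
            [SimpleGraphLayer(d_in, d_out) for d_in, d_out in zip(dims, dims[1:])]
        )
        self.mode = mode

    def forward(self, x, adj):
        layer_outputs = []
        for layer in self.layers:
            x = layer(x, adj)
            layer_outputs.append(x)  # representation after k hops, k = 1..num_layers
        if self.mode == "cat":
            # Fixed combination: concatenate all neighborhood ranges.
            return torch.cat(layer_outputs, dim=-1)
        # Adaptive combination: per node and feature, take the max over layers.
        return torch.stack(layer_outputs, dim=0).max(dim=0).values


# Tiny usage example on a random 5-node graph.
torch.manual_seed(0)
adj = (torch.rand(5, 5) > 0.5).float()
adj = ((adj + adj.t()) > 0).float()   # symmetrize
adj.fill_diagonal_(1.0)               # add self-loops
x = torch.randn(5, 8)
model = JKNet(in_dim=8, hidden_dim=16, num_layers=3, mode="max")
print(model(x, adj).shape)  # torch.Size([5, 16])
```

Concatenation fixes the same mixture of ranges for every node, while max-pooling lets each node (and each feature coordinate) select its most informative range, which is the per-node adaptivity the abstract refers to.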
Author Information
Keyulu Xu (MIT)
Chengtao Li (MIT)
Yonglong Tian (MIT)
Tomohiro Sonobe (National Institute of Informatics)
Ken-ichi Kawarabayashi (National Institute of Informatics)
Stefanie Jegelka (MIT)
Related Events (a corresponding poster, oral, or spotlight)

- 2018 Oral: Representation Learning on Graphs with Jumping Knowledge Networks
  Wed. Jul 11th 02:20 -- 02:40 PM, Room A5
More from the Same Authors

- 2021 Poster: Information Obfuscation of Graph Neural Networks
  Peiyuan Liao · Han Zhao · Keyulu Xu · Tommi Jaakkola · Geoff Gordon · Stefanie Jegelka · Ruslan Salakhutdinov
- 2021 Poster: Optimization of Graph Neural Networks: Implicit Acceleration by Skip Connections and More Depth
  Keyulu Xu · Mozhi Zhang · Stefanie Jegelka · Kenji Kawaguchi
- 2021 Poster: GraphNorm: A Principled Approach to Accelerating Graph Neural Network Training
  Tianle Cai · Shengjie Luo · Keyulu Xu · Di He · Tie-Yan Liu · Liwei Wang
- 2021 Spotlight: GraphNorm: A Principled Approach to Accelerating Graph Neural Network Training
  Tianle Cai · Shengjie Luo · Keyulu Xu · Di He · Tie-Yan Liu · Liwei Wang
- 2021 Spotlight: Optimization of Graph Neural Networks: Implicit Acceleration by Skip Connections and More Depth
  Keyulu Xu · Mozhi Zhang · Stefanie Jegelka · Kenji Kawaguchi
- 2021 Spotlight: Information Obfuscation of Graph Neural Networks
  Peiyuan Liao · Han Zhao · Keyulu Xu · Tommi Jaakkola · Geoff Gordon · Stefanie Jegelka · Ruslan Salakhutdinov
- 2021 Town Hall: Town Hall
  John Langford · Marina Meila · Tong Zhang · Le Song · Stefanie Jegelka · Csaba Szepesvari
- 2020 : Negative Dependence and Sampling
  Stefanie Jegelka
- 2020 Poster: Strength from Weakness: Fast Learning Using Weak Supervision
  Joshua Robinson · Stefanie Jegelka · Suvrit Sra
- 2019 Poster: Learning Generative Models across Incomparable Spaces
  Charlotte Bunne · David Alvarez-Melis · Andreas Krause · Stefanie Jegelka
- 2019 Oral: Learning Generative Models across Incomparable Spaces
  Charlotte Bunne · David Alvarez-Melis · Andreas Krause · Stefanie Jegelka
- 2018 Poster: Causal Bandits with Propagating Inference
  Akihiro Yabe · Daisuke Hatano · Hanna Sumita · Shinji Ito · Naonori Kakimura · Takuro Fukunaga · Ken-ichi Kawarabayashi
- 2018 Oral: Causal Bandits with Propagating Inference
  Akihiro Yabe · Daisuke Hatano · Hanna Sumita · Shinji Ito · Naonori Kakimura · Takuro Fukunaga · Ken-ichi Kawarabayashi
- 2017 Poster: Max-value Entropy Search for Efficient Bayesian Optimization
  Zi Wang · Stefanie Jegelka
- 2017 Poster: Robust Budget Allocation via Continuous Submodular Functions
  Matthew J Staib · Stefanie Jegelka
- 2017 Talk: Robust Budget Allocation via Continuous Submodular Functions
  Matthew J Staib · Stefanie Jegelka
- 2017 Talk: Max-value Entropy Search for Efficient Bayesian Optimization
  Zi Wang · Stefanie Jegelka
- 2017 Poster: Batched High-dimensional Bayesian Optimization via Structural Kernel Learning
  Zi Wang · Chengtao Li · Stefanie Jegelka · Pushmeet Kohli
- 2017 Talk: Batched High-dimensional Bayesian Optimization via Structural Kernel Learning
  Zi Wang · Chengtao Li · Stefanie Jegelka · Pushmeet Kohli