Graph neural networks (GNNs) are a popular class of machine learning models that have been successfully applied to a range of problems. Their major advantage lies in their ability to explicitly incorporate a sparse and discrete dependency structure between data points. Unfortunately, GNNs can only be used when such a graph structure is available. In practice, however, real-world graphs are often noisy and incomplete, or might not be available at all.
With this work, we propose to jointly learn the graph structure and the parameters of graph convolutional networks (GCNs) by approximately solving a bilevel program that learns a discrete probability distribution on the edges of the graph.
This allows one to apply GCNs not only in scenarios where the given graph is incomplete or corrupted but also in those where a graph is not available.
We conduct a series of experiments that analyze the behavior of the proposed method and demonstrate that it outperforms related methods by a significant margin.
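The abstract describes the method only at a high level; the toy PyTorch sketch below illustrates the underlying idea of treating each potential edge as a Bernoulli variable whose parameters are learned in an outer loop while the GCN weights are trained in an inner loop. Everything here (the straight-through gradient trick, the single-step outer update, the helper names `sample_adjacency` and `gcn_forward`, and the random toy data) is an illustrative assumption, not the authors' implementation, which solves the bilevel program with a proper hypergradient computation.

```python
import torch
import torch.nn.functional as F

torch.manual_seed(0)

n, d, c = 30, 16, 3                       # nodes, feature dim, classes
X = torch.randn(n, d)                     # toy node features
y = torch.randint(0, c, (n,))             # toy node labels
train_idx, val_idx = torch.arange(0, 20), torch.arange(20, 30)

theta = torch.zeros(n, n, requires_grad=True)       # edge logits (outer variables)
W = (0.1 * torch.randn(d, c)).requires_grad_(True)  # GCN weights (inner variables)

def sample_adjacency(logits):
    """Sample a symmetric 0/1 adjacency matrix from per-edge Bernoulli
    probabilities; a straight-through estimator keeps it differentiable."""
    probs = torch.sigmoid(logits)
    probs = (probs + probs.t()) / 2
    hard = torch.bernoulli(probs.detach())
    return hard + probs - probs.detach()  # forward: hard sample, backward: soft probs

def gcn_forward(A, X, W):
    """One GCN layer: symmetrically normalize A + I, then apply a linear map."""
    A_hat = A + torch.eye(A.shape[0])
    deg = A_hat.sum(dim=1)
    D_inv_sqrt = torch.diag(deg.clamp(min=1e-6) ** -0.5)
    return D_inv_sqrt @ A_hat @ D_inv_sqrt @ X @ W

inner_opt = torch.optim.Adam([W], lr=1e-2)      # inner problem: GCN weights
outer_opt = torch.optim.Adam([theta], lr=1e-2)  # outer problem: edge distribution

for outer_step in range(50):
    # Inner loop: fit the GCN on graphs sampled from the current edge distribution.
    for _ in range(5):
        A = sample_adjacency(theta).detach()    # graph treated as fixed here
        loss = F.cross_entropy(gcn_forward(A, X, W)[train_idx], y[train_idx])
        inner_opt.zero_grad()
        loss.backward()
        inner_opt.step()

    # Outer step: update the edge logits on the validation loss
    # (a crude one-step stand-in for the paper's hypergradient scheme).
    A = sample_adjacency(theta)
    val_loss = F.cross_entropy(gcn_forward(A, X, W)[val_idx], y[val_idx])
    outer_opt.zero_grad()
    val_loss.backward()
    outer_opt.step()
```

The separation of variables mirrors the bilevel structure in the abstract: the inner optimizer only ever sees a fixed sampled graph, while the outer optimizer adjusts the edge probabilities so that graphs sampled from them lead to low validation loss.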
Author Information
Luca Franceschi (Istituto Italiano di Tecnologia - University College London)
Mathias Niepert (NEC Laboratories Europe)
Massimiliano Pontil (Istituto Italiano di Tecnologia and University College London)
Xiao He (NEC Laboratories Europe)
Related Events (a corresponding poster, oral, or spotlight)
- 2019 Poster: Learning Discrete Structures for Graph Neural Networks »
  Thu. Jun 13th 01:30 -- 04:00 AM, Room Pacific Ballroom #177
More from the Same Authors
- 2022 Poster: Distribution Regression with Sliced Wasserstein Kernels »
  Dimitri Marie Meunier · Massimiliano Pontil · Carlo Ciliberto
- 2022 Spotlight: Distribution Regression with Sliced Wasserstein Kernels »
  Dimitri Marie Meunier · Massimiliano Pontil · Carlo Ciliberto
- 2021 Poster: Best Model Identification: A Rested Bandit Formulation »
  Leonardo Cella · Massimiliano Pontil · Claudio Gentile
- 2021 Spotlight: Best Model Identification: A Rested Bandit Formulation »
  Leonardo Cella · Massimiliano Pontil · Claudio Gentile
- 2020 Poster: On the Iteration Complexity of Hypergradient Computation »
  Riccardo Grazzi · Luca Franceschi · Massimiliano Pontil · Saverio Salzo
- 2020 Poster: Meta-learning with Stochastic Linear Bandits »
  Leonardo Cella · Alessandro Lazaric · Massimiliano Pontil
- 2019 Poster: State-Regularized Recurrent Neural Networks »
  Cheng Wang · Mathias Niepert
- 2019 Oral: State-Regularized Recurrent Neural Networks »
  Cheng Wang · Mathias Niepert
- 2019 Poster: Leveraging Low-Rank Relations Between Surrogate Tasks in Structured Prediction »
  Giulia Luise · Dimitrios Stamos · Massimiliano Pontil · Carlo Ciliberto
- 2019 Oral: Leveraging Low-Rank Relations Between Surrogate Tasks in Structured Prediction »
  Giulia Luise · Dimitrios Stamos · Massimiliano Pontil · Carlo Ciliberto
- 2018 Poster: Bilevel Programming for Hyperparameter Optimization and Meta-Learning »
  Luca Franceschi · Paolo Frasconi · Saverio Salzo · Riccardo Grazzi · Massimiliano Pontil
- 2018 Oral: Bilevel Programming for Hyperparameter Optimization and Meta-Learning »
  Luca Franceschi · Paolo Frasconi · Saverio Salzo · Riccardo Grazzi · Massimiliano Pontil
- 2017 Poster: Forward and Reverse Gradient-Based Hyperparameter Optimization »
  Luca Franceschi · Michele Donini · Paolo Frasconi · Massimiliano Pontil
- 2017 Talk: Forward and Reverse Gradient-Based Hyperparameter Optimization »
  Luca Franceschi · Michele Donini · Paolo Frasconi · Massimiliano Pontil