Workshop
Sat Jun 15 08:30 AM -- 06:00 PM (PDT) @ Grand Ballroom B
Learning and Reasoning with Graph-Structured Representations
Ethan Fetaya · Zhiting Hu · Thomas Kipf · Yujia Li · Xiaodan Liang · Renjie Liao · Raquel Urtasun · Hao Wang · Max Welling · Eric Xing · Richard Zemel

Graph-structured representations are widely used as a natural and powerful way to encode information such as relations between objects or entities, interactions between online users (e.g., in social networks), 3D meshes in computer graphics, multi-agent environments, and molecular structures, to name a few. Learning and reasoning with graph-structured representations is gaining increasing interest in both academia and industry, due to its fundamental advantages over more traditional unstructured methods in supporting interpretability, causality, and transferability. Recently, there has been a surge of new techniques in the context of deep learning, such as graph neural networks, for learning graph representations and performing reasoning and prediction, and these have achieved impressive progress. However, there is still a long way to go toward satisfactory results in long-range multi-step reasoning, scalable learning with very large graphs, and flexible modeling of graphs in combination with other dimensions such as temporal variation and other modalities such as language and vision. New advances in theoretical foundations, models, and algorithms, as well as empirical discoveries and applications, are therefore all highly desirable.
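
To make the graph neural network idea concrete, the snippet below is a minimal sketch of one GCN-style message-passing layer, assuming a dense adjacency matrix and NumPy only; the function and variable names are illustrative and not taken from any speaker's implementation.

```python
# Minimal sketch of one message-passing layer of a graph neural network
# (GCN-style). Assumes a small, dense adjacency matrix; illustrative only.
import numpy as np

def gcn_layer(adjacency: np.ndarray, features: np.ndarray, weights: np.ndarray) -> np.ndarray:
    """One graph-convolution step: aggregate neighbor features, then transform."""
    n = adjacency.shape[0]
    a_hat = adjacency + np.eye(n)                  # add self-loops
    deg = a_hat.sum(axis=1)
    d_inv_sqrt = np.diag(1.0 / np.sqrt(deg))       # symmetric normalization
    a_norm = d_inv_sqrt @ a_hat @ d_inv_sqrt
    return np.maximum(a_norm @ features @ weights, 0.0)  # ReLU non-linearity

# Toy usage: a 3-node path graph with 4-dimensional node features.
A = np.array([[0, 1, 0],
              [1, 0, 1],
              [0, 1, 0]], dtype=float)
X = np.random.randn(3, 4)
W = np.random.randn(4, 8)
H = gcn_layer(A, X, W)   # node embeddings of shape (3, 8)
```

Stacking several such layers lets information propagate over multi-hop neighborhoods, which is the basic mechanism behind many of the approaches discussed in this workshop.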

The aim of this workshop is to bring together researchers to dive deeply into some of the most promising methods under active exploration today, discuss how to design new and better benchmarks, identify impactful application domains, and foster discussion and collaboration. The workshop will feature speakers, panelists, and poster presenters from machine perception, natural language processing, multi-agent behavior and communication, meta-learning, planning, and reinforcement learning, covering approaches that include (but are not limited to):

- Deep learning methods on graphs/manifolds/relational data (e.g., graph neural networks)
- Deep generative models of graphs (e.g., for drug design)
- Unsupervised graph/manifold/relational embedding methods (e.g., hyperbolic embeddings)
- Optimization methods for graphs/manifolds/relational data
- Relational or object-level reasoning in machine perception
- Relational/structured inductive biases for reinforcement learning, modeling multi-agent behavior and communication
- Neural-symbolic integration
- Theoretical analysis of capacity/generalization of deep learning models for graphs/manifolds/relational data
- Benchmark datasets and evaluation metrics

Opening remarks (NA)
William L. Hamilton, McGill University (Invited talk)
Evolutionary Representation Learning for Dynamic Graphs; Aynaz Taheri and Tanya Berger-Wolf (Contributed talk)
Poster spotlights #1 (Spotlight talks)
Morning poster session and coffee break (Posters)
Marwin Segler, Benevolent AI (Invited talk)
Yaron Lipman, Weizmann Institute of Science (Invited talk)
PAN: Path Integral Based Convolution for Deep Graph Neural Networks; Zheng Ma, Ming Li and Yu Guang Wang (Contributed talk)
Poster spotlights #2 (Spotlight talks)
Lunch break (Break)
Alex Polozov, Microsoft Research (Invited talk)
Sanja Fidler, University of Toronto (Invited talk)
On Graph Classification Networks, Datasets and Baselines; Enxhell Luzhnica, Ben Day and Pietro Liò (Contributed talk)
Poster spotlights #3 (Spotlight talks)
Afternoon poster session and coffee break (Posters)
Caroline Uhler, MIT (Invited talk)
Alexander Schwing, University of Illinois at Urbana-Champaign (Invited talk)