ICML 2019


Workshop

Learning and Reasoning with Graph-Structured Representations

Ethan Fetaya · Zhiting Hu · Thomas Kipf · Yujia Li · Xiaodan Liang · Renjie Liao · Raquel Urtasun · Hao Wang · Max Welling · Eric Xing · Richard Zemel

Grand Ballroom B

Graph-structured representations are widely used as a natural and powerful way to encode information such as relations between objects or entities, interactions between online users (e.g., in social networks), 3D meshes in computer graphics, multi-agent environments, and molecular structures, to name a few. Learning and reasoning with graph-structured representations is gaining increasing interest in both academia and industry, owing to its fundamental advantages over more traditional unstructured methods in supporting interpretability, causality, and transferability. Recently, there has been a surge of new techniques in the context of deep learning, such as graph neural networks, for learning graph representations and performing reasoning and prediction, and these have achieved impressive progress. However, there is still a long way to go toward satisfactory results in long-range multi-step reasoning, scalable learning with very large graphs, and flexible modeling of graphs in combination with other dimensions such as temporal variation and other modalities such as language and vision. New advances in theoretical foundations, models, and algorithms, as well as empirical discoveries and applications, are therefore all highly desirable.
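For readers less familiar with the class of models in focus, below is a minimal sketch of a single graph neural network message-passing layer. It is an illustrative example only, not the architecture of any particular paper presented at the workshop; the adjacency matrix, feature dimensions, mean-aggregation rule, and weight initialization are all assumptions made for the sake of the example.

import numpy as np

def gnn_layer(adj, features, weights):
    """One message-passing step: each node takes the mean of its
    neighbours' features, then applies a shared linear map and ReLU.
    adj:      (n, n) binary adjacency matrix
    features: (n, d_in) node feature matrix
    weights:  (d_in, d_out) shared weight matrix
    """
    deg = adj.sum(axis=1, keepdims=True)        # node degrees
    agg = adj @ features / np.maximum(deg, 1)   # mean over neighbours
    return np.maximum(agg @ weights, 0)         # linear map + ReLU

# Toy usage: a 4-node path graph with random node features.
rng = np.random.default_rng(0)
adj = np.array([[0, 1, 0, 0],
                [1, 0, 1, 0],
                [0, 1, 0, 1],
                [0, 0, 1, 0]], dtype=float)
x = rng.standard_normal((4, 8))
w = rng.standard_normal((8, 16))
print(gnn_layer(adj, x, w).shape)  # (4, 16)

Stacking several such layers lets information propagate over longer paths in the graph, which is one reason long-range multi-step reasoning and scalability to very large graphs remain open challenges.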

The aims of this workshop are to bring together researchers to dive deeply into some of the most promising methods that are under active exploration today, discuss how we can design new and better benchmarks, identify impactful application domains, encourage discussion, and foster collaboration. The workshop will feature speakers, panelists, and poster presenters from machine perception, natural language processing, multi-agent behavior and communication, meta-learning, planning, and reinforcement learning, covering approaches that include (but are not limited to):

- Deep learning methods on graphs/manifolds/relational data (e.g., graph neural networks)
- Deep generative models of graphs (e.g., for drug design)
- Unsupervised graph/manifold/relational embedding methods (e.g., hyperbolic embeddings)
- Optimization methods for graphs/manifolds/relational data
- Relational or object-level reasoning in machine perception
- Relational/structured inductive biases for reinforcement learning, modeling multi-agent behavior and communication
- Neural-symbolic integration
- Theoretical analysis of capacity/generalization of deep learning models for graphs/manifolds/relational data
- Benchmark datasets and evaluation metrics

