Expo Talk Panel
Hall A8

This presentation will cover a variety of work being done at Google at the intersection of graph representation learning and AI. It will provide a general overview of graph neural networks (GNNs) and LLMs, and then go into four areas that we think will be of interest to a general ML audience:

1. Encoding of Graphs as Text for GenAI Models [1]. This will cover insights on how best to encode structured data, such as graphs, for LLMs and other GenAI models (a minimal sketch follows this list).
2. Graph Structure Learning [2,3]. This will cover work on learning the best graph structure for a given dataset (see the second sketch below).
3. Graph Foundation Models [4,5]. This will cover more complex models, such as structure encoding functions, which can learn the best representation of data for LLMs.
4. Theoretical Connections between GNNs and Transformers [6]. This will briefly cover our results on the complexity of graph algorithms in the Transformer architecture and the insights derived from them.
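
To make the encoding question concrete, here is a minimal sketch of one common text encoder: describing a graph as a natural-language adjacency list that can be prepended to a question in an LLM prompt. This style of encoder is among those compared in [1], but the function name and phrasing below are illustrative assumptions, not the paper's API.

```python
def encode_graph_as_text(edges: list[tuple[int, int]]) -> str:
    """Render an undirected graph as an adjacency-list description in plain English."""
    neighbors: dict[int, set[int]] = {}
    for u, v in edges:
        neighbors.setdefault(u, set()).add(v)
        neighbors.setdefault(v, set()).add(u)
    lines = ["G is a graph among nodes " + ", ".join(map(str, sorted(neighbors))) + "."]
    for node in sorted(neighbors):
        nbrs = ", ".join(map(str, sorted(neighbors[node])))
        lines.append(f"Node {node} is connected to nodes {nbrs}.")
    return "\n".join(lines)

# Example: the encoded text can precede a question such as
# "Is there a path from node 0 to node 3?" in the prompt.
prompt = encode_graph_as_text([(0, 1), (1, 2), (2, 0), (2, 3)])
print(prompt)
```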
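For graph structure learning, a simple baseline is to connect each example to its k most similar neighbors in feature space. The sketch below is a generic illustration in that spirit; it is not the Grale [2] or UGSL [3] algorithm itself, and the function name is hypothetical.

```python
import numpy as np

def knn_graph(features: np.ndarray, k: int = 5) -> list[tuple[int, int]]:
    """Return edges (i, j) linking each row to its k nearest rows by cosine similarity."""
    normed = features / np.linalg.norm(features, axis=1, keepdims=True)
    sims = normed @ normed.T
    np.fill_diagonal(sims, -np.inf)  # exclude self-edges
    edges = []
    for i in range(sims.shape[0]):
        for j in np.argsort(sims[i])[-k:]:  # indices of the k most similar rows
            edges.append((i, int(j)))
    return edges

# Example: build a candidate graph over 10 examples with 16-dim features.
edges = knn_graph(np.random.rand(10, 16), k=3)
```

The resulting edges can then be fed to a GNN; richer approaches learn the similarity function itself jointly with the downstream task.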

References:

[1] Talk Like a Graph: Encoding Graphs for Large Language Models https://arxiv.org/pdf/2310.04560.pdf

[2] Grale: Designing Networks for Graph Learning https://arxiv.org/pdf/2007.12002.pdf

[3] UGSL: A Unified Framework for Benchmarking Graph Structure Learning https://arxiv.org/pdf/2308.10737.pdf

[4] Let Your Graph Do the Talking: Encoding Structured Data for LLMs https://arxiv.org/pdf/2402.05862.pdf

[5] Don't Forget to Connect! Improving RAG with Graph-based Reranking https://arxiv.org/abs/2405.18414

[6] Understanding Transformer Reasoning Capabilities via Graph Algorithms https://arxiv.org/abs/2405.18512
