Poster in Workshop: Structured Probabilistic Inference and Generative Modeling

Pretrained Language Models to Solve Graph Tasks in Natural Language

Frederik Wenkel · Guy Wolf · Boris Knyazev

Keywords: [ Graph Neural Networks ] [ Large Language Models ]


Abstract:

Pretrained large language models (LLMs) are powerful learners on a variety of language tasks. We investigate whether LLMs can learn from graph-structured data when the graphs are described in natural language. We explore data augmentation and pretraining specific to the graph domain and show that LLMs such as GPT-2 and GPT-3 are promising alternatives to graph neural networks.
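The abstract itself contains no code, so the snippet below is only a minimal sketch of the general idea it describes: verbalizing a graph's edge list as natural language and asking a pretrained LLM a question about it. The prompt template, the toy graph, the degree-counting task, and the function name describe_graph are all hypothetical illustrations, not the authors' actual setup, and the sketch omits the graph-specific data augmentation and pretraining the abstract refers to (a vanilla GPT-2 will likely not answer such questions correctly out of the box).

```python
# Hypothetical sketch: verbalize a small graph as natural language and
# query a pretrained GPT-2 via the Hugging Face transformers pipeline.
# The prompt wording and the example task are illustrative only.
from transformers import pipeline

def describe_graph(edges):
    """Turn an edge list into a plain-English description of the graph."""
    return " ".join(f"Node {u} is connected to node {v}." for u, v in edges)

edges = [(0, 1), (1, 2), (2, 0), (2, 3)]
prompt = describe_graph(edges) + " Question: What is the degree of node 2? Answer:"

# Greedy decoding with a small number of new tokens, to keep the answer short.
generator = pipeline("text-generation", model="gpt2")
output = generator(prompt, max_new_tokens=8, do_sample=False)
print(output[0]["generated_text"])
```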
