Graph Foundation Models: A New Era for Graph Machine Learning
Abstract
Graph-structured data are ubiquitous across science and industry, yet today’s graph machine learning (GML) pipelines remain largely task- and dataset-specific, limiting robustness and transferability. This workshop brings together researchers and practitioners to advance graph foundation models (GFMs): models that are pretrained once and adapt broadly across heterogeneous, temporal, and multimodal graphs. We will catalyze exchange on core questions spanning architectural choices (GNNs, Transformers, and LLM-integrated pipelines), graph tokenization and structural encodings, pretraining objectives and scaling laws, and principled evaluation of cross-graph transfer. The scope covers diverse domains, including knowledge graphs, molecular and biological networks, relational databases, recommender systems, and social networks, emphasizing both methodological rigor and real-world impact. Through invited keynotes, contributed talks, posters, and panel discussions, the workshop aims to (i) consolidate design principles for GFMs, (ii) establish shared datasets, metrics, and reproducible evaluation protocols, and (iii) chart a community roadmap for scalable, transferable, and trustworthy graph learning.