Geometry-Guided Generative Representation for Functional Brain Graphs
Abstract
In network neuroscience, functional brain systems are often characterized by separate yet related graph-theoretic or spectral descriptors, overlooking how these properties covary and partially overlap across individuals and conditions. We hypothesize that dense, weighted functional connectivity graphs lie on a low-dimensional latent geometry along which both topological and spectral structure vary smoothly at the population level. Although graph-based deep learning offers a powerful framework for modeling brain connectomes, supervised approaches are constrained by the limited availability of labeled data. Existing unsupervised graph representation methods, in turn, typically focus on node-level embeddings and struggle to produce compact graph-level representations that preserve the information in dense functional connectomes. To address these gaps, we learn compact brain graph representations with a graph transformer autoencoder, using domain-specific, aligned functional gradient geometry as an inductive bias to guide learning. Despite being trained in a fully unsupervised manner, our approach meaningfully separates cognitive states and enables decoding of visual stimuli, with performance further improved by incorporating neural dynamics. Finally, to enable generation of synthetic brain graphs, we fit a diffusion model to the learned latent representations and decode samples back into dense connectomes.