Graph transformers have emerged as a promising architecture for a variety of graph learning and representation tasks. Despite their successes, though, it remains challenging to scale graph transformers to large graphs while maintaining accuracy competitive with message-passing networks. In this paper, we introduce Exphormer, a framework for building powerful and scalable graph transformers. Exphormer consists of a sparse attention mechanism based on two components: virtual global nodes and expander graphs, whose mathematical characteristics, such as spectral expansion, pseudorandomness, and sparsity, yield graph transformers with complexity only linear in the size of the graph, while allowing us to prove desirable theoretical properties of the resulting transformer models. We show that incorporating Exphormer into the recently-proposed GraphGPS framework produces models with competitive empirical results on a wide variety of graph datasets, including state-of-the-art results on three datasets. We also show that Exphormer can scale to datasets on larger graphs than shown in previous graph transformer architectures.
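To make the linear-complexity claim concrete, here is a minimal sketch of how a sparse attention pattern of the kind the abstract describes could be assembled: the union of expander-style edges and edges to a virtual global node. The function name, the use of superimposed random matchings as a stand-in for an expander, and all parameters are illustrative assumptions, not the paper's actual implementation.

```python
import random

def sparse_attention_edges(n, degree=3, seed=0):
    """Illustrative sketch (not the paper's code): build a sparse
    attention pattern as the union of
      (i) an expander-like graph on the n real nodes, approximated
          here by superimposing `degree` random perfect matchings
          (n is assumed even), and
      (ii) edges connecting every real node to one virtual global
          node, given index n.
    Returns a set of undirected edges (i, j) with i < j."""
    rng = random.Random(seed)
    edges = set()
    # (i) Expander-like component: each random matching contributes
    # n/2 edges, so this part has at most degree * n / 2 edges.
    for _ in range(degree):
        perm = list(range(n))
        rng.shuffle(perm)
        for a, b in zip(perm[::2], perm[1::2]):
            edges.add((min(a, b), max(a, b)))
    # (ii) Virtual global node: attends to and from every real node.
    for i in range(n):
        edges.add((i, n))
    return edges

edges = sparse_attention_edges(100)
# The number of attention pairs grows as O(degree * n) + n,
# i.e. linearly in n, versus O(n^2) for dense attention.
```

Running attention only over such an edge set keeps per-layer cost linear in the number of nodes, while the global node preserves a two-hop path between any pair of nodes.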
Author Information
Hamed Shirzad (University of British Columbia)
Ameya Velingker (Google Research)
Balaji Venkatachalam (Google)
Danica J Sutherland (University of British Columbia)
Ali K Sinop (Google Research)
More from the Same Authors
- 2023: Scaling Graphically Structured Diffusion Models
  Christian Weilbach · William Harvey · Hamed Shirzad · Frank Wood
- 2023: Efficient Location Sampling Algorithms for Road Networks
  Sara Ahmadian · Kostas Kollias · Ameya Velingker · Sreenivas Gollapudi · Vivek Kumar · Santhoshini Velusamy
- 2023 Poster: A Fast, Well-Founded Approximation to the Empirical Neural Tangent Kernel
  Mohamad Amin Mohamadi · Won Bae · Danica J Sutherland
- 2023 Poster: Fast $(1+\varepsilon)$-Approximation Algorithms for Binary Matrix Factorization
  Ameya Velingker · Maximilian Vötsch · David Woodruff · Samson Zhou
- 2019 Poster: Learning deep kernels for exponential family densities
  Li Kevin Wenliang · D.J. Sutherland · Heiko Strathmann · Arthur Gretton
- 2019 Oral: Learning deep kernels for exponential family densities
  Li Kevin Wenliang · D.J. Sutherland · Heiko Strathmann · Arthur Gretton