Poster in Workshop: Geometry-grounded Representation Learning and Generative Modeling
Revisiting Random Walks for Learning on Graphs
Jinwoo Kim · Olga Zaghen · Ayhan Suleymanzade · Youngmin Ryou · Seunghoon Hong
Keywords: [ random walk ] [ language models ] [ invariance ] [ graph machine learning ] [ universal approximation ]
We revisit a simple idea for machine learning on graphs, where a random walk on a graph produces a machine-readable record, and this record is processed by a deep neural network to directly make vertex-level or graph-level predictions. We call these stochastic machines random walk neural networks, and show that we can design them to be isomorphism invariant while being capable of universal approximation of graph functions in probability. Notably, almost any record of a random walk guarantees probabilistic invariance as long as vertices are anonymized. This enables us to record random walks in plain text and adopt a language model to read these text records to solve graph tasks. We demonstrate random walk neural networks based on pre-trained language models on several hard problems on graphs: separating strongly regular graphs where 3-WL fails, counting substructures, and transductive classification on the arXiv citation network without training. Code is available at https://github.com/jw9730/random-walk.
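To make the anonymization idea concrete, here is a minimal sketch (not the authors' implementation; see the linked repository for that). It simulates a simple random walk over an adjacency list and records it as plain text, renaming each vertex by the order of its first appearance in the walk so the record is independent of the original vertex labels. The dash-separated record format and the function name are illustrative assumptions.

```python
import random

def anonymized_walk_record(adj, start, length, rng=random):
    """Simulate a simple random walk and record it as plain text.

    Vertices are anonymized: each is renamed by the order of its first
    appearance in the walk (1, 2, 3, ...), so the record reflects only
    the walk's pattern, not the original vertex labels. Per the abstract,
    anonymization is what guarantees probabilistic invariance.

    `adj` is an adjacency list {vertex: [neighbors]}.
    """
    walk = [start]
    for _ in range(length - 1):
        walk.append(rng.choice(adj[walk[-1]]))  # uniform step to a neighbor
    names = {}  # original vertex -> anonymized id
    tokens = []
    for v in walk:
        if v not in names:
            names[v] = len(names) + 1  # name by first-visit order
        tokens.append(str(names[v]))
    return "-".join(tokens)

# Example: a walk on a 4-cycle. The resulting text record could then be
# placed in a prompt for a language model to make a graph-level prediction.
adj = {0: [1, 3], 1: [0, 2], 2: [1, 3], 3: [2, 0]}
print(anonymized_walk_record(adj, start=0, length=8))
```

Because any two isomorphic graphs induce the same distribution over such anonymized records, a network that reads these records inherits invariance in probability.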