

Poster

Do Not Train It: A Linear Neural Architecture Search of Graph Neural Networks

Peng XU · Lin Zhang · Xuanzhou Liu · Jiaqi Sun · Yue Zhao · Haiqin Yang · Bei Yu

Exhibit Hall 1 #507

Abstract: Neural architecture search (NAS) for graph neural networks (GNNs), known as NAS-GNN, has achieved significant improvements over manually designed GNN architectures. However, these methods inherit issues from conventional NAS, such as high computational cost and optimization difficulty. More importantly, previous NAS methods overlook a property unique to GNNs: a GNN possesses expressive power even without training. Starting from randomly initialized weights, we can therefore seek the optimal architecture parameters via a sparse coding objective and derive a novel NAS-GNN method, namely neural architecture coding (NAC). Consequently, NAC requires no weight updates on the GNN and runs in linear time. Empirical evaluations on multiple GNN benchmark datasets demonstrate that our approach achieves state-of-the-art performance, running up to $200\times$ faster and being up to $18.8\%$ more accurate than strong baselines.
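The abstract's core idea can be sketched as follows: candidate GNN operators keep frozen, randomly initialized weights, and only the architecture-mixing coefficients are fit with an L1-regularized (sparse-coding-style) objective. This is a minimal NumPy illustration under assumptions of our own; the operator set, the reconstruction target, and the ISTA solver below are hypothetical stand-ins, not the paper's exact NAC formulation.

```python
import numpy as np

rng = np.random.default_rng(0)

n, d = 20, 8                      # nodes, feature dimension
X = rng.normal(size=(n, d))       # node features
A = (rng.random((n, n)) < 0.2).astype(float)
A = np.maximum(A, A.T)            # symmetric adjacency
A_hat = A + np.eye(n)             # add self-loops
deg = A_hat.sum(1)
A_norm = A_hat / np.sqrt(np.outer(deg, deg))  # symmetric normalization

# Candidate operators with frozen random weights (no training step).
W = [rng.normal(size=(d, d)) / np.sqrt(d) for _ in range(3)]
ops = [
    X @ W[0],                     # linear transform, no propagation
    A_norm @ X @ W[1],            # 1-hop GCN-style propagation
    A_norm @ A_norm @ X @ W[2],   # 2-hop propagation
]

# Dictionary of flattened operator outputs; target is the input signal
# (an illustrative choice of reconstruction target).
D = np.stack([op.ravel() for op in ops], axis=1)   # shape (n*d, 3)
y = X.ravel()

# ISTA for: min_alpha 0.5*||D @ alpha - y||^2 + lam * ||alpha||_1
alpha = np.zeros(D.shape[1])
lam = 0.1
step = 1.0 / np.linalg.norm(D, 2) ** 2             # 1 / Lipschitz constant
for _ in range(200):
    grad = D.T @ (D @ alpha - y)
    z = alpha - step * grad
    alpha = np.sign(z) * np.maximum(np.abs(z) - step * lam, 0.0)

best = int(np.argmax(np.abs(alpha)))  # dominant operator = selected design
```

The sparse coefficients `alpha` play the role of architecture parameters: each ISTA step is a matrix-vector product, so the cost is linear in the number of candidate-operator outputs, and the GNN weights are never updated.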
