

Poster in Workshop: Knowledge and Logical Reasoning in the Era of Data-driven Learning

Latent Space Representations of Neural Algorithmic Reasoners

Vladimir V. Mirjanić · Razvan Pascanu · Petar Veličković


Abstract:

Neural Algorithmic Reasoning (NAR) is a research area focused on designing neural architectures that can reliably capture classical computation, largely by learning to execute algorithms. A typical approach is to rely on Graph Neural Network (GNN) architectures, which encode inputs in high-dimensional latent spaces that are repeatedly transformed over the execution of the algorithm. In this work, we perform a detailed analysis to understand the structure of the latent space induced by the GNN when executing algorithms. We identify two possible failure modes: (i) loss of resolution, making it hard to distinguish similar values; and (ii) inability to deal with values outside the range observed during training. We propose to address the first issue with a softmax aggregator, and the second by decaying the latent space to handle out-of-range values. We show that these changes lead to improvements on the majority of algorithms from CLRS-30 when using the state-of-the-art Triplet-GMPNN processor.
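Below is a minimal sketch of the two ideas mentioned in the abstract, written in plain NumPy. The function names (`softmax_aggregate`, `decay_latents`), the temperature, and the decay factor are illustrative assumptions, not the authors' implementation or the CLRS-30 codebase.

```python
import numpy as np

def softmax_aggregate(messages: np.ndarray, temperature: float = 0.1) -> np.ndarray:
    """Aggregate neighbour messages with a softmax-weighted sum.

    Unlike a hard max, every message contributes a little, which helps keep
    nearby values distinguishable in the latent space (the "loss of
    resolution" failure mode). `messages` has shape (num_neighbours, hidden_dim).
    """
    weights = np.exp(messages / temperature)
    weights /= weights.sum(axis=0, keepdims=True)   # softmax over neighbours, per feature
    return (weights * messages).sum(axis=0)

def decay_latents(latents: np.ndarray, decay: float = 0.9) -> np.ndarray:
    """Shrink latent states toward the origin between processor steps.

    The intent is to keep activations inside the region seen during training,
    mitigating the out-of-range failure mode.
    """
    return decay * latents

# Toy usage: three neighbour messages with a 4-dimensional hidden state.
msgs = np.array([[0.1, 1.0, -0.3, 0.5],
                 [0.2, 0.9, -0.2, 0.4],
                 [0.0, 1.1, -0.4, 0.6]])
h = softmax_aggregate(msgs)   # smooth alternative to msgs.max(axis=0)
h = decay_latents(h)          # applied between message-passing steps
```

The temperature controls how closely the softmax aggregation approximates a hard max: lower values sharpen it toward max aggregation, while higher values average more evenly across neighbours.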
