Poster Teaser in Workshop: Graph Representation Learning and Beyond (GRL+)

(#27 / Sess. 2) Learning Graph Structure with A Finite-State Automaton Layer

Daniel D. Johnson


Abstract:

Graph-based neural network models are producing strong results in a number of domains, in part because graphs provide flexibility to encode domain knowledge in the form of relational structure (edges). In practice, edges may represent intrinsic structure (e.g., abstract syntax trees of programs) or more abstract relations that aid reasoning (e.g., results of program analyses). In this work, we consider learning to derive abstract relations from intrinsic structure. Motivated by their power in program analyses, we consider relations defined by paths accepted by a finite-state automaton. We show how to learn these relations by transforming the problem into learning a policy on a graph-based POMDP with implicit differentiation. The result is a differentiable Graph Finite-State Automaton (GFSA) layer that adds a new edge type (expressed as a weighted adjacency matrix) to a base graph. We demonstrate that this layer performs well when either directly supervised or trained end-to-end for a downstream task.
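To make the architectural idea concrete, the following is a minimal illustrative sketch, not the paper's implementation: a learned finite-state automaton walks over the base graph, and the probability that a walk started at node i halts at node j becomes the weight of a new edge i → j. For simplicity it unrolls the walk for a fixed number of steps rather than solving the absorbing-walk fixed point with implicit differentiation as described in the abstract, and it ignores per-node observations and edge types; all function names, shapes, and parameters below are assumptions.

```python
# Illustrative sketch of a GFSA-style layer (assumed names and shapes, not the
# authors' code). The automaton's learned parameters are differentiable, so the
# output adjacency matrix can be trained end-to-end with a downstream task.
import jax
import jax.numpy as jnp


def gfsa_edge_weights(base_adjacency, move_logits, halt_logits, num_steps=16):
    """Simulate the automaton for a fixed number of steps.

    base_adjacency: [num_nodes, num_nodes] row-normalized adjacency of the base graph.
    move_logits:    [num_states, num_states] learned automaton state-transition logits.
    halt_logits:    [num_states] learned per-state halting logits.
    Returns a [num_nodes, num_nodes] weighted adjacency matrix for the new edge type.
    """
    num_nodes = base_adjacency.shape[0]
    num_states = move_logits.shape[0]

    # Probability of halting vs. continuing in each automaton state.
    p_halt = jax.nn.sigmoid(halt_logits)                 # [num_states]
    state_trans = jax.nn.softmax(move_logits, axis=-1)   # [num_states, num_states]

    # One walk per source node, starting at that node in automaton state 0.
    # occupancy[q, i, j] = prob. the walk from source i is at node j in state q.
    occupancy = jnp.zeros((num_states, num_nodes, num_nodes))
    occupancy = occupancy.at[0].set(jnp.eye(num_nodes))

    new_edges = jnp.zeros((num_nodes, num_nodes))
    for _ in range(num_steps):
        # Mass that halts now contributes to the output edge weights.
        halted = jnp.einsum("q,qij->ij", p_halt, occupancy)
        new_edges = new_edges + halted
        # Remaining mass changes automaton state, then moves along a base edge.
        cont = jnp.einsum("q,qij->qij", 1.0 - p_halt, occupancy)
        cont = jnp.einsum("qr,qij->rij", state_trans, cont)
        occupancy = jnp.einsum("rij,jk->rik", cont, base_adjacency)

    # Mass that never halts within num_steps is simply dropped (truncation).
    return new_edges
```

The returned matrix could then be supplied to any downstream graph neural network as one additional edge type alongside the base graph's intrinsic edges, matching the role the abstract describes for the GFSA layer.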
