Neural networks are powerful function estimators, leading to their status as a paradigm of choice for modeling structured data. However, unlike other structured representations that emphasize the modularity of the problem – e.g., factor graphs – neural networks are usually monolithic mappings from inputs to outputs, with a fixed computation order. This limitation prevents them from capturing different directions of computation and interaction between the modeled variables. In this paper, we combine the representational strengths of factor graphs and of neural networks, proposing undirected neural networks (UNNs): a flexible framework for specifying computations that can be performed in any order. For particular choices, our proposed models subsume and extend many existing architectures: feed-forward, recurrent, self-attention networks, auto-encoders, and networks with implicit layers. We demonstrate the effectiveness of undirected neural architectures, both unstructured and structured, on a range of tasks: tree-constrained dependency parsing, convolutional image classification, and sequence completion with attention. By varying the computation order, we show how a single UNN can be used both as a classifier and a prototype generator, and how it can fill in missing parts of an input sequence, making UNNs a promising direction for further research.
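The abstract's central idea – one model whose computations can run in any order – can be illustrated with a minimal sketch. The snippet below assumes a hypothetical two-factor energy over an input x, a hidden state h, and an output y, minimized by coordinate-wise updates; the energy form, the ReLU updates, and all names are illustrative assumptions, not the paper's exact formulation. Running the updates "forward" (x fixed, solve for h then y) mimics a feed-forward classifier; clamping y and solving for h then x instead generates an input prototype from the same weights.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical two-factor UNN over x (input), h (hidden), y (output).
# Illustrative energy (not the paper's exact form):
#   E(x, h, y) = -x @ W1 @ h + Psi(h) - h @ W2 @ y + Psi(y)
# With a suitable convex regularizer Psi, minimizing E in one variable
# given the others reduces to a ReLU of the incoming "messages".
d_x, d_h, d_y = 4, 8, 3
W1 = rng.normal(scale=0.1, size=(d_x, d_h))
W2 = rng.normal(scale=0.1, size=(d_h, d_y))

def relu(z):
    return np.maximum(z, 0.0)

def update_h(x, y):
    # h receives messages from both factors it touches
    return relu(x @ W1 + y @ W2.T)

def update_y(h):
    return relu(h @ W2)

def update_x(h):
    return relu(h @ W1.T)

# "Forward" order: clamp x, sweep h and y (feed-forward-like classifier).
x = rng.normal(size=d_x)
h, y = np.zeros(d_h), np.zeros(d_y)
for _ in range(3):  # a few coordinate-descent sweeps
    h = update_h(x, y)
    y = update_y(h)

# "Backward" order: clamp y to a class, infer h and x (prototype generation).
y_clamped = np.eye(d_y)[0]
x_gen, h_gen = np.zeros(d_x), np.zeros(d_h)
for _ in range(3):
    h_gen = update_h(x_gen, y_clamped)
    x_gen = update_x(h_gen)
```

The same two weight matrices serve both computation orders; only the schedule of coordinate updates changes, which is the flexibility the framework advertises.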
Author Information
Tsvetomila Mihaylova (Instituto de Telecomunicações, Lisbon, Portugal)
Vlad Niculae (University of Amsterdam)
André Filipe Torres Martins (Instituto de Telecomunicações)
Related Events (a corresponding poster, oral, or spotlight)
- 2022 Poster: Modeling Structure with Undirected Neural Networks
  Wed, Jul 20 through Thu, Jul 21, Room Hall E #412
More from the Same Authors
- 2021 Poster: Learning Binary Decision Trees by Argmin Differentiation
  Valentina Zantedeschi · Matt J. Kusner · Vlad Niculae
- 2021 Spotlight: Learning Binary Decision Trees by Argmin Differentiation
  Valentina Zantedeschi · Matt J. Kusner · Vlad Niculae
- 2020 Poster: LP-SparseMAP: Differentiable Relaxed Optimization for Sparse Structured Prediction
  Vlad Niculae · André Filipe Torres Martins
- 2018 Poster: SparseMAP: Differentiable Sparse Structured Inference
  Vlad Niculae · André Filipe Torres Martins · Mathieu Blondel · Claire Cardie
- 2018 Oral: SparseMAP: Differentiable Sparse Structured Inference
  Vlad Niculae · André Filipe Torres Martins · Mathieu Blondel · Claire Cardie