Combining abstract, symbolic reasoning with continuous neural reasoning is a grand challenge of representation learning. As a step in this direction, we propose a new architecture, called neural equivalence networks, for the problem of learning continuous semantic representations of algebraic and logical expressions. These networks are trained to represent semantic equivalence, even of expressions that are syntactically very different. The challenge is that semantic representations must be computed in a syntax-directed manner, because semantics is compositional, but at the same time, small changes in syntax can lead to very large changes in semantics, which can be difficult for continuous neural architectures. We perform an exhaustive evaluation on the task of checking equivalence on a highly diverse class of symbolic algebraic and boolean expression types, showing that our model significantly outperforms existing architectures.
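To make the syntax-directed idea concrete, the following is a minimal illustrative sketch in Python/PyTorch, not the paper's actual EQNET implementation: an encoder recurses over an expression tree, combining child representations bottom-up, and a simple similarity loss pulls the embeddings of two equivalent expressions together while pushing a non-equivalent one away. The names (Node, TreeEncoder) and the specific combiner and loss are assumptions for illustration only.

# Minimal sketch (not the paper's EQNET): a syntax-directed encoder that embeds
# an expression tree bottom-up, so semantically equivalent expressions can be
# pulled toward the same vector with a similarity loss.
import torch
import torch.nn as nn
import torch.nn.functional as F

class Node:
    """Expression tree node: an operator with children, or a leaf symbol."""
    def __init__(self, symbol, children=()):
        self.symbol = symbol
        self.children = list(children)

class TreeEncoder(nn.Module):
    def __init__(self, symbols, dim=64):
        super().__init__()
        self.index = {s: i for i, s in enumerate(symbols)}
        self.embed = nn.Embedding(len(symbols), dim)           # leaf/operator embeddings
        # A single MLP over [operator embedding ; sum of child codes] keeps the
        # sketch short; per-arity combiners would be another reasonable choice.
        self.combine = nn.Sequential(nn.Linear(2 * dim, dim), nn.Tanh())

    def forward(self, node):
        sym = self.embed(torch.tensor(self.index[node.symbol]))
        if not node.children:
            code = sym
        else:
            child_sum = torch.stack([self.forward(c) for c in node.children]).sum(0)
            code = self.combine(torch.cat([sym, child_sum]))
        return F.normalize(code, dim=-1)                       # unit-length semantic code

# Usage: push (a AND b) and (b AND a) together, keep (a OR b) apart.
if __name__ == "__main__":
    enc = TreeEncoder(["and", "or", "a", "b"])
    a, b = Node("a"), Node("b")
    pos1, pos2, neg = Node("and", [a, b]), Node("and", [b, a]), Node("or", [a, b])
    opt = torch.optim.Adam(enc.parameters(), lr=1e-2)
    for _ in range(100):
        opt.zero_grad()
        e1, e2, e3 = enc(pos1), enc(pos2), enc(neg)
        # Cosine-style loss: equivalent pair similar, non-equivalent pair below a margin.
        loss = (1 - torch.dot(e1, e2)) + torch.relu(torch.dot(e1, e3) - 0.5)
        loss.backward()
        opt.step()
    print(float(torch.dot(enc(pos1), enc(pos2))), float(torch.dot(enc(pos1), enc(neg))))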
Author Information
Miltiadis Allamanis (Microsoft Research)
Pankajan Chanthirasegaran
Pushmeet Kohli (Microsoft Research)
Charles Sutton (University of Edinburgh)
Related Events (a corresponding poster, oral, or spotlight)
- 2017 Poster: Learning Continuous Semantic Representations of Symbolic Expressions »
  Wed. Aug 9th 08:30 AM -- 12:00 PM, Room Gallery #97
More from the Same Authors
- 2022 : Program Synthesis, Program Semantics, and Large Language Models »
  Charles Sutton
- 2022 : Session 3: New Computational Technologies for Reasoning »
  Armando Solar-Lezama · Guy Van den Broeck · Jan-Willem van de Meent · Charles Sutton
- 2019 : Panel Discussion »
  Wenpeng Zhang · Charles Sutton · Liam Li · Rachel Thomas · Erin LeDell
- 2019 : Keynote by Charles Sutton: Towards Semi-Automated Machine Learning »
  Charles Sutton
- 2017 Poster: Stabilising Experience Replay for Deep Multi-Agent Reinforcement Learning »
  Jakob Foerster · Nantas Nardelli · Gregory Farquhar · Triantafyllos Afouras · Phil Torr · Pushmeet Kohli · Shimon Whiteson
- 2017 Poster: Zero-Shot Task Generalization with Multi-Task Deep Reinforcement Learning »
  Junhyuk Oh · Satinder Singh · Honglak Lee · Pushmeet Kohli
- 2017 Talk: Stabilising Experience Replay for Deep Multi-Agent Reinforcement Learning »
  Jakob Foerster · Nantas Nardelli · Gregory Farquhar · Triantafyllos Afouras · Phil Torr · Pushmeet Kohli · Shimon Whiteson
- 2017 Talk: Zero-Shot Task Generalization with Multi-Task Deep Reinforcement Learning »
  Junhyuk Oh · Satinder Singh · Honglak Lee · Pushmeet Kohli
- 2017 Poster: RobustFill: Neural Program Learning under Noisy I/O »
  Jacob Devlin · Jonathan Uesato · Surya Bhupatiraju · Rishabh Singh · Abdel-rahman Mohamed · Pushmeet Kohli
- 2017 Talk: RobustFill: Neural Program Learning under Noisy I/O »
  Jacob Devlin · Jonathan Uesato · Surya Bhupatiraju · Rishabh Singh · Abdel-rahman Mohamed · Pushmeet Kohli
- 2017 Poster: Batched High-dimensional Bayesian Optimization via Structural Kernel Learning »
  Zi Wang · Chengtao Li · Stefanie Jegelka · Pushmeet Kohli
- 2017 Talk: Batched High-dimensional Bayesian Optimization via Structural Kernel Learning »
  Zi Wang · Chengtao Li · Stefanie Jegelka · Pushmeet Kohli