The problem of automatically generating a computer program from some specification has been studied since the early days of AI. Recently, two competing approaches for 'automatic program learning' have received significant attention: (1) 'neural program synthesis', where a neural network is conditioned on input/output (I/O) examples and learns to generate a program, and (2) 'neural program induction', where a neural network generates new outputs directly using a latent program representation. Here, for the first time, we directly compare both approaches on a large-scale, real-world learning task, and we additionally contrast them with rule-based program synthesis, which uses hand-crafted semantics to guide program generation. Our neural models use a modified attention RNN to encode variable-sized sets of I/O pairs; they achieve 92% accuracy on a real-world test set, compared to the 34% accuracy of the previous best neural synthesis approach. The synthesis model also outperforms a comparable induction model on this task but, more importantly, we demonstrate that the strength of each approach depends heavily on the evaluation metric and end-user application. Finally, we show that we can train our neural models to remain robust to the type of noise expected in real-world data (e.g., typos), whereas a highly-engineered rule-based system fails entirely.
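To make the "encoding variable-sized sets of I/O pairs" idea concrete, here is a minimal sketch, not the paper's implementation: each I/O example is encoded independently by an LSTM, a decoder step attends over every example's encoding, and the per-example context vectors are max-pooled so the result is invariant to the number of examples. All module names, shapes, and hyperparameters (e.g., hidden size 128) are illustrative assumptions.

```python
import torch
import torch.nn as nn

class IOSetEncoder(nn.Module):
    """Encodes each serialized I/O example independently with an LSTM."""
    def __init__(self, vocab_size, hidden=128):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, hidden)
        self.rnn = nn.LSTM(hidden, hidden, batch_first=True)

    def forward(self, io_tokens):
        # io_tokens: (num_examples, seq_len) token ids for one task's I/O pairs
        emb = self.embed(io_tokens)
        states, _ = self.rnn(emb)  # (num_examples, seq_len, hidden)
        return states

class PooledAttentionDecoderStep(nn.Module):
    """One decoder step: attend over each example, then pool across the set."""
    def __init__(self, hidden=128):
        super().__init__()
        self.attn = nn.Linear(hidden, hidden)

    def forward(self, dec_state, enc_states):
        # dec_state: (hidden,); enc_states: (num_examples, seq_len, hidden)
        q = self.attn(dec_state)                          # (hidden,)
        scores = torch.einsum("h,esh->es", q, enc_states)
        weights = torch.softmax(scores, dim=-1)           # attention per example
        ctx = torch.einsum("es,esh->eh", weights, enc_states)
        # Max-pool across the example set -> a context vector whose size does
        # not depend on how many I/O pairs were given.
        return ctx.max(dim=0).values                      # (hidden,)

# Example: a task specified by 3 I/O pairs, each serialized to 20 tokens.
enc = IOSetEncoder(vocab_size=100)
step = PooledAttentionDecoderStep()
states = enc(torch.randint(0, 100, (3, 20)))
ctx = step(torch.zeros(128), states)  # (128,) context for the program decoder
```

The pooling step is the key design choice for set-valued conditioning: because max-pooling is permutation-invariant, the same model can be applied whether a task comes with two I/O examples or ten.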
Author Information
Jacob Devlin (Microsoft Research)
Jonathan Uesato (MIT)
Surya Bhupatiraju (MIT)
Rishabh Singh (Microsoft Research)
Abdelrahman Mohammad (Microsoft)
Pushmeet Kohli (Microsoft Research)
Related Events (a corresponding poster, oral, or spotlight)
- 2017 Poster: RobustFill: Neural Program Learning under Noisy I/O
  Tue. Aug 8th, 08:30 AM -- 12:00 PM, Room Gallery #118
More from the Same Authors
- 2017 Poster: Learning Continuous Semantic Representations of Symbolic Expressions
  Miltiadis Allamanis · Pankajan Chanthirasegaran · Pushmeet Kohli · Charles Sutton
- 2017 Poster: Stabilising Experience Replay for Deep Multi-Agent Reinforcement Learning
  Jakob Foerster · Nantas Nardelli · Gregory Farquhar · Triantafyllos Afouras · Phil Torr · Pushmeet Kohli · Shimon Whiteson
- 2017 Poster: Zero-Shot Task Generalization with Multi-Task Deep Reinforcement Learning
  Junhyuk Oh · Satinder Singh · Honglak Lee · Pushmeet Kohli
- 2017 Talk: Learning Continuous Semantic Representations of Symbolic Expressions
  Miltiadis Allamanis · Pankajan Chanthirasegaran · Pushmeet Kohli · Charles Sutton
- 2017 Talk: Stabilising Experience Replay for Deep Multi-Agent Reinforcement Learning
  Jakob Foerster · Nantas Nardelli · Gregory Farquhar · Triantafyllos Afouras · Phil Torr · Pushmeet Kohli · Shimon Whiteson
- 2017 Talk: Zero-Shot Task Generalization with Multi-Task Deep Reinforcement Learning
  Junhyuk Oh · Satinder Singh · Honglak Lee · Pushmeet Kohli
- 2017 Poster: Sequence Modeling via Segmentations
  Chong Wang · Yining Wang · Po-Sen Huang · Abdelrahman Mohammad · Dengyong Zhou · Li Deng
- 2017 Talk: Sequence Modeling via Segmentations
  Chong Wang · Yining Wang · Po-Sen Huang · Abdelrahman Mohammad · Dengyong Zhou · Li Deng
- 2017 Poster: Batched High-dimensional Bayesian Optimization via Structural Kernel Learning
  Zi Wang · Chengtao Li · Stefanie Jegelka · Pushmeet Kohli
- 2017 Talk: Batched High-dimensional Bayesian Optimization via Structural Kernel Learning
  Zi Wang · Chengtao Li · Stefanie Jegelka · Pushmeet Kohli