I will describe our experience with two generations of large language models for code at Google. These models show a range of abilities, including generating small programs from natural-language descriptions and engaging in dialog about code, incorporating human feedback to improve their solutions. In a deeper sense, however, these models seem not to understand the code that they write: they are generally unable to predict the output of a program on a specific input. I will discuss our subsequent efforts to improve the "code understanding" abilities of LMs by asking them to emit intermediate computation steps as tokens onto a "scratchpad". When asked to work "step by step" in this way, showing the results of intermediate computations, the same models can perform complex multi-step computations, even ones they could not perform directly.
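As a rough, hypothetical illustration of the scratchpad idea (not material from the talk), the sketch below builds the kind of step-by-step execution trace a model would be prompted to emit before stating a program's output. The function name, the traced program, and the trace format are assumptions made for this example only.

# Minimal sketch of the scratchpad idea, assuming a hypothetical trace format:
# rather than asking the model for a program's output in one shot, the prompt
# asks it to emit the intermediate state after each step as ordinary text
# tokens, and only then the final answer.

def scratchpad_trace(n: int) -> str:
    """Trace `total = 0; for i in range(n): total += i * i`
    in the step-by-step style a scratchpad prompt would elicit."""
    lines = ["scratchpad:"]
    total = 0
    for i in range(n):
        total += i * i
        lines.append(f"  i={i}: total = {total}")
    lines.append(f"output: {total}")
    return "\n".join(lines)

if __name__ == "__main__":
    # The final "output:" line is the answer the model would otherwise have to
    # produce directly, without the benefit of the intermediate steps.
    print(scratchpad_trace(4))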
Author Information
Charles Sutton (Google)
More from the Same Authors
- 2022 : Session 3: New Computational Technologies for Reasoning »
  Armando Solar-Lezama · Guy Van den Broeck · Jan-Willem van de Meent · Charles Sutton
- 2019 : Panel Discussion »
  Wenpeng Zhang · Charles Sutton · Liam Li · Rachel Thomas · Erin LeDell
- 2019 : Keynote by Charles Sutton: Towards Semi-Automated Machine Learning »
  Charles Sutton
- 2017 Poster: Learning Continuous Semantic Representations of Symbolic Expressions »
  Miltiadis Allamanis · Pankajan Chanthirasegaran · Pushmeet Kohli · Charles Sutton
- 2017 Talk: Learning Continuous Semantic Representations of Symbolic Expressions »
  Miltiadis Allamanis · Pankajan Chanthirasegaran · Pushmeet Kohli · Charles Sutton