Invited Talk
The First Workshop on Pre-training: Perspectives, Pitfalls, and Paths Forward
Program Synthesis, Program Semantics, and Large Language Models
Charles Sutton
I will describe our experience with two generations of large language models for code at Google. These models show a range of abilities, including generating small programs from natural language descriptions and engaging in dialog about code, incorporating human feedback to improve solutions. However, in a deeper sense these models do not seem to understand the code that they write: they are generally unable to predict the output of a program given a specific input. I will discuss our subsequent efforts to improve the "code understanding" abilities of LMs by asking them to emit intermediate computation steps as tokens onto a "scratchpad". The same models can perform complex multi-step computations when asked to work "step by step", showing the results of intermediate computations, even for operations that they cannot perform directly.
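To make the scratchpad idea concrete, the following is a minimal sketch of the prompting pattern; the programs, trace format, and question wording are illustrative assumptions, not the exact prompts from the work described in the talk. A few-shot example demonstrates tracing a program's execution step by step before committing to an answer, and the model is then asked to continue in the same format for a new program.

```python
# Illustrative sketch of scratchpad-style prompting (formats are assumptions,
# not the talk's actual prompts). Rather than asking the model to predict a
# program's output in one shot, the prompt demonstrates emitting intermediate
# computation steps as tokens before the final answer.

PROGRAM = """\
def f(x):
    total = 0
    for i in range(x):
        total += i * i
    return total
"""

# Direct prompt: the model must produce the answer in a single step,
# which is exactly the "execution prediction" task models tend to fail.
direct_prompt = PROGRAM + "\nWhat does f(4) return?\nAnswer:"

# Scratchpad prompt: a worked example traces the loop iteration by
# iteration, writing each intermediate value of `total` onto the
# scratchpad before the final answer; the model is asked to continue
# the same pattern for a variant program (squares replaced by cubes).
scratchpad_prompt = PROGRAM + """
Trace the execution of f(4) step by step, then give the answer.
Scratchpad:
i=0: total = 0 + 0*0 = 0
i=1: total = 0 + 1*1 = 1
i=2: total = 1 + 2*2 = 5
i=3: total = 5 + 3*3 = 14
Answer: 14

""" + PROGRAM.replace("i * i", "i * i * i") + """
Trace the execution of f(4) step by step, then give the answer.
Scratchpad:"""

# A model that has learned the pattern is expected to continue with:
#   i=0: total = 0 + 0*0*0 = 0
#   i=1: total = 0 + 1*1*1 = 1
#   i=2: total = 1 + 2*2*2 = 9
#   i=3: total = 9 + 3*3*3 = 36
#   Answer: 36
```

The design point is that the intermediate lines are ordinary tokens the model generates, so each step conditions on the results of the previous ones; this is what lets the same model succeed "step by step" on computations it cannot perform directly.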