

Poster

Memory-Based Model Editing at Scale

Eric Mitchell · Charles Lin · Antoine Bosselut · Christopher Manning · Chelsea Finn

Hall E #237

Keywords: [ MISC: Online Learning, Active Learning and Bandits ] [ DL: Robustness ] [ MISC: Transfer, Multitask and Meta-learning ] [ APP: Language, Speech and Dialog ] [ DL: Algorithms ] [ DL: Everything Else ]


Abstract:

Even the largest neural networks make errors, and once-correct predictions can become invalid as the world changes. Model editors make local updates to the behavior of base (pre-trained) models to inject updated knowledge or correct undesirable behaviors. Existing model editors have shown promise, but also suffer from insufficient expressiveness: they struggle to accurately model an edit's intended scope (examples affected by the edit), leading to inaccurate predictions for test inputs loosely related to the edit, and they often fail altogether after many edits. As a higher-capacity alternative, we propose Semi-Parametric Editing with a Retrieval-Augmented Counterfactual Model (SERAC), which stores edits in an explicit memory and learns to reason over them to modulate the base model's predictions as needed. To enable more rigorous evaluation of model editors, we introduce three challenging language model editing problems based on question answering, fact-checking, and dialogue generation. We find that only SERAC achieves high performance on all three problems, consistently outperforming existing approaches to model editing by a significant margin. Code, data, and additional project information will be made available at https://sites.google.com/view/serac-editing.
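The abstract describes SERAC as a semi-parametric editor: edits live in an explicit memory, a scope mechanism decides whether a test input falls under a stored edit, and an edit-conditioned (counterfactual) model overrides the frozen base model when it does. The sketch below illustrates that routing logic only; the component names (scope_classifier, counterfactual_model), the retrieval-by-max-score step, and the threshold are assumptions for illustration, not the authors' exact implementation.

```python
# Minimal sketch of SERAC-style inference, inferred from the abstract.
# All component names and the threshold are illustrative assumptions.

from dataclasses import dataclass, field
from typing import Callable, List, Tuple


@dataclass
class SeracEditor:
    base_model: Callable[[str], str]                      # frozen pre-trained model
    counterfactual_model: Callable[[str, str], str]       # predicts from (edit, input)
    scope_classifier: Callable[[str, str], float]         # relevance of edit to input, in [0, 1]
    edit_memory: List[str] = field(default_factory=list)  # explicit store of edit descriptors
    threshold: float = 0.5                                 # assumed scope cutoff

    def apply_edit(self, edit: str) -> None:
        """Store an edit (e.g. a corrected fact) without touching base-model weights."""
        self.edit_memory.append(edit)

    def predict(self, x: str) -> str:
        """Route the query: use the edit-conditioned model if a stored edit is in scope."""
        if self.edit_memory:
            # Score every stored edit against the input and pick the most relevant one.
            scored: List[Tuple[float, str]] = [
                (self.scope_classifier(e, x), e) for e in self.edit_memory
            ]
            score, best_edit = max(scored)
            if score >= self.threshold:
                # In scope: modulate behavior via the counterfactual model.
                return self.counterfactual_model(best_edit, x)
        # Out of scope (or no edits stored): fall back to the unmodified base model.
        return self.base_model(x)
```

Because the base model's weights are never updated, many edits can accumulate in memory without the interference that degrades weight-editing approaches after repeated edits.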
