Language Generation with Feedback: Queries and Mistakes
Abstract
We investigate language generation in the limit (Kleinberg & Mullainathan, 2024; Li et al., 2025) in variants where the generator receives some feedback based on its “actions.” We study two such variants. In the first, which is inspired by Littlestone’s model of online learning, the generator observes at each iteration whether it made a mistake. In the second, introduced by Charikar & Pabbaraju (2025a), the generator can query whether a string belongs to the target language. Our main result is a characterization of the collections that are generable with mistake feedback. Using similar techniques, we also characterize when generation is possible in the query model with set-based generators; such generators have been studied in several works (Charikar & Pabbaraju, 2025a; Kalavasis et al., 2025; Kleinberg & Wei, 2025a; Li et al., 2025). Beyond the characterizations themselves, we derive several implications. First, our results imply new closure properties for generation with mistake and query feedback. Second, our results show that, under feedback, generation is robust to noise: it remains possible with arbitrary contamination in the adversary’s examples and with finite contamination in the feedback. Third, among other implications, our techniques yield new sufficient and necessary conditions for generation without feedback.