Poster in Workshop: Knowledge and Logical Reasoning in the Era of Data-driven Learning

Modeling Human Few-Shot Learning using Bayesian Inference over Natural Language

Kevin Ellis


Abstract:

We give a computational model of how humans learn abstract symbolic concepts from few examples. Our model performs Bayesian inference over utterances in natural language. For efficient inference, it uses a large language model as a proposal distribution, and can be fit to human data in order to tune its prior to match human patterns of generalization. We evaluate our model on a generative concept learning setup, as well as a logical concept learning domain.
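The approach described above — reweighting natural-language hypotheses proposed by a language model with a tunable prior and a likelihood over the observed examples — can be sketched in miniature. This is an illustrative toy, not the authors' implementation: the proposal function is a hypothetical stub standing in for LLM samples, the candidate concepts and prior weights are invented for the example, and the likelihood follows the standard size-principle style common in Bayesian concept learning.

```python
# Toy sketch of Bayesian inference over natural-language concept hypotheses.
# ASSUMPTIONS: propose_hypotheses is a stand-in for an LLM proposal
# distribution; the concept pool, extensions, and prior weights are invented
# for illustration and are not from the paper.

def propose_hypotheses(examples):
    # A real model would sample candidate utterances from a large language
    # model conditioned on the examples; here we return a fixed pool.
    return ["even numbers", "multiples of 4", "numbers less than 10"]

EXTENSIONS = {
    "even numbers": set(range(0, 101, 2)),
    "multiples of 4": set(range(0, 101, 4)),
    "numbers less than 10": set(range(10)),
}

def likelihood(hypothesis, examples):
    # P(examples | hypothesis): zero if any example falls outside the
    # concept's extension; otherwise uniform sampling from the extension
    # (the "size principle": smaller consistent concepts score higher).
    ext = EXTENSIONS[hypothesis]
    if not all(x in ext for x in examples):
        return 0.0
    return (1.0 / len(ext)) ** len(examples)

def prior(hypothesis):
    # A tunable prior over utterances; fitting weights like these to human
    # data is how such a model is matched to human generalization patterns.
    return {"even numbers": 0.5,
            "multiples of 4": 0.3,
            "numbers less than 10": 0.2}[hypothesis]

def posterior(examples):
    # Reweight the proposed hypotheses by prior * likelihood and normalize.
    candidates = propose_hypotheses(examples)
    weights = {h: prior(h) * likelihood(h, examples) for h in candidates}
    z = sum(weights.values())
    return {h: w / z for h, w in weights.items()}

post = posterior([4, 8, 16])
```

With the examples `[4, 8, 16]`, "numbers less than 10" is inconsistent (16 lies outside it) and gets zero weight, while "multiples of 4" beats "even numbers" despite its lower prior, because its smaller extension makes the observed examples more likely.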