

Poster in Workshop: Beyond Bayes: Paths Towards Universal Reasoning Systems

P31: Can Humans Do Less-Than-One-Shot Learning?


Authors: Maya Malaviya, Ilia Sucholutsky, Kerem Oktar, Thomas L. Griffiths

Abstract: Being able to learn from small amounts of data is a key characteristic of human intelligence, but exactly how small? In this paper, we introduce a novel experimental paradigm that allows us to examine classification in an extremely data-scarce setting, asking whether humans can learn more categories than they have exemplars (i.e., can humans do "less-than-one-shot" learning?). An experiment conducted using this paradigm reveals that people are capable of learning in such settings, and provides several insights into underlying mechanisms. First, people can accurately infer and represent high-dimensional feature spaces from very little data. Second, having inferred the relevant spaces, people use a form of prototype-based categorization (as opposed to exemplar-based) to make categorical inferences. Finally, systematic, machine-learnable patterns in responses indicate that people may have efficient inductive biases for dealing with this class of data-scarce problems.
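The abstract contrasts prototype-based and exemplar-based categorization. The sketch below is an illustrative toy example (not the authors' code or data): it classifies a query point either by distance to each category's mean prototype or by distance to the single nearest stored exemplar, on made-up 2D points chosen so the two strategies disagree.

# Illustrative sketch only; categories, points, and the query are hypothetical.
import numpy as np

def prototype_classify(x, prototypes):
    """Assign x to the category whose prototype (mean of its exemplars) is nearest."""
    names = list(prototypes)
    dists = [np.linalg.norm(x - prototypes[name]) for name in names]
    return names[int(np.argmin(dists))]

def exemplar_classify(x, exemplars):
    """Assign x to the category of the single nearest stored exemplar (1-nearest neighbor)."""
    best_name, best_dist = None, np.inf
    for name, points in exemplars.items():
        d = min(np.linalg.norm(x - p) for p in points)
        if d < best_dist:
            best_name, best_dist = name, d
    return best_name

if __name__ == "__main__":
    # Category A has two spread-out exemplars, so its prototype sits between them;
    # category B has one exemplar. The query is nearer an A exemplar than B's,
    # but nearer B's prototype than A's, so the two strategies give different answers.
    exemplars = {
        "A": [np.array([0.0, 0.0]), np.array([2.0, 0.0])],
        "B": [np.array([0.3, 1.5])],
    }
    prototypes = {name: np.mean(pts, axis=0) for name, pts in exemplars.items()}

    query = np.array([0.2, 0.6])
    print("prototype-based:", prototype_classify(query, prototypes))  # -> B
    print("exemplar-based:", exemplar_classify(query, exemplars))     # -> A

In the less-than-one-shot regime described in the abstract, learners would have fewer labeled points than categories, so neither dictionary above could be populated with one point per category; the code only illustrates the mechanical difference between the two categorization strategies.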
