

Talk

Attentive Recurrent Comparators

Pranav Shyam · Shubham Gupta · Ambedkar Dukkipati

Parkside 1

Abstract:

Rapid learning requires flexible representations that can quickly adapt to new evidence. We develop a novel class of models called Attentive Recurrent Comparators (ARCs) that form representations of objects by cycling through them and making observations. Using the representations extracted by ARCs, we develop a way of approximating a dynamic representation space and use it for one-shot learning. On the task of one-shot classification on the Omniglot dataset, we achieve state-of-the-art performance with an error rate of 1.5%. This is the first super-human result achieved on this task by a generic model that uses only pixel information.
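To make the "cycling through objects and making observations" idea concrete, below is a minimal sketch of an ARC-style comparison loop, not the authors' implementation: a recurrent core alternately takes an attention glimpse from each of the two images, and its final hidden state serves as the comparison embedding. The glimpse size, number of steps, use of a plain RNN cell, and the hard-crop attention (the paper's attention mechanism differs) are all simplifying assumptions made for illustration.

import numpy as np

rng = np.random.default_rng(0)

IMG, GLIMPSE, HIDDEN, STEPS = 32, 8, 64, 8  # assumed sizes, not the paper's settings

# Randomly initialised parameters of a vanilla RNN cell and an attention head.
W_x = rng.normal(0, 0.1, (HIDDEN, GLIMPSE * GLIMPSE))
W_h = rng.normal(0, 0.1, (HIDDEN, HIDDEN))
W_attn = rng.normal(0, 0.1, (2, HIDDEN))  # maps hidden state -> glimpse centre (x, y)

def glimpse(image, h):
    """Crop a GLIMPSE x GLIMPSE window whose location is predicted from the hidden state."""
    cx, cy = np.tanh(W_attn @ h)                 # coordinates in [-1, 1]
    x = int((cx + 1) / 2 * (IMG - GLIMPSE))      # map to a valid top-left corner
    y = int((cy + 1) / 2 * (IMG - GLIMPSE))
    return image[y:y + GLIMPSE, x:x + GLIMPSE].reshape(-1)

def compare(img_a, img_b):
    """Alternately attend to the two images, feeding each glimpse into the recurrent core."""
    h = np.zeros(HIDDEN)
    for t in range(STEPS):
        img = img_a if t % 2 == 0 else img_b     # cycle between the two objects
        g = glimpse(img, h)
        h = np.tanh(W_x @ g + W_h @ h)           # recurrent update conditioned on the glimpse
    return h                                      # final state acts as the comparison embedding

a = rng.random((IMG, IMG))
b = rng.random((IMG, IMG))
print(compare(a, b)[:5])

In the sketch, where each glimpse lands depends on everything observed so far in both images, which is what lets the comparator direct its observations; in practice the parameters would be trained end-to-end on a similarity objective rather than left random.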
