

Poster

Learning Universal Predictors

Jordi Grau-Moya · Tim Genewein · Marcus Hutter · Laurent Orseau · Gregoire Deletang · Elliot Catt · Anian Ruoss · Li Kevin Wenliang · Christopher Mattern · Matthew Aitchison · Joel Veness

Hall C 4-9 #2208
Wed 24 Jul 2:30 a.m. PDT — 4 a.m. PDT

Abstract:

Meta-learning has emerged as a powerful approach for training neural networks to learn new tasks quickly from limited data by pre-training them on a broad set of tasks. But what are the limits of meta-learning? In this work, we explore the potential of amortizing the most powerful universal predictor, namely Solomonoff Induction (SI), into neural networks by pushing (memory-based) meta-learning to its limits. We use Universal Turing Machines (UTMs) to generate training data that exposes networks to a broad range of patterns. We provide a theoretical analysis of the UTM data-generation processes and meta-training protocols. We conduct comprehensive experiments with neural architectures (e.g. LSTMs, Transformers) and algorithmic data generators of varying complexity and universality. Our results suggest that UTM data is a valuable resource for meta-learning, and that it can be used to train neural networks capable of learning universal prediction strategies.
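To make the data-generation idea concrete, below is a minimal, hypothetical sketch of meta-training episodes produced by randomly sampled machines. It is not the paper's actual UTM generator or program prior; all function names (e.g. `sample_random_machine`, `sample_episode`) and parameter choices are illustrative assumptions, intended only to show the flavor of "sample a random program/machine, run it with a step bound, and use its output sequence as one prediction task."

```python
import random

def sample_random_machine(num_states=4, num_symbols=2, rng=None):
    """Sample a random transition table: (state, symbol) -> (write, move, next_state)."""
    rng = rng or random.Random()
    table = {}
    for s in range(num_states):
        for a in range(num_symbols):
            table[(s, a)] = (
                rng.randrange(num_symbols),   # symbol to write
                rng.choice([-1, 1]),          # head move (left/right)
                rng.randrange(num_states),    # next state
            )
    return table

def run_machine(table, max_steps=256, tape_len=64):
    """Run the machine for a bounded number of steps; return the written symbols."""
    tape = [0] * tape_len
    head, state = 0, 0
    output = []
    for _ in range(max_steps):
        symbol = tape[head % tape_len]
        write, move, state = table[(state, symbol)]
        tape[head % tape_len] = write
        output.append(write)
        head += move
    return output

def sample_episode(seq_len=128, rng=None):
    """One meta-training episode: an output sequence from a freshly sampled machine.

    A sequence model would be trained to predict each symbol from the preceding ones,
    with a new machine drawn for every episode."""
    table = sample_random_machine(rng=rng)
    return run_machine(table, max_steps=seq_len)[:seq_len]

if __name__ == "__main__":
    rng = random.Random(0)
    for seq in (sample_episode(rng=rng) for _ in range(4)):
        print("".join(map(str, seq[:32])))
```

Under this kind of setup, the predictor never sees the generating machine directly; it only observes output prefixes, so minimizing next-symbol prediction loss over many such episodes is what drives the network toward a general-purpose (SI-like) prediction strategy.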
