
Plex: Towards Reliability using Pretrained Large Model Extensions
Dustin Tran · Andreas Kirsch · Balaji Lakshminarayanan · Huiyi Hu · Du Phan · D. Sculley · Jasper Snoek · Jeremiah Liu · Jie Ren · Joost van Amersfoort · Kehang Han · E. Kelly Buchanan · Kevin Murphy · Mark Collier · Mike Dusenberry · Neil Band · Nithum Thain · Rodolphe Jenatton · Tim G. J. Rudner · Yarin Gal · Zachary Nado · Zelda Mariet · Zi Wang · Zoubin Ghahramani

A recent trend in artificial intelligence (AI) is the use of pretrained models for language and vision tasks, which has achieved extraordinary performance but also puzzling failures. Examining tasks that probe the model’s abilities in diverse ways is therefore critical to the field. In this paper, we explore the reliability of models, where we define a reliable model as one that not only achieves strong predictive performance but also performs consistently well over many decision-making tasks involving uncertainty (e.g., selective prediction, open set recognition), robust generalization (e.g., accuracy and proper scoring rules such as log-likelihood on in- and out-of-distribution datasets), and adaptation (e.g., active learning, few-shot learning). We devise 10 types of tasks over 36 datasets in order to evaluate different aspects of reliability on both vision and language domains. To improve reliability, we develop ViT-Plex and T5-Plex, pretrained large model extensions (Plex) for the vision and language modalities, respectively. Plex greatly improves the state of the art across tasks, and simplifies the traditional protocol as it does not require designing scores or tuning the model for each individual task. We demonstrate scaling effects over model sizes and pretraining dataset sizes up to 4 billion examples. We also demonstrate Plex’s capabilities on challenging tasks including zero-shot open set recognition, few-shot uncertainty, and uncertainty in conversational language understanding.
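To make one of the uncertainty tasks named above concrete, the sketch below illustrates selective prediction in its simplest form: the model abstains whenever its maximum predicted class probability falls below a confidence threshold, and accuracy is then measured only on the accepted inputs. This is a generic illustration with made-up toy numbers, not the paper's evaluation code; the function name and threshold are assumptions for the example.

```python
import numpy as np

def selective_prediction(probs, labels, threshold=0.8):
    """Abstain when the max predicted probability is below `threshold`.

    Returns (selective accuracy on accepted inputs, coverage), where
    coverage is the fraction of inputs the model chose to answer.
    """
    confidence = probs.max(axis=1)          # per-example max softmax probability
    accepted = confidence >= threshold      # boolean mask of answered inputs
    coverage = float(accepted.mean())
    if not accepted.any():                  # model abstained on everything
        return 0.0, coverage
    preds = probs.argmax(axis=1)
    selective_acc = float((preds[accepted] == labels[accepted]).mean())
    return selective_acc, coverage

# Toy predictive distributions over 3 classes for 4 inputs.
probs = np.array([
    [0.90, 0.05, 0.05],   # confident -> answered
    [0.60, 0.30, 0.10],   # below threshold -> abstain
    [0.10, 0.85, 0.05],   # confident -> answered
    [0.50, 0.25, 0.25],   # below threshold -> abstain
])
labels = np.array([0, 1, 1, 2])
acc, cov = selective_prediction(probs, labels)
# Both answered inputs are correct, and half the inputs are answered.
```

A well-calibrated, reliable model should trade coverage for accuracy gracefully: as the threshold rises, the accepted subset shrinks but its accuracy should increase.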

Author Information

Dustin Tran (Google Brain)
Andreas Kirsch (University of Oxford)
Balaji Lakshminarayanan (Google Brain)
Huiyi Hu (DeepMind)
Du Phan (Google)
D. Sculley (Google)
Jasper Snoek (Google Brain)
Jeremiah Liu (Google Research)
Jie Ren (Google Brain)
Joost van Amersfoort (University of Oxford)
Kehang Han (Google)
E. Kelly Buchanan (Columbia University)
Kevin Murphy (Google Brain)
Mark Collier (Google)
Mike Dusenberry (Google)
Neil Band (University of Oxford)
Nithum Thain (Google)
Rodolphe Jenatton (Google Research)
Tim G. J Rudner (University of Oxford)
Yarin Gal (University of Oxford)
Zachary Nado (Google Research, Brain Team)
Zelda Mariet (Google Inc.)
Zi Wang (Google Brain)
Zoubin Ghahramani (University of Cambridge & Uber)

Zoubin Ghahramani is a Professor at the University of Cambridge, and Chief Scientist at Uber. He is also Deputy Director of the Leverhulme Centre for the Future of Intelligence, was a founding Director of the Alan Turing Institute and co-founder of Geometric Intelligence (now Uber AI Labs). His research focuses on probabilistic approaches to machine learning and AI. In 2015 he was elected a Fellow of the Royal Society.