Pushing the limits of self-supervised ResNets: Can we outperform supervised learning without labels on ImageNet?
Nenad Tomasev · Ioana Bica · Brian McWilliams · Lars Buesing · Razvan Pascanu · Charles Blundell · Jovana Mitrovic
Event URL: https://openreview.net/forum?id=oNIKfCtr8wH

Despite recent progress in self-supervised representation learning with residual networks, these methods still underperform supervised learning on the ImageNet classification benchmark. To address this, we propose a novel self-supervised representation learning method, Representation Learning via Invariant Causal Mechanisms v2 (ReLICv2), which builds on ReLIC (Mitrovic et al., 2021) and explicitly enforces invariance to spurious features such as background and object style. We conduct an extensive experimental evaluation across a varied set of datasets, learning settings, and tasks. ReLICv2 achieves 77.1% top-1 accuracy on ImageNet under linear evaluation with a ResNet50 architecture and 80.6% with larger ResNet models, outperforming previous state-of-the-art self-supervised approaches by a wide margin. Moreover, we show a relative overall improvement of more than +5% over the supervised baseline in the transfer setting, and that ReLICv2 learns more robust representations than both self-supervised and supervised models. Most notably, ReLICv2 is the first unsupervised representation learning method to consistently outperform a standard supervised baseline in a like-for-like comparison across a wide range of ResNet architectures. Finally, we show that, despite using ResNet encoders, ReLICv2 is comparable to state-of-the-art self-supervised vision transformers.
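
The abstract does not spell out the training objective, but the general idea behind ReLIC-style methods is to combine a contrastive loss with an explicit penalty that makes the representation invariant to the choice of augmentation. The following is a minimal PyTorch sketch of that idea: an InfoNCE term plus a KL-divergence regularizer between the two views' similarity distributions. The function name, the `temperature` and `alpha` hyperparameters, and the exact form of the regularizer are illustrative assumptions, not the authors' implementation.

```python
import torch
import torch.nn.functional as F

def relic_style_loss(z1, z2, temperature=0.1, alpha=1.0):
    """Sketch of a contrastive objective with an explicit invariance penalty.

    z1, z2: (batch, dim) embeddings of two augmented views of the same images.
    NOTE: this is an illustrative assumption about the loss, not ReLICv2 itself.
    """
    z1 = F.normalize(z1, dim=1)
    z2 = F.normalize(z2, dim=1)

    # Pairwise similarity logits between the two views.
    logits_12 = z1 @ z2.t() / temperature  # (batch, batch)
    logits_21 = z2 @ z1.t() / temperature

    # Contrastive (InfoNCE) term: matching pairs (i, i) are the positives.
    targets = torch.arange(z1.size(0), device=z1.device)
    contrastive = 0.5 * (F.cross_entropy(logits_12, targets)
                         + F.cross_entropy(logits_21, targets))

    # Invariance term: the similarity distribution over the batch should not
    # depend on which augmentation produced the anchor (KL between the two
    # views' softmax similarity distributions).
    log_p = F.log_softmax(logits_12, dim=1)
    q = F.softmax(logits_21, dim=1)
    invariance = F.kl_div(log_p, q, reduction="batchmean")

    return contrastive + alpha * invariance

# Example usage with random embeddings:
# z1, z2 = torch.randn(256, 128), torch.randn(256, 128)
# loss = relic_style_loss(z1, z2)
```

Setting `alpha` to zero recovers a plain contrastive objective; the invariance term is what discourages the representation from depending on spurious, augmentation-varying features.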

Author Information

Nenad Tomasev (DeepMind)

My projects incorporate elements of both fundamental and applied machine learning research, predominantly aimed at solving big open problems in the sciences and healthcare. After obtaining my PhD in high-dimensional machine learning from JSI, I joined Google, first as an intern on the Chrome data team in Montreal and then as a member of the Email Intelligence team in Gmail. I joined DeepMind in January 2016 as one of the initial members of the newly formed Health AI team, where we went on to solve a series of impactful open problems in medicine. We developed a system for identifying early signs of retinal disease from 3D optical coherence tomography imaging, as well as a universal pipeline for detecting early signs of patient deterioration in electronic health records; we were the first group to report a clinically applicable level of performance for the early prediction of acute kidney injury, work that was subsequently published in Nature.

Around that time, I started doing research with AlphaZero, in our ongoing collaboration with the 14th world chess champion, GM Vladimir Kramnik. We used AlphaZero to prototype a number of new variants of the game of chess and to examine the emergent patterns and underlying dynamics. These variants were implemented on several online chess portals, including Chess.com, and top-level tournaments have started experimenting with them, most notably the Kramnik-Anand clash held in Dortmund in 2021. In a more recent study, we developed an explainability framework that opens up the black box of AlphaZero's neural network to automatically identify and characterize its encoded conceptual knowledge and how that knowledge is used in decision making, in the hope of facilitating knowledge transfer between humans and the machine.

This sequence of applied projects was coupled with more fundamental research aimed primarily at representation learning in deep neural networks, as well as sociotechnical studies on framing AI as a tool for social good in a robust, fair, and equitable way.

Ioana Bica (University of Oxford)
Brian McWilliams (DeepMind)
Lars Buesing (DeepMind)
Razvan Pascanu (DeepMind)
Charles Blundell (DeepMind)
Jovana Mitrovic (DeepMind)
