Data augmentation methods have played an important role in the recent advance of deep learning models, and have become an indispensable component of state-of-the-art models in semi-supervised, self-supervised, and supervised training for vision. Despite incurring no additional latency at test time, data augmentation often requires more epochs of training to be effective. For example, even the simple flips-and-crops augmentation requires training for more than 5 epochs to improve performance, whereas RandAugment requires more than 90 epochs. We propose a general framework called Tied-Augment, which improves the efficacy of data augmentation in a wide range of applications by adding a simple term to the loss that can control the similarity of representations under distortions. Tied-Augment can improve state-of-the-art methods from data augmentation (e.g. RandAugment, mixup), optimization (e.g. SAM), and semi-supervised learning (e.g. FixMatch). For example, Tied-RandAugment can outperform RandAugment by 2.0% on ImageNet. Notably, using Tied-Augment, data augmentation can be made to improve generalization even when training for a few epochs and when fine-tuning. We open source our code at https://github.com/ekurtulus/tied-augment/tree/main.
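The abstract describes Tied-Augment as adding a simple loss term that controls the similarity of representations under distortions. A minimal sketch of this idea is below, assuming the classification loss is applied to two augmented views and the tying term is an L2 distance between their features; the function names, the choice of L2 (rather than, e.g., cosine similarity), and the weight `lam` are illustrative, not the paper's exact formulation.

```python
import numpy as np

def cross_entropy(logits, label):
    # Softmax cross-entropy for a single example.
    z = logits - logits.max()
    log_probs = z - np.log(np.exp(z).sum())
    return -log_probs[label]

def tied_augment_loss(feat1, feat2, logits1, logits2, label, lam=1.0):
    # Supervised loss on both augmented views of the same image,
    # plus a term tying their feature representations together.
    # L2 distance and the weight `lam` are illustrative choices.
    ce = cross_entropy(logits1, label) + cross_entropy(logits2, label)
    tie = np.sum((feat1 - feat2) ** 2)
    return ce + lam * tie
```

With `lam=0` this reduces to ordinary two-view training; the tying term makes the network's features invariant to the applied distortions, which the paper argues is what lets augmentation help even in short training runs.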
Author Information
Emirhan Kurtulus (Stanford University)
Emirhan Kurtulus is a freshman at Stanford University who loves deep learning and believes we need to go deeper.
Zichao Li (University of California, Santa Cruz)
Yann Nicolas Dauphin (Google DeepMind)
Ekin Dogus Cubuk (Google Brain)
More from the Same Authors
- 2023: Predicting Properties of Amorphous Solids with Graph Network Potentials
  Muratahan Aykol · Jennifer Wei · Simon Batzner · Amil Merchant · Ekin Dogus Cubuk
- 2023 Poster: SAM operates far from home: eigenvalue regularization as a dynamical phenomenon
  Atish Agarwala · Yann Nicolas Dauphin
- 2021 Poster: Learn2Hop: Learned Optimization on Rough Landscapes
  Amil Merchant · Luke Metz · Samuel Schoenholz · Ekin Dogus Cubuk
- 2021 Spotlight: Learn2Hop: Learned Optimization on Rough Landscapes
  Amil Merchant · Luke Metz · Samuel Schoenholz · Ekin Dogus Cubuk
- 2019 Poster: Adversarial Examples Are a Natural Consequence of Test Error in Noise
  Justin Gilmer · Nicolas Ford · Nicholas Carlini · Ekin Dogus Cubuk
- 2019 Oral: Adversarial Examples Are a Natural Consequence of Test Error in Noise
  Justin Gilmer · Nicolas Ford · Nicholas Carlini · Ekin Dogus Cubuk