Oral
A Kernel Theory of Modern Data Augmentation
Tri Dao · Albert Gu · Alexander J Ratner · Virginia Smith · Christopher De Sa · Christopher Re

Tue Jun 11th 03:05 -- 03:10 PM @ Room 101

Data augmentation, in which a training set is expanded with class-preserving transformations, is ubiquitous in modern machine learning pipelines. In this paper, we seek to establish a theoretical framework for understanding data augmentation. We approach this from two directions: first, we provide a general model of augmentation as a Markov process and show that kernels appear naturally with respect to this model, even when we do not employ kernel classification. Second, we directly analyze the effect of augmentation on kernel classifiers, showing that data augmentation can be approximated by a first-order feature-averaging component and a second-order variance-regularization component. Both frameworks illustrate how data augmentation affects the downstream learning model, and the resulting analyses provide novel connections to prior work on invariant kernels, tangent propagation, and robust optimization. Finally, we provide several proof-of-concept applications showing that our theory can be useful for accelerating machine learning workflows, such as reducing the computation needed to train on augmented data and predicting the utility of a transformation prior to training.
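The first- and second-order approximation described in the abstract can be illustrated with a small sketch. This is not the paper's implementation: the random Fourier features, the Gaussian-jitter "augmentation," and all dimensions below are assumptions chosen for illustration. The idea is that, to first order, training on augmented copies of a point resembles training on the *average* of their feature maps, while the spread of those feature maps contributes a variance-based regularization term.

```python
import numpy as np

rng = np.random.default_rng(0)

def features(x, W, b):
    # Random Fourier features approximating an RBF kernel
    # (a stand-in feature map; the paper's analysis is kernel-generic).
    return np.sqrt(2.0 / W.shape[0]) * np.cos(W @ x + b)

def augment(x, n_copies=8, sigma=0.1):
    # Hypothetical class-preserving transformation: small Gaussian jitter.
    return np.stack([x + sigma * rng.standard_normal(x.shape)
                     for _ in range(n_copies)])

# Toy data point and random feature parameters (dimensions are arbitrary).
d, D = 5, 256
x = rng.standard_normal(d)
W = rng.standard_normal((D, d))
b = rng.uniform(0.0, 2.0 * np.pi, D)

# Feature maps of the augmented copies of a single training point.
Phi = np.stack([features(xa, W, b) for xa in augment(x)])  # shape (8, 256)

# First-order component: average the features over augmentations,
# so downstream training sees one averaged representation per point.
phi_avg = Phi.mean(axis=0)

# Second-order component: the feature variance across augmentations,
# which acts as a data-dependent regularization term in the analysis.
var_penalty = Phi.var(axis=0).sum()

print(phi_avg.shape, var_penalty >= 0.0)
```

In this sketch, a kernel classifier trained on `phi_avg` with an added `var_penalty`-style regularizer would approximate training on all augmented copies, at a fraction of the cost; this is the flavor of the workflow acceleration the abstract alludes to.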

Author Information

Tri Dao (Stanford)
Albert Gu (Stanford University)
Alexander J Ratner (Stanford University)
Virginia Smith (Carnegie Mellon University)
Christopher De Sa (Cornell)
Christopher Re (Stanford)

