We propose a new point of view for regularizing deep neural networks by using the norm of a reproducing kernel Hilbert space (RKHS). Even though this norm cannot be computed, it admits upper and lower approximations that lead to various practical strategies. Specifically, this perspective (i) provides a common umbrella for many existing regularization principles, including spectral norm and gradient penalties as well as adversarial training, (ii) leads to new effective regularization penalties, and (iii) suggests hybrid strategies combining lower and upper bounds to obtain better approximations of the RKHS norm. We experimentally show this approach to be effective for learning on small datasets and for obtaining adversarially robust models.
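As an illustration of the gradient penalties mentioned in the abstract (one of the lower-bound surrogates for the RKHS norm), here is a minimal PyTorch sketch, not the authors' code: the names model, loss_fn, and lambda_reg are illustrative assumptions, and the penalty here is the squared norm of the input gradient of the model output.

```python
# Minimal sketch of a gradient penalty regularizer (assumed setup, not the paper's code).
import torch

def gradient_penalty(model, x):
    """Squared norm of the input gradient of the model output, averaged over the batch."""
    x = x.clone().detach().requires_grad_(True)
    out = model(x)
    # Summing outputs lets a single backward call return per-example input gradients.
    grads = torch.autograd.grad(out.sum(), x, create_graph=True)[0]
    return grads.flatten(start_dim=1).pow(2).sum(dim=1).mean()

def regularized_loss(model, loss_fn, x, y, lambda_reg=0.1):
    # Data-fitting term plus the gradient penalty as a regularizer.
    return loss_fn(model(x), y) + lambda_reg * gradient_penalty(model, x)
```

In this sketch the penalty is added to the usual training loss with a weight lambda_reg; other variants discussed in the abstract (spectral norm penalties, adversarial training) would replace or complement this term.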
Author Information
Alberto Bietti (Inria)
Grégoire Mialon (Inria)
Dexiong Chen (Inria)
Julien Mairal (Inria)
Related Events (a corresponding poster, oral, or spotlight)
- 2019 Oral: A Kernel Perspective for Regularizing Deep Neural Networks
  Tue. Jun 11th, 09:30 -- 09:35 PM, Room 101
More from the Same Authors
- 2022 Workshop: Continuous Time Perspectives in Machine Learning
  Mihaela Rosca · Chongli Qin · Julien Mairal · Marc Deisenroth
- 2020 Poster: Convolutional Kernel Networks for Graph-Structured Data
  Dexiong Chen · Laurent Jacob · Julien Mairal
- 2019 Poster: Estimate Sequences for Variance-Reduced Stochastic Composite Optimization
  Andrei Kulunchakov · Julien Mairal
- 2019 Oral: Estimate Sequences for Variance-Reduced Stochastic Composite Optimization
  Andrei Kulunchakov · Julien Mairal
- 2019 Invited Talk: Online Dictionary Learning for Sparse Coding
  Julien Mairal · Francis Bach · Jean Ponce · Guillermo Sapiro