Workshop
Understanding and Improving Generalization in Deep Learning
Dilip Krishnan · Hossein Mobahi · Behnam Neyshabur · Peter Bartlett · Dawn Song · Nati Srebro
Grand Ballroom A
Fri 14 Jun, 8:30 a.m. PDT
The 1st workshop on Generalization in Deep Networks: Theory and Practice will be held as part of ICML 2019. Generalization is one of the fundamental problems of machine learning, and it is increasingly important as deep networks enter domains with big, small, noisy, or skewed data. This workshop will consider generalization from both theoretical and practical perspectives. We welcome contributions from paradigms such as representation learning, transfer learning, and reinforcement learning. The workshop invites researchers to submit working papers in the following research areas:
Implicit regularization: the role of optimization algorithms in generalization
Explicit regularization methods
Network architecture choices that improve generalization
Empirical approaches to understanding generalization
Generalization bounds; empirical criteria for evaluating bounds
Robustness: generalizing under distribution shift (a.k.a. dataset shift)
Generalization in the context of representation learning, transfer learning, and deep reinforcement learning: definitions and empirical approaches