Understanding and Improving Generalization in Deep Learning
Dilip Krishnan · Hossein Mobahi · Behnam Neyshabur · Peter Bartlett · Dawn Song · Nati Srebro

Fri Jun 14th 08:30 AM -- 06:00 PM @ Grand Ballroom A

The 1st workshop on Generalization in Deep Networks: Theory and Practice will be held as part of ICML 2019. Generalization is one of the fundamental problems of machine learning, and it is increasingly important as deep networks make their presence felt in domains with big, small, noisy, or skewed data. This workshop will consider generalization from both theoretical and practical perspectives. We welcome contributions from paradigms such as representation learning, transfer learning, and reinforcement learning. The workshop invites researchers to submit working papers in the following research areas:

Implicit regularization: the role of optimization algorithms in generalization
Explicit regularization methods
Network architecture choices that improve generalization
Empirical approaches to understanding generalization
Generalization bounds; empirical evaluation criteria to evaluate bounds
Robustness: generalizing to distributional shift, a.k.a. dataset shift
Generalization in the context of representation learning, transfer learning and deep reinforcement learning: definitions and empirical approaches
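As a concrete illustration of the empirical approaches listed above, the generalization gap can be measured directly as the difference between test and training error. The toy sketch below (a hypothetical example, not from the workshop itself) fits polynomials of increasing degree to a small noisy sample and reports the resulting gap, showing how higher-capacity models can drive training error to zero while test error grows:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy regression data: y = sin(x) + noise, with a deliberately small train set.
x_train = rng.uniform(-3, 3, 20)
y_train = np.sin(x_train) + rng.normal(0, 0.1, 20)
x_test = rng.uniform(-3, 3, 200)
y_test = np.sin(x_test) + rng.normal(0, 0.1, 200)

def generalization_gap(degree):
    """Gap = test MSE - train MSE for a polynomial fit of the given degree."""
    coeffs = np.polyfit(x_train, y_train, degree)
    train_err = np.mean((np.polyval(coeffs, x_train) - y_train) ** 2)
    test_err = np.mean((np.polyval(coeffs, x_test) - y_test) ** 2)
    return test_err - train_err

# Higher-capacity models typically widen the gap on this tiny sample.
for d in (1, 3, 15):
    print(f"degree {d:2d}: gap = {generalization_gap(d):.4f}")
```

This is the classical capacity-based picture; several of the talks above (e.g. on uniform convergence and overparameterization) examine why it fails to fully explain deep networks, where very high-capacity models often generalize well anyway.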

08:30 AM Opening Remarks
08:40 AM Keynote by Dan Roy: Progress on Nonvacuous Generalization Bounds (Invited Talk) Dan Roy
09:20 AM Keynote by Chelsea Finn: Training for Generalization (Invited Talk) Chelsea Finn
09:50 AM A Meta-Analysis of Overfitting in Machine Learning (Spotlight)
10:05 AM Uniform convergence may be unable to explain generalization in deep learning (Spotlight)
10:20 AM Break and Poster Session
10:40 AM Keynote by Sham Kakade: Prediction, Learning, and Memory (Invited Talk) Sham Kakade
11:10 AM Keynote by Mikhail Belkin: A Hard Look at Generalization and its Theories (Invited Talk) Mikhail Belkin
11:40 AM Towards Task and Architecture-Independent Generalization Gap Predictors (Spotlight)
11:55 AM Data-Dependent Sample Complexity of Deep Neural Networks via Lipschitz Augmentation (Spotlight)
12:10 PM Lunch and Poster Session
01:30 PM Keynote by Aleksander Mądry: Are All Features Created Equal? (Invited Talk) Aleksander Madry
02:00 PM Keynote by Jason Lee: On the Foundations of Deep Learning: SGD, Overparametrization, and Generalization (Invited Talk) Jason Lee
02:30 PM Towards Large Scale Structure of the Loss Landscape of Neural Networks (Spotlight)
02:45 PM Zero-Shot Learning from scratch: leveraging local compositional representations (Spotlight)
03:00 PM Break and Poster Session
03:30 PM Panel Discussion (Nati Srebro, Dan Roy, Chelsea Finn, Mikhail Belkin, Aleksander Mądry, Jason Lee) (Panel Discussion) Nati Srebro, Dan Roy, Chelsea Finn, Mikhail Belkin, Aleksander Madry, Jason Lee
04:30 PM Overparameterization without Overfitting: Jacobian-based Generalization Guarantees for Neural Networks (Spotlight)
04:45 PM How Learning Rate and Delay Affect Minima Selection in Asynchronous Training of Neural Networks: Toward Closing the Generalization Gap (Spotlight)
05:00 PM Poster Session

Author Information

Dilip Krishnan (Google)
Hossein Mobahi (Google)
Behnam Neyshabur (Google)
Peter Bartlett (University of California, Berkeley)
Dawn Song (UC Berkeley)

Dawn Song is a Professor in the Department of Electrical Engineering and Computer Science at UC Berkeley. Her research interests lie in deep learning, security, and blockchain. She has studied diverse security and privacy issues in computer systems and networks, ranging from software security, networking security, distributed systems security, applied cryptography, blockchain, and smart contracts to the intersection of machine learning and security. She is the recipient of various awards, including the MacArthur Fellowship, the Guggenheim Fellowship, the NSF CAREER Award, the Alfred P. Sloan Research Fellowship, the MIT Technology Review TR-35 Award, the George Tallman Ladd Research Award, the Okawa Foundation Research Award, the Li Ka Shing Foundation Women in Science Distinguished Lecture Series Award, Faculty Research Awards from IBM, Google, and other major tech companies, and Best Paper Awards from top conferences in computer security and deep learning. She obtained her Ph.D. from UC Berkeley. Prior to joining UC Berkeley as a faculty member, she was on the faculty at Carnegie Mellon University from 2002 to 2007.

Nati Srebro (Toyota Technological Institute at Chicago)
