Identifying and Understanding Deep Learning Phenomena
Hanie Sedghi · Samy Bengio · Kenji Hata · Aleksander Madry · Ari Morcos · Behnam Neyshabur · Maithra Raghu · Ali Rahimi · Ludwig Schmidt · Ying Xiao

Sat Jun 15th 08:30 AM -- 06:00 PM @ Hall B

Our understanding of modern neural networks lags behind their practical successes. As this understanding gap grows, it poses a serious challenge to the future pace of progress, because fewer pillars of knowledge will be available to designers of models and algorithms. This workshop aims to close this understanding gap in deep learning. It solicits contributions that view the behavior of deep nets as a natural phenomenon to investigate with methods inspired by the natural sciences, such as physics, astronomy, and biology. We solicit empirical work that isolates phenomena in deep nets, describes them quantitatively, and then replicates or falsifies them.

As a starting point for this effort, we focus on the interplay between data, network architecture, and training algorithms. We are looking for contributions that identify precise, reproducible phenomena, as well as systematic studies and evaluations of current beliefs such as “sharp local minima do not generalize well” or “SGD navigates out of local minima”. Through the workshop, we hope to catalogue quantifiable versions of such statements, as well as demonstrate whether or not they occur reproducibly.

08:45 AM Opening Remarks
09:00 AM Nati Srebro: Optimization’s Untold Gift to Learning: Implicit Regularization (Talk)
09:30 AM Bad Global Minima Exist and SGD Can Reach Them (Spotlight)
09:45 AM Deconstructing Lottery Tickets: Zeros, Signs, and the Supermask (Spotlight)
10:00 AM Chiyuan Zhang: Are all layers created equal? -- Studies on how neural networks represent functions (Talk)
10:30 AM Break and Posters
11:00 AM Line attractor dynamics in recurrent networks for sentiment classification (Spotlight)
11:15 AM Do deep neural networks learn shallow learnable examples first? (Spotlight)
11:30 AM Crowdsourcing Deep Learning Phenomena
12:00 PM Lunch and Posters
01:30 PM Aude Oliva: Reverse engineering neuroscience and cognitive science principles (Talk)
02:00 PM On Understanding the Hardness of Samples in Neural Networks (Spotlight)
02:15 PM On the Convex Behavior of Deep Neural Networks in Relation to the Layers' Width (Spotlight)
02:30 PM Andrew Saxe: Intriguing phenomena in training and generalization dynamics of deep networks (Invited Talk)
03:00 PM Break and Posters
04:00 PM Olga Russakovsky (Invited Talk)
04:30 PM Panel Discussion: Kevin Murphy, Nati Srebro, Aude Oliva, Andrew Saxe, Olga Russakovsky; Moderator: Ali Rahimi

Author Information

Hanie Sedghi (Google Brain)
Samy Bengio (Google Research Brain Team)
Kenji Hata (Google)
Aleksander Madry (MIT)
Ari Morcos (Facebook AI Research (FAIR))
Behnam Neyshabur (Google)
Maithra Raghu (Cornell University / Google Brain)
Ali Rahimi (Google)
Ludwig Schmidt (University of California, Berkeley)
Ying Xiao (Google)
