Workshop
Identifying and Understanding Deep Learning Phenomena
Hanie Sedghi · Samy Bengio · Kenji Hata · Aleksander Madry · Ari Morcos · Behnam Neyshabur · Maithra Raghu · Ali Rahimi · Ludwig Schmidt · Ying Xiao
Sat 15 Jun, 8:30 a.m. PDT
Our understanding of modern neural networks lags behind their practical successes. As this understanding gap grows, it poses a serious challenge to the future pace of progress, because fewer pillars of knowledge will be available to designers of models and algorithms. This workshop aims to close this understanding gap in deep learning. It solicits contributions that view the behavior of deep nets as a natural phenomenon to investigate with methods inspired by the natural sciences, such as physics, astronomy, and biology. We solicit empirical work that isolates phenomena in deep nets, describes them quantitatively, and then replicates or falsifies them.
As a starting point for this effort, we focus on the interplay between data, network architecture, and training algorithms. We are looking for contributions that identify precise, reproducible phenomena, as well as systematic studies and evaluations of current beliefs such as “sharp local minima do not generalize well” or “SGD navigates out of local minima”. Through the workshop, we hope to catalogue quantifiable versions of such statements, as well as demonstrate whether or not they occur reproducibly.
Schedule
Sat 8:45 a.m. - 9:00 a.m. | Opening Remarks
Sat 9:00 a.m. - 9:30 a.m. | Talk: Nati Srebro, "Optimization's Untold Gift to Learning: Implicit Regularization"
Sat 9:30 a.m. - 9:45 a.m. | Spotlight: "Bad Global Minima Exist and SGD Can Reach Them"
Sat 9:45 a.m. - 10:00 a.m. | Spotlight: "Deconstructing Lottery Tickets: Zeros, Signs, and the Supermask"
Sat 10:00 a.m. - 10:30 a.m. | Talk: Chiyuan Zhang, "Are All Layers Created Equal? Studies on How Neural Networks Represent Functions"
Sat 10:30 a.m. - 11:00 a.m. | Break and Posters
Sat 11:00 a.m. - 11:15 a.m. | Spotlight: "Line Attractor Dynamics in Recurrent Networks for Sentiment Classification"
Sat 11:15 a.m. - 11:30 a.m. | Spotlight: "Do Deep Neural Networks Learn Shallow Learnable Examples First?"
Sat 11:30 a.m. - 12:00 p.m. | Crowdsourcing Deep Learning Phenomena
Sat 12:00 p.m. - 1:30 p.m. | Lunch and Posters
Sat 1:30 p.m. - 2:00 p.m. | Talk: Aude Oliva, "Reverse Engineering Neuroscience and Cognitive Science Principles"
Sat 2:00 p.m. - 2:15 p.m. | Spotlight: "On Understanding the Hardness of Samples in Neural Networks"
Sat 2:15 p.m. - 2:30 p.m. | Spotlight: "On the Convex Behavior of Deep Neural Networks in Relation to the Layers' Width"
Sat 2:30 p.m. - 3:00 p.m. | Invited Talk: Andrew Saxe, "Intriguing Phenomena in Training and Generalization Dynamics of Deep Networks"
Sat 3:00 p.m. - 4:00 p.m. | Break and Posters
Sat 4:00 p.m. - 4:30 p.m. | Invited Talk: Olga Russakovsky
Sat 4:30 p.m. - 5:30 p.m. | Panel Discussion: Kevin Murphy, Nati Srebro, Aude Oliva, Andrew Saxe, Olga Russakovsky (Moderator: Ali Rahimi)