Identifying and Understanding Deep Learning Phenomena
Hanie Sedghi · Samy Bengio · Kenji Hata · Aleksander Madry · Ari Morcos · Behnam Neyshabur · Maithra Raghu · Ali Rahimi · Ludwig Schmidt · Ying Xiao

Sat Jun 15 08:30 AM -- 06:00 PM (PDT) @ Hall B

Our understanding of modern neural networks lags behind their practical successes. As this understanding gap grows, it poses a serious challenge to the future pace of progress, because fewer pillars of knowledge will be available to designers of models and algorithms. This workshop aims to close this understanding gap in deep learning. It solicits contributions that view the behavior of deep nets as a natural phenomenon to investigate with methods inspired by the natural sciences, such as physics, astronomy, and biology. We solicit empirical work that isolates phenomena in deep nets, describes them quantitatively, and then replicates or falsifies them.

As a starting point for this effort, we focus on the interplay between data, network architecture, and training algorithms. We are looking for contributions that identify precise, reproducible phenomena, as well as systematic studies and evaluations of current beliefs such as "sharp local minima do not generalize well" or "SGD navigates out of local minima". Through the workshop, we hope to catalogue quantifiable versions of such statements and to demonstrate whether or not they occur reproducibly.

Author Information

Hanie Sedghi (Google Brain)
Samy Bengio (Google Research Brain Team)
Kenji Hata (Google)
Aleksander Madry (MIT)
Ari Morcos (Facebook AI Research (FAIR))
Behnam Neyshabur (Google)
Maithra Raghu (Cornell University / Google Brain)
Ali Rahimi (Google)
Ludwig Schmidt (University of California, Berkeley)
Ying Xiao (Google)