Workshop
Over-parameterization: Pitfalls and Opportunities
Yasaman Bahri · Quanquan Gu · Amin Karbasi · Hanie Sedghi

Sat Jul 24 09:00 AM -- 06:20 PM (PDT)
Event URL: https://sites.google.com/view/icml2021oppo

Modern machine learning models are often highly over-parameterized. The prime examples are neural network architectures achieving state-of-the-art performance, which have many more parameters than training examples. While these models can empirically perform very well, they are not well understood. Worst-case theories of learnability do not explain their behavior. Indeed, over-parameterized models sometimes exhibit "benign overfitting", i.e., they have the power to perfectly fit training data (even data modified to have random labels), yet they achieve good performance on the test data. There is evidence that over-parameterization may be helpful both computationally and statistically, although attempts to use phenomena such as double/multiple descent to explain how over-parameterization helps achieve small test error remain controversial. Besides benign overfitting and double/multiple descent, many other interesting phenomena arise due to over-parameterization, and many more may yet be discovered. Many of these effects depend on the properties of data, but we have only simplistic tools to measure, quantify, and understand data. In light of rapid progress and rapidly shifting understanding, we believe that the time is ripe for a workshop focusing on understanding over-parameterization from multiple angles.
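
As a concrete illustration of the benign-overfitting phenomenon described above, the following sketch (a toy example of our own, not taken from the workshop material) fits an over-parameterized random-features regressor by minimum-norm least squares. With far more features than training points the model interpolates the training set exactly, whether the labels are genuine or randomly shuffled, yet the fit to the true labels typically attains much lower test error than the fit to the shuffled ones.

# Minimal sketch of "benign overfitting" with an over-parameterized
# random-features regressor; all names and sizes here are illustrative.
import numpy as np

rng = np.random.default_rng(0)

n_train, n_test, d, n_features = 100, 1000, 10, 2000  # features >> samples

# Ground-truth linear target with a small amount of label noise.
w_true = rng.normal(size=d)
X_train = rng.normal(size=(n_train, d))
X_test = rng.normal(size=(n_test, d))
y_train = X_train @ w_true + 0.1 * rng.normal(size=n_train)
y_test = X_test @ w_true

# Over-parameterized model: fixed random ReLU features, then linear regression.
W = rng.normal(size=(d, n_features)) / np.sqrt(d)
phi = lambda X: np.maximum(X @ W, 0.0)

def min_norm_fit(features, labels):
    # lstsq returns the minimum-norm solution when the system is underdetermined.
    coef, *_ = np.linalg.lstsq(features, labels, rcond=None)
    return coef

for name, labels in [("true labels", y_train),
                     ("random labels", rng.permutation(y_train))]:
    coef = min_norm_fit(phi(X_train), labels)
    train_mse = np.mean((phi(X_train) @ coef - labels) ** 2)
    test_mse = np.mean((phi(X_test) @ coef - y_test) ** 2)
    print(f"{name}: train MSE {train_mse:.2e}, test MSE {test_mse:.2f}")

In both runs the training error is essentially zero (the model interpolates), but only the fit to the true labels is expected to generalize; the fit to shuffled labels carries no signal and its test error stays large.
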

Author Information

Yasaman Bahri (Google Brain)
Quanquan Gu (University of California, Los Angeles)
Amin Karbasi (Yale)

Amin Karbasi is currently an assistant professor of Electrical Engineering, Computer Science, and Statistics at Yale University. He has been the recipient of the National Science Foundation (NSF) CAREER Award 2019, Office of Naval Research (ONR) Young Investigator Award 2019, Air Force Office of Scientific Research (AFOSR) Young Investigator Award 2018, DARPA Young Faculty Award 2016, National Academy of Engineering Grainger Award 2017, Amazon Research Award 2018, Google Faculty Research Award 2016, Microsoft Azure Research Award 2016, Simons Research Fellowship 2017, and ETH Research Fellowship 2013. His work has also been recognized with a number of paper awards, including Medical Image Computing and Computer Assisted Interventions Conference (MICCAI) 2017, International Conference on Artificial Intelligence and Statistics (AISTATS) 2015, IEEE ComSoc Data Storage 2013, International Conference on Acoustics, Speech, and Signal Processing (ICASSP) 2011, ACM SIGMETRICS 2010, and IEEE International Symposium on Information Theory (ISIT) 2010 (runner-up). His Ph.D. thesis received the Patrick Denantes Memorial Prize 2013 from the School of Computer and Communication Sciences at EPFL, Switzerland.

Hanie Sedghi (Google)

Hanie Sedghi is a Senior Research Scientist at Google DeepMind, where she leads the DeepPhenomena team. Her research focuses on understanding deep learning models in order to push their boundaries, spanning not only (out-of-distribution) generalization but also the broader algorithmic and scientific reasoning capabilities of large language models. She has served as workshop chair for NeurIPS 2022, tutorial chair for ICML 2022 and 2023, and program chair for CoLLAs 2023, and has been an area chair for NeurIPS, ICLR, and ICML and a member of the JMLR editorial board for the last few years. Prior to Google, Hanie was a Research Scientist at the Allen Institute for Artificial Intelligence and, before that, a postdoctoral fellow at UC Irvine. She received her PhD from the University of Southern California with a minor in Mathematics.
