

Workshop

Over-parameterization: Pitfalls and Opportunities

Yasaman Bahri · Quanquan Gu · Amin Karbasi · Hanie Sedghi

Sat 24 Jul, 9 a.m. PDT

Modern machine learning models are often highly over-parameterized. The prime examples are neural network architectures achieving state-of-the-art performance, which have many more parameters than training examples. While these models can perform very well empirically, they are not well understood, and worst-case theories of learnability do not explain their behavior. Indeed, over-parameterized models sometimes exhibit "benign overfitting": they have the capacity to fit the training data perfectly (even data modified to have random labels), yet they achieve good performance on the test data. There is evidence that over-parameterization may be helpful both computationally and statistically, although attempts to use phenomena such as double/multiple descent to explain why over-parameterization helps achieve small test error remain controversial. Beyond benign overfitting and double/multiple descent, many other interesting phenomena arise from over-parameterization, and more may yet be discovered. Many of these effects depend on properties of the data, yet we have only simplistic tools to measure, quantify, and understand data. In light of rapid progress and rapidly shifting understanding, we believe the time is ripe for a workshop focused on understanding over-parameterization from multiple angles.
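As a concrete illustration of the benign-overfitting observation above, the sketch below (assuming scikit-learn and NumPy are available; the dataset, network width, and hyperparameters are illustrative choices, not taken from the workshop) trains a wide MLP with far more parameters than training examples, once on true labels and once on randomly permuted labels. Both runs can interpolate the training set, but only the true-label model generalizes to the test set.

```python
# Minimal sketch of benign overfitting: an over-parameterized MLP can fit
# both true and random labels on a small training set, but only the
# true-label model achieves good test accuracy.
import numpy as np
from sklearn.datasets import load_digits
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

X, y = load_digits(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, train_size=300, random_state=0, stratify=y)

rng = np.random.default_rng(0)
y_random = rng.permutation(y_train)  # destroy the label/feature relationship

def fit_wide_mlp(labels):
    # One hidden layer of 2048 units: far more parameters than the
    # 300 training examples, and no explicit regularization (alpha=0).
    model = MLPClassifier(hidden_layer_sizes=(2048,), alpha=0.0,
                          max_iter=5000, tol=1e-6, random_state=0)
    model.fit(X_train, labels)
    return model

for name, labels in [("true labels", y_train), ("random labels", y_random)]:
    model = fit_wide_mlp(labels)
    print(f"{name}: train acc = {model.score(X_train, labels):.2f}, "
          f"test acc = {model.score(X_test, y_test):.2f}")
```

Under these (illustrative) settings, both models reach near-perfect training accuracy, while test accuracy collapses to roughly chance level for the random-label run, which is the contrast the abstract describes.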


Timezone: America/Los_Angeles

Schedule