

Poster in Workshop: Over-parameterization: Pitfalls and Opportunities

Implicit Acceleration and Feature Learning in Infinitely Wide Neural Networks with Bottlenecks

Etai Littwin · Omid Saremi · Shuangfei Zhai · Vimal Thilak · Hanlin Goh · Joshua M Susskind · Greg Yang


Abstract:

We analyze the learning dynamics of infinitely wide neural networks with a finite-sized bottleneck. Unlike the neural tangent kernel limit, a bottleneck in an otherwise infinite-width network allows data-dependent feature learning in its bottleneck representation. We empirically show that a single bottleneck in infinite networks dramatically accelerates training compared to purely infinite networks, with improved overall performance. We discuss the acceleration phenomenon by drawing an analogy to infinitely wide deep linear models, where the acceleration effect of a bottleneck can be understood theoretically.
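
The sketch below is a minimal, hedged illustration of the deep-linear intuition mentioned in the abstract; it is not the paper's code or experimental setup. It trains the same least-squares problem in two ways: a direct linear map W, and the same map factored through a finite bottleneck as W1 @ W2, both optimized with plain gradient descent. The widths, learning rate, initialization scale, and the toy data itself are assumptions chosen only for illustration.

```python
# Toy comparison (assumed setup, not the paper's): direct linear regression
# vs. a deep linear model with a finite bottleneck, both trained by
# gradient descent on the same least-squares objective.
import numpy as np

rng = np.random.default_rng(0)
d_in, d_out, k, n = 20, 10, 5, 200        # k: assumed bottleneck width
X = rng.standard_normal((n, d_in))
# Low-rank target so the bottlenecked model has enough capacity to fit it.
W_true = rng.standard_normal((d_in, k)) @ rng.standard_normal((k, d_out))
Y = X @ W_true

def loss(pred):
    # (1 / 2n) * squared Frobenius error
    return 0.5 * np.mean(np.sum((pred - Y) ** 2, axis=1))

lr, steps = 1e-3, 2000

# (a) direct linear model: Y_hat = X @ W
W = np.zeros((d_in, d_out))
# (b) bottlenecked deep linear model: Y_hat = X @ W1 @ W2
W1 = 0.1 * rng.standard_normal((d_in, k))
W2 = 0.1 * rng.standard_normal((k, d_out))

for t in range(steps):
    # Gradient step for the direct model.
    E = X @ W - Y                          # residuals, shape (n, d_out)
    W -= lr * (X.T @ E) / n

    # Simultaneous gradient step for the factored (bottlenecked) model.
    E_f = X @ W1 @ W2 - Y
    g1 = (X.T @ E_f @ W2.T) / n            # dL/dW1
    g2 = (W1.T @ X.T @ E_f) / n            # dL/dW2
    W1 -= lr * g1
    W2 -= lr * g2

    if t % 500 == 0:
        print(f"step {t:4d}  direct loss {loss(X @ W):.4f}  "
              f"bottleneck loss {loss(X @ W1 @ W2):.4f}")
```

Printing (or plotting) the two losses lets one compare how the training dynamics of the factored, bottlenecked parameterization differ from those of the direct linear map; the deep-linear theory the abstract points to concerns exactly this kind of factorization, analyzed in the infinite-width limit.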