Implicit Acceleration and Feature Learning in Infinitely Wide Neural Networks with Bottlenecks
Etai Littwin · Omid Saremi · Shuangfei Zhai · Vimal Thilak · Hanlin Goh · Joshua M Susskind · Greg Yang

We analyze the learning dynamics of infinitely wide neural networks with a finite-sized bottleneck. Unlike the neural tangent kernel limit, a bottleneck in an otherwise infinite-width network allows data-dependent feature learning in its bottleneck representation. We empirically show that a single bottleneck in infinite networks dramatically accelerates training compared to purely infinite networks, while also improving overall performance. We discuss the acceleration phenomenon by drawing similarities to infinitely wide deep linear models, where the acceleration effect of a bottleneck can be understood theoretically.
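The architecture described above can be sketched in a few lines: very wide linear layers stand in for the "infinite" parts, with a single narrow bottleneck layer in between. This is a minimal numpy illustration, not code from the paper; the specific widths, the 1/sqrt(fan-in) initialization scaling, and the purely linear layers are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
# `width` stands in for the "infinite" layers; `bottleneck` is the one
# finite-sized layer whose representation can learn data-dependent features.
width, bottleneck, d_in, d_out = 4096, 8, 16, 1

# Standard 1/sqrt(fan-in) initialization scaling (an assumption here).
W1 = rng.normal(size=(width, d_in)) / np.sqrt(d_in)        # wide layer
W2 = rng.normal(size=(bottleneck, width)) / np.sqrt(width) # finite bottleneck
W3 = rng.normal(size=(d_out, bottleneck)) / np.sqrt(bottleneck)

def forward(x):
    # wide -> narrow bottleneck -> output; in the paper's setting the wide
    # parts behave like a kernel while the bottleneck representation adapts.
    h_wide = W1 @ x
    h_bottleneck = W2 @ h_wide  # the only finite-dimensional representation
    return W3 @ h_bottleneck

x = rng.normal(size=(d_in,))
y = forward(x)
```

Because every layer here is linear, the end-to-end map is itself linear, which is what makes the deep linear case analytically tractable when studying the bottleneck's acceleration effect.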

Author Information

Etai Littwin (Apple)
Omid Saremi (Apple Inc.)
Shuangfei Zhai (Apple)
Vimal Thilak (Apple)
Hanlin Goh (Apple)
Joshua M Susskind (Apple, Inc.)
Greg Yang (Microsoft Research)
