

Poster
in
Workshop: High-dimensional Learning Dynamics Workshop: The Emergence of Structure and Reasoning

Understanding Nonlinear Implicit Bias via Region Counts in Input Space

Jingwei Li · Jing Xu · Zifan Wang · Huishuai Zhang · Jingzhao Zhang


Abstract:

One explanation for the strong generalization ability of neural networks is implicit bias. Yet, the definition and understanding of implicit bias in non-linear contexts remain elusive. In this work, we propose to characterize implicit bias by the count of connected regions in the input space that share the same predicted label. Compared with parameter-dependent metrics (e.g., norm or normalized margin), region count is better suited to nonlinear, overparameterized models, because it is determined by the function mapping and is invariant to reparametrization. Empirically, we find that small region counts align with geometrically simple decision boundaries and correlate well with good generalization performance. We also observe that good hyper-parameter choices, such as larger learning rates and smaller batch sizes, induce small region counts. We further establish theoretical connections between region count and generalization bounds, and explain how a larger learning rate can induce small region counts in neural networks.
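The abstract does not specify how region counts are computed; as a rough illustration of the idea, the sketch below estimates the number of connected same-label regions of a 2-D classifier by discretizing a square of input space into a grid and counting 4-connected components of equal predicted label. The function names, grid resolution, and toy classifiers are illustrative assumptions, not the authors' method.

```python
import numpy as np
from collections import deque

def region_count(predict, xlim=(-1, 1), ylim=(-1, 1), res=100):
    """Estimate the number of connected regions in the input square
    on which `predict(x, y)` returns the same label, by labeling a
    res-by-res grid and counting 4-connected components.

    Note: this is an illustrative discretization, not the paper's
    exact procedure; regions thinner than the grid spacing are missed.
    """
    xs = np.linspace(*xlim, res)
    ys = np.linspace(*ylim, res)
    labels = np.array([[predict(x, y) for x in xs] for y in ys])
    seen = np.zeros((res, res), dtype=bool)
    count = 0
    for i in range(res):
        for j in range(res):
            if seen[i, j]:
                continue
            # New component found: flood-fill all grid cells that are
            # 4-connected to (i, j) and carry the same predicted label.
            count += 1
            seen[i, j] = True
            queue = deque([(i, j)])
            while queue:
                a, b = queue.popleft()
                for da, db in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                    na, nb = a + da, b + db
                    if (0 <= na < res and 0 <= nb < res
                            and not seen[na, nb]
                            and labels[na, nb] == labels[a, b]):
                        seen[na, nb] = True
                        queue.append((na, nb))
    return count

# A linear decision boundary splits the square into 2 regions...
print(region_count(lambda x, y: int(x + y > 0)))  # → 2
# ...while an oscillatory sign pattern produces many more regions,
# corresponding to a geometrically complex decision boundary.
print(region_count(lambda x, y: int(np.sin(6 * x) * np.sin(6 * y) > 0)))
```

Under this discretization, a simpler decision boundary directly yields a smaller region count, matching the abstract's claim that small region counts align with geometrically simple boundaries.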
