

Poster in Workshop: Topology, Algebra, and Geometry in Machine Learning

Nearest Class-Center Simplification through Intermediate Layers

Ido Ben Shaul · Shai Dekel


Abstract:

Recent advances in theoretical deep learning have identified geometric properties that emerge during training past the interpolation threshold, the point at which the training error reaches zero. We inquire into the phenomenon known as Neural Collapse in the intermediate layers of networks, and examine the inner workings of Nearest Class-Center mismatch inside the deep network. We further show that these processes occur in both vision and language model architectures. Lastly, we propose a Stochastic Variability-Simplification Loss (SVSL) that encourages better geometric features in intermediate layers and improves both training metrics and generalization.
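The abstract does not spell out how the nearest class-center classifier is evaluated at each layer; the standard construction classifies a sample by the closest class mean in that layer's feature space. Below is a minimal NumPy sketch of NCC accuracy under that reading; the function name, data, and layer dimension are illustrative, not the authors' code.

```python
import numpy as np

def ncc_accuracy(features: np.ndarray, labels: np.ndarray) -> float:
    """Nearest class-center accuracy: assign each sample to the
    class whose mean feature vector (its "center") is closest."""
    classes = np.unique(labels)
    # Per-class means of the feature vectors at this layer.
    centers = np.stack([features[labels == c].mean(axis=0) for c in classes])
    # Euclidean distance of every sample to every class center.
    dists = np.linalg.norm(features[:, None, :] - centers[None, :, :], axis=-1)
    preds = classes[dists.argmin(axis=1)]
    return float((preds == labels).mean())

# Toy usage: two well-separated Gaussian classes in a 16-d "layer" space.
rng = np.random.default_rng(0)
feats = np.concatenate([rng.normal(0.0, 1.0, (100, 16)),
                        rng.normal(4.0, 1.0, (100, 16))])
labs = np.array([0] * 100 + [1] * 100)
print(ncc_accuracy(feats, labs))  # close to 1.0 for separated classes
```

Tracking this quantity layer by layer is one way to observe the simplification the title refers to: NCC accuracy typically rises as features pass through deeper layers.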

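The exact form of SVSL is not given in the abstract. Purely as an assumption-laden sketch, the PyTorch snippet below adds a within-class variability penalty on one randomly sampled intermediate layer per training step, which is one plausible reading of "stochastic variability-simplification". The ToyNet module, the sampled-layer scheme, and the 0.1 weight are all hypothetical, not the authors' method.

```python
import torch
import torch.nn as nn

def within_class_variability(feats: torch.Tensor, labels: torch.Tensor) -> torch.Tensor:
    """Mean squared distance of features to their class centers:
    a simple measure of within-class variability."""
    loss = feats.new_zeros(())
    for c in labels.unique():
        cls = feats[labels == c]
        loss = loss + (cls - cls.mean(dim=0)).pow(2).sum(dim=1).mean()
    return loss / len(labels.unique())

class ToyNet(nn.Module):
    """Small MLP that exposes its intermediate-layer activations."""
    def __init__(self, dim: int = 32, classes: int = 10):
        super().__init__()
        self.blocks = nn.ModuleList(
            [nn.Sequential(nn.Linear(dim, dim), nn.ReLU()) for _ in range(4)])
        self.head = nn.Linear(dim, classes)

    def forward(self, x):
        acts = []
        for block in self.blocks:
            x = block(x)
            acts.append(x)
        return self.head(x), acts

net = ToyNet()
x, y = torch.randn(64, 32), torch.randint(0, 10, (64,))
logits, acts = net(x)
# Sample one intermediate layer per step (the "stochastic" part) and
# penalize its within-class variability alongside the usual cross-entropy.
layer = torch.randint(len(acts), (1,)).item()
aux = within_class_variability(acts[layer], y)
loss = nn.functional.cross_entropy(logits, y) + 0.1 * aux
loss.backward()
```

Penalizing within-class spread at intermediate layers is consistent with encouraging Neural Collapse-like geometry earlier in the network, though the weighting and which layers are sampled would be design choices of the actual method.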