

Poster in Workshop: Over-parameterization: Pitfalls and Opportunities

Rethinking compactness in deep neural networks

Kateryna Chumachenko · Firas Laakom · Jenni Raitoharju · Alexandros Iosifidis · Moncef Gabbouj

Keywords: [ Computer Vision ]


Abstract:

Deep neural networks are over-parameterized models that achieve high performance despite typically having more parameters than training samples. Recently, there has been increasing interest in uncovering and understanding the different phenomena that occur in the over-parameterized regime induced by such networks. In this paper, we aim to shed light on the relationship between the class compactness of the learned feature representations and model performance. Surprisingly, we find that models that learn more class-invariant features do not necessarily perform better. Moreover, we show that during training, class-wise variance increases and the models learn a less compact, more spread-out representation of the classes.
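
The abstract does not specify how class compactness is measured. As an illustration only, here is a minimal sketch of one common way to quantify it, the average within-class variance of feature vectors around their class centroid; the function name and normalization choices are assumptions, not necessarily the authors' metric.

```python
import numpy as np

def class_wise_variance(features: np.ndarray, labels: np.ndarray) -> dict:
    """Compute a simple class-compactness measure.

    features: (N, D) array of feature vectors extracted from a network layer.
    labels:   (N,)   array of integer class labels.
    Returns a dict mapping each class to the mean squared distance of its
    features from the class centroid (lower value = more compact class).
    """
    variances = {}
    for c in np.unique(labels):
        class_feats = features[labels == c]                     # features of class c
        centroid = class_feats.mean(axis=0)                     # class centroid
        sq_dist = ((class_feats - centroid) ** 2).sum(axis=1)   # squared distances to centroid
        variances[int(c)] = float(sq_dist.mean())               # within-class variance
    return variances
```

Tracking such a per-class variance across training epochs is one way to observe the increase in class-wise spread that the paper reports.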