

Poster in Workshop: Over-parameterization: Pitfalls and Opportunities

Some samples are more similar than others! A different look at memorization and generalization in neural networks.

Sudhanshu Ranjan


Abstract:

Neural networks can overfit noisy training data and still generalize well on unseen data. Existing approaches to understanding this phenomenon focus on training dynamics or on varying the amount of noise in the dataset. We propose a method that compares the similarity between two sets of samples for a given network. Our approach relies only on the weights learned by the network and is independent of the training dynamics and the properties of the training dataset. Using the proposed method, we investigate three hypotheses empirically: Are real and noisy samples learned in different parts of the network? Do real and noisy samples contribute equally to generalization? Are real samples more similar to each other than noisy samples are?
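The abstract does not spell out the similarity measure, so the sketch below shows only one plausible way a weight-based comparison between two sample sets could look: cosine similarity between the average loss gradients that each set induces on the trained parameters, which depends on the learned weights but not on how training unfolded. The function names `weight_gradient_signature` and `set_similarity` are hypothetical illustrations, not the method proposed in the paper.

```python
import torch
import torch.nn.functional as F


def weight_gradient_signature(model, inputs, targets):
    """Flattened average loss gradient of a sample set w.r.t. the trained weights.

    Serves as a weight-based 'signature' of the set; it uses only the final
    learned parameters, not the training trajectory.
    """
    loss = F.cross_entropy(model(inputs), targets)
    params = [p for p in model.parameters() if p.requires_grad]
    grads = torch.autograd.grad(loss, params)
    return torch.cat([g.flatten() for g in grads])


def set_similarity(model, set_a, set_b):
    """Cosine similarity between the weight-gradient signatures of two sample sets.

    Each set is a (inputs, targets) pair, e.g. real samples vs. noisy samples.
    """
    sig_a = weight_gradient_signature(model, *set_a)
    sig_b = weight_gradient_signature(model, *set_b)
    return F.cosine_similarity(sig_a, sig_b, dim=0).item()
```

Under this illustrative measure, one would compare, for instance, two disjoint batches of clean samples against a clean batch paired with a label-noised batch, and ask which pair yields the higher similarity score.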