

Poster
in
Workshop: The First Workshop on Pre-training: Perspectives, Pitfalls, and Paths Forward

Similarity of Pre-trained and Fine-tuned Representations

Thomas Goerttler · Klaus Obermayer


Abstract:

Most often in transfer learning, only the last part of the network, the so-called head, is fine-tuned. Representation similarity analysis shows that the most significant change still occurs in the head even when all weights are updatable. However, recent results from few-shot learning have shown that representation change in the early layers, which are mostly convolutional, is beneficial, especially in the case of cross-domain adaptation. In our paper, we investigate whether this also holds true for transfer learning. In addition, we analyze the change of representation in transfer learning, both during pre-training and during fine-tuning, and find that pre-trained structure is unlearned if it is not useful.
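The abstract does not specify which representation similarity measure is used; a common choice for comparing layer activations before and after fine-tuning is linear centered kernel alignment (CKA). The sketch below is a minimal, illustrative implementation under that assumption, with hypothetical activation matrices standing in for a layer's pre-trained and fine-tuned representations.

    import numpy as np

    def linear_cka(x, y):
        """Linear CKA between two representation matrices of shape
        (n_examples, n_features); returns a similarity in [0, 1]."""
        # Center each feature dimension.
        x = x - x.mean(axis=0, keepdims=True)
        y = y - y.mean(axis=0, keepdims=True)
        # Frobenius norms of the cross- and self-similarity terms.
        cross = np.linalg.norm(y.T @ x, ord="fro") ** 2
        norm_x = np.linalg.norm(x.T @ x, ord="fro")
        norm_y = np.linalg.norm(y.T @ y, ord="fro")
        return cross / (norm_x * norm_y)

    # Hypothetical example: activations of one layer on the same inputs,
    # before and after fine-tuning.
    rng = np.random.default_rng(0)
    pretrained_acts = rng.standard_normal((128, 64))
    finetuned_acts = pretrained_acts + 0.1 * rng.standard_normal((128, 64))
    print(linear_cka(pretrained_acts, finetuned_acts))

Computing this score per layer gives a profile of where representations change most between pre-training and fine-tuning, e.g. whether the change is concentrated in the head or reaches the early convolutional layers.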
