In transfer learning, most often only the last part of the network, the so-called head, is fine-tuned. Representation similarity analysis shows that the most significant change still occurs in the head even when all weights are updatable. However, recent results from few-shot learning have shown that representation change in the early, mostly convolutional, layers is beneficial, especially for cross-domain adaptation. In this paper, we investigate whether this also holds for transfer learning. In addition, we analyze the change of representation in transfer learning, both during pre-training and during fine-tuning, and find that pre-trained structure is unlearned if it is not useful for the target task.
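The abstract does not state which similarity measure is used; as one common instance of representation similarity analysis, the sketch below computes linear centered kernel alignment (CKA) between two layers' activations. The function name and array shapes are illustrative assumptions, not the paper's actual code.

```python
import numpy as np

def linear_cka(X, Y):
    """Linear CKA between activation matrices X (n, d1) and Y (n, d2).

    Illustrative sketch of a standard representation-similarity measure;
    not necessarily the measure used in the paper.
    """
    # Center each feature dimension.
    X = X - X.mean(axis=0)
    Y = Y - Y.mean(axis=0)
    # Linear CKA: ||X^T Y||_F^2 / (||X^T X||_F * ||Y^T Y||_F)
    num = np.linalg.norm(X.T @ Y, "fro") ** 2
    den = np.linalg.norm(X.T @ X, "fro") * np.linalg.norm(Y.T @ Y, "fro")
    return num / den
```

Comparing a layer's activations before and after fine-tuning with such a measure yields a per-layer similarity score; low similarity in early layers would indicate the kind of early-layer representation change discussed above.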