

Poster in Workshop: HiLD: High-dimensional Learning Dynamics Workshop

Latent State Transitions in Training Dynamics

Michael Hu · Angelica Chen · Naomi Saphra · Kyunghyun Cho


Abstract: Neural networks have been shown to go through a number of distinct phases during training. Notable examples include grokking, where the model learns to generalize only after it memorizes the training data, and the induction head phase, where a language model's ability to copy from context dramatically improves. We seek an automated and principled framework for relating phase transitions, or sharp changes in a performance metric, to changes in the neural network itself. To this end, we propose to model changes in the neural network as latent state transitions using a hidden Markov model (HMM). Under this framework, we compute a variety of summary statistics for each training checkpoint and allow the HMM to infer state transitions in an unsupervised manner. We then inspect these inferred state transitions to better understand the phase transitions that neural networks go through during training. By applying the HMM to two grokking tasks and to language modeling, we show that the HMM recovers the $L_2$ norm as an important metric for generalization in both grokking tasks, and we relate variations in language modeling and grokking loss across training runs to specific changes in metrics computed over the neural network's weights.
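The abstract's pipeline can be sketched in a few lines: compute summary statistics over the weights at each training checkpoint, then fit an HMM over that sequence so latent state changes serve as candidate phase transitions. The sketch below is not the authors' code; the choice of metrics (beyond the $L_2$ norm mentioned above), the number of hidden states, and the use of hmmlearn's GaussianHMM are all illustrative assumptions.

```python
import numpy as np
from hmmlearn.hmm import GaussianHMM  # assumed HMM implementation, not specified by the paper


def checkpoint_metrics(weights: list[np.ndarray]) -> np.ndarray:
    """Summary statistics for one checkpoint's weight tensors."""
    flat = np.concatenate([w.ravel() for w in weights])
    return np.array([
        np.linalg.norm(flat),   # L2 norm, the metric highlighted in the abstract
        np.abs(flat).mean(),    # mean absolute weight (assumed extra metric)
        flat.std(),             # weight standard deviation (assumed extra metric)
    ])


def infer_training_phases(checkpoints: list[list[np.ndarray]], n_states: int = 3) -> np.ndarray:
    """Fit a Gaussian HMM over the checkpoint statistics and return the
    inferred latent state index for every checkpoint."""
    X = np.stack([checkpoint_metrics(ckpt) for ckpt in checkpoints])
    # Standardize each metric so no single statistic dominates the likelihood.
    X = (X - X.mean(axis=0)) / (X.std(axis=0) + 1e-8)
    hmm = GaussianHMM(n_components=n_states, covariance_type="diag", n_iter=200)
    hmm.fit(X)             # unsupervised: no phase labels are provided
    return hmm.predict(X)  # changes in the state sequence mark candidate phase transitions
```

Inspecting where the predicted state sequence changes, and which metrics differ most between the emission distributions of adjacent states, is one way to relate a phase transition back to concrete changes in the weights, in the spirit of the analysis the abstract describes.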
