From Parameter Dynamics to Risk Scoring: Quantifying Sample-Level Safety Degradation in LLM Fine-tuning
Abstract
Safety alignment of Large Language Models (LLMs) is extremely fragile: fine-tuning on a small number of benign samples can erase safety behaviors learned from millions of preference examples. Existing studies attempt to explain this phenomenon by comparing parameters and hidden states before and after fine-tuning, but overlook their dynamic evolution during fine-tuning. In this work, we analyze parameter dynamics and uncover a critical mechanism underlying safety degradation: benign fine-tuning causes parameters to drift cumulatively toward danger-aligned directions, progressively undermining the model's safety. Inspired by these findings, we propose Sample-Level Quantification of Safety Degradation (SQSD), a method that quantifies each training sample's influence on safety degradation. Specifically, SQSD assigns continuous risk scores to individual samples by measuring their induced parameter updates along safety and danger directions. Extensive experiments across three models and two datasets show that SQSD outperforms baselines in separating high-risk from low-risk samples, with risk scores that consistently predict the severity of safety degradation. Notably, SQSD exhibits strong transferability across architectures, parameter scales, and parameter-efficient methods.
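To make the scoring idea concrete, a minimal sketch of how a per-sample risk score could be computed is given below. It projects the parameter update a sample induces (approximated by the negative gradient of its loss) onto precomputed safety- and danger-aligned directions. The names `danger_dir`, `safety_dir`, and `loss_fn`, and the way these directions are obtained, are assumptions for illustration, not the paper's exact formulation.

```python
import torch

def sqsd_risk_score(model, loss_fn, sample, danger_dir, safety_dir):
    """Score one training sample by its induced parameter update.

    danger_dir / safety_dir: unit vectors in flattened parameter
    space (hypothetically estimated, e.g., from harmful vs. safe
    fine-tuning runs); their construction is assumed here.
    """
    model.zero_grad()
    loss = loss_fn(model, sample)
    loss.backward()

    # Flatten the per-sample gradient; one SGD step moves the
    # parameters along -grad, so project -grad onto each direction.
    grad = torch.cat([p.grad.reshape(-1) for p in model.parameters()
                      if p.grad is not None])
    update = -grad
    drift_toward_danger = torch.dot(update, danger_dir)
    drift_along_safety = torch.dot(update, safety_dir)

    # Higher score: the update pushes parameters toward the danger
    # direction and/or away from the safety direction.
    return (drift_toward_danger - drift_along_safety).item()
```

Scoring each sample this way yields a continuous ranking, so a practitioner could filter or down-weight the highest-risk samples before fine-tuning.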