When Shared Knowledge Hurts: Spectral Over-Accumulation in Model Merging
Yayuan Li ⋅ Ze Peng ⋅ Jian Zhang ⋅ Jintao Guo ⋅ Yue Duan ⋅ Yinghuan Shi
Abstract
Model merging combines multiple fine-tuned models into a single model by \emph{adding} their weight updates, providing a lightweight alternative to retraining. Existing methods primarily focus on resolving conflicts among task updates, leaving a complementary failure mode unaddressed: the over-counting of shared knowledge. We show that when tasks share aligned spectral directions (\ie, overlapping singular vectors), a simple linear combination accumulates these directions repeatedly, inflating their singular values and biasing the merged model toward the shared subspaces. To mitigate this issue, we propose Singular Value Calibration (SVC), a training-free and data-free post-processing method that quantifies subspace overlap and rescales the inflated singular values to restore a balanced spectrum. Across vision and language benchmarks, SVC consistently improves strong merging baselines and achieves state-of-the-art performance. Notably, by modifying only the singular values, SVC improves the performance of Task Arithmetic by 13.0\%.
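To make the spectral-calibration idea concrete, here is a minimal sketch, not the authors' released implementation: merge updates by addition, estimate how many tasks contribute energy along each singular direction of the merged update, and divide the corresponding singular value by that count. The function name `svc_merge`, the overlap heuristic, and the threshold `tau` are illustrative assumptions.

```python
import numpy as np

def svc_merge(task_updates, tau=0.5):
    """Hedged sketch of singular value calibration for model merging.

    task_updates: list of 2-D weight-update matrices of the same shape.
    tau: assumed overlap threshold; a task "contributes" to a direction
         if its energy exceeds tau times the average per-task share.
    """
    # Naive linear merge, as in Task Arithmetic: sum the updates.
    merged = sum(task_updates)
    U, S, Vt = np.linalg.svd(merged, full_matrices=False)

    # For each singular direction of the merged update, count how many
    # tasks place substantial energy along it (the subspace overlap).
    counts = np.zeros_like(S)
    for delta in task_updates:
        # energy[i] = |u_i^T @ delta @ v_i|, this task's component
        # along the i-th merged singular direction.
        energy = np.abs(np.sum(U * (delta @ Vt.T), axis=0))
        counts += energy > tau * S / len(task_updates)
    counts = np.maximum(counts, 1.0)

    # Rescale: a direction accumulated by k overlapping tasks is
    # divided by k, restoring a balanced spectrum while leaving the
    # singular vectors U and V untouched.
    return U @ np.diag(S / counts) @ Vt
```

Only the singular values are modified, which matches the abstract's claim that calibrating the spectrum alone is enough to recover the over-counted directions.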