Poster in Workshop: ICML 2024 Workshop on Foundation Models in the Wild
Strong Copyright Protection for Language Models via Adaptive Model Fusion
Javier Abad · Konstantin Donhauser · Francesco Pinto · Fanny Yang
Keywords: [ Safety ] [ Reliability ] [ Language Models ] [ Copyright ] [ Model Fusion ]
The risk of language models unintentionally reproducing copyrighted material from their training data has led to the development of various protective measures. In this paper, we propose model fusion as an effective solution to safeguard against copyright infringement. In particular, we introduce Copyright-Protecting Fusion (CP-Fuse), an algorithm that adaptively combines language models to minimize the reproduction of protected material. CP-Fuse is inspired by the recently proposed Near-Access Free (NAF) framework and additionally incorporates a desirable balancing property that, as we demonstrate, prevents the reproduction of memorized training data. Our results show that CP-Fuse significantly reduces the memorization of copyrighted content while maintaining high-quality text and code generation. Furthermore, we demonstrate how CP-Fuse can be integrated with other techniques for enhanced protection.
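To make the idea of adaptive fusion concrete, the following is a minimal, hypothetical sketch rather than the authors' actual CP-Fuse algorithm. It assumes two models trained on disjoint splits of the protected data, fuses their next-token distributions as a weighted geometric mean, and picks the weight at each step so that neither model's log-probability of the generated prefix dominates, a crude stand-in for the balancing property described in the abstract. The toy models, vocabulary size, and weighting rule are all illustrative assumptions.

```python
# Illustrative sketch of adaptive model fusion for copyright protection.
# NOT the CP-Fuse algorithm: toy models and a heuristic balancing rule
# stand in for real language models and the paper's actual procedure.
import numpy as np

VOCAB = 50          # toy vocabulary size (assumption)
rng = np.random.default_rng(0)

def toy_model(seed):
    """Return a stand-in 'model': prefix -> next-token distribution."""
    def next_token_probs(prefix):
        g = np.random.default_rng(seed + len(prefix))  # deterministic per prefix length
        logits = g.normal(size=VOCAB)
        p = np.exp(logits - logits.max())
        return p / p.sum()
    return next_token_probs

def fuse(p1, p2, alpha):
    """Weighted geometric mean of two distributions, renormalized."""
    q = (p1 ** alpha) * (p2 ** (1.0 - alpha))
    return q / q.sum()

def generate(model1, model2, steps=20):
    prefix, logp1, logp2 = [], 0.0, 0.0
    for _ in range(steps):
        p1, p2 = model1(prefix), model2(prefix)
        # Heuristic "balancing": lean toward whichever model assigns the
        # LOWER log-probability to the prefix so far, so that no single
        # model's memorized continuation can dominate generation.
        alpha = 1.0 / (1.0 + np.exp(logp1 - logp2))
        q = fuse(p1, p2, alpha)
        tok = int(rng.choice(VOCAB, p=q))
        prefix.append(tok)
        logp1 += np.log(p1[tok])
        logp2 += np.log(p2[tok])
    return prefix

print(generate(toy_model(1), toy_model(2)))
```

In this sketch the fusion weight is recomputed at every decoding step from the running log-probabilities, which is what "adaptive" refers to here; the paper's method should be consulted for the actual fusion rule and its guarantees under the NAF framework.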