

Poster

Copyright Traps for Large Language Models

Matthieu Meeus · Igor Shilov · Manuel Faysse · Yves-Alexandre de Montjoye


Abstract:

Questions of fair use of copyright-protected content to train Large Language Models (LLMs) are being actively debated. Document-level inference has been proposed as a new task: inferring from black-box access to the trained model whether a piece of content has been seen during training. State-of-the-art methods, however, rely on naturally occurring memorization of (part of) the content. While very effective against models that memorize extensively, we hypothesize, and later confirm, that they will not work against models that do not naturally memorize, e.g. medium-size 1B models. We here propose to use copyright traps, the inclusion of fictitious entries in original content, to detect the use of copyrighted materials in LLMs, with a focus on models where memorization does not naturally occur. We carefully design an experimental setup, randomly inserting traps into original content (books), and train a 1.3B LLM. We first validate that the use of content in our target model would be undetectable using existing methods. We then show, contrary to intuition, that even medium-length trap sentences repeated a significant number of times (100) are not detectable using existing methods. However, we show that longer sequences repeated a large number of times can be reliably detected (AUC=0.75) and used as copyright traps. We further improve these results by studying how the number of times a sequence is seen improves detectability, how sequences with higher perplexity tend to be memorized more, and how taking context into account further improves detectability.
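To make the detection idea concrete, below is a minimal sketch of a perplexity-based trap detector, assuming black-box loss access to the trained target model plus a reference model for calibration. The model names, the loss-ratio test, and the placeholder trap/control sentences are illustrative assumptions, not the authors' exact pipeline.

```python
# Sketch: score trap sequences by comparing their loss under the target model
# (suspected to have trained on the traps) against a reference model that
# never saw them. A lower relative loss suggests memorization.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from sklearn.metrics import roc_auc_score


def sequence_loss(model, tokenizer, text, device="cpu"):
    """Average per-token negative log-likelihood of `text` under `model`."""
    ids = tokenizer(text, return_tensors="pt").input_ids.to(device)
    with torch.no_grad():
        out = model(ids, labels=ids)
    return out.loss.item()


# Hypothetical model paths; substitute the actual target and reference models.
target = AutoModelForCausalLM.from_pretrained("path/to/target-1.3b")
reference = AutoModelForCausalLM.from_pretrained("EleutherAI/pythia-1.4b")
tok = AutoTokenizer.from_pretrained("EleutherAI/pythia-1.4b")

traps = ["<trap sentence injected into the training documents>"]    # members
controls = ["<held-out sentence never shown to the target model>"]  # non-members

scores, labels = [], []
for text, label in [(t, 1) for t in traps] + [(c, 0) for c in controls]:
    # Calibrate by the reference loss so that intrinsically easy or hard
    # sequences do not dominate the membership signal.
    ratio = sequence_loss(target, tok, text) / sequence_loss(reference, tok, text)
    scores.append(-ratio)  # higher score = more likely seen during training
    labels.append(label)

print("Detection AUC:", roc_auc_score(labels, scores))
```

In this framing, the AUC of the trap-vs-control classification is the detectability measure, and the abstract's findings (longer trap sequences, more repetitions, and higher-perplexity sequences) correspond to configurations that push this AUC higher.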
