

Poster

Adaptive Text Watermark for Large Language Models

Yepeng Liu · Yuheng Bu


Abstract:

The advancement of Large Language Models (LLMs) has led to increasing concern about the misuse of AI-generated text, and watermarking LLM-generated text has emerged as a potential solution. However, it is challenging to generate high-quality watermarked text while maintaining robustness, strong security, and the ability to detect the watermark without prior knowledge of the prompt or the model. This paper proposes an adaptive text watermarking strategy to address this challenge. To improve text quality and maintain robustness, we adaptively add the watermark to token distributions with high entropy, as measured by an auxiliary model, and leave low-entropy token distributions untouched. For the sake of security, and to further minimize the watermark's impact on text quality, instead of using a fixed green/red list generated from a random secret key, which can be vulnerable to decryption and forgery, we adaptively scale up the output logits based on the semantic embedding of previously generated text using a well-designed semantic mapping model. Our experiments on various LLMs demonstrate that our approach achieves robustness comparable to existing watermarking methods. Additionally, the text generated by our method has perplexity comparable to that of un-watermarked LLMs while maintaining sufficient security.
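As a rough illustration of the decoding rule described in the abstract, the sketch below gates the watermark on the entropy of the next-token distribution and perturbs the logits with a signal derived from a semantic embedding of the previously generated text. The semantic_mapper argument, the additive form of the perturbation, and the threshold and strength values are illustrative assumptions, not the paper's exact formulation.

import torch

def entropy(probs):
    # Shannon entropy (in nats) of a token distribution.
    return -(probs * torch.log(probs.clamp_min(1e-12))).sum()

def adaptive_watermark_logits(logits, prefix_embedding, semantic_mapper,
                              entropy_threshold=2.0, delta=1.5):
    """Sketch of one decoding step (illustrative, not the paper's exact rule).

    logits:           raw next-token logits from the LLM, shape (vocab,)
    prefix_embedding: semantic embedding of the previously generated text
                      (e.g. from a sentence encoder), assumed given here
    semantic_mapper:  hypothetical model mapping the embedding to a
                      per-token watermark signal, shape (vocab,)
    """
    probs = torch.softmax(logits, dim=-1)
    if entropy(probs) < entropy_threshold:
        # Low-entropy step: leave the distribution untouched to preserve quality.
        return logits
    # High-entropy step: scale up logits according to the semantic watermark signal.
    watermark_signal = semantic_mapper(prefix_embedding)
    return logits + delta * watermark_signal

# Toy usage with random stand-ins for the LLM output and the mapping model.
vocab = 8
logits = torch.randn(vocab)
prefix_embedding = torch.randn(16)
mapper = lambda e: torch.tanh(torch.randn(vocab))  # stand-in for the trained semantic mapping model
print(adaptive_watermark_logits(logits, prefix_embedding, mapper))

In this sketch, low-entropy steps (where the model is nearly deterministic) pass through unchanged, so factual or highly constrained continuations are not distorted, while high-entropy steps carry the watermark signal tied to the semantic content of the prefix rather than to a fixed secret-key partition of the vocabulary.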
