FLIPS: Instance-Fingerprinting for LLMs via Pseudo-random Sequences
Abstract
The literature shows that a Large Language Model's (LLM) behavior is conditioned not only by its weights but also by its instance-level parameters, such as the instruction prompt, sampling configuration, or quantization. A model that generates safe outputs under one configuration may produce toxic content under another. Current LLM identification techniques (such as fingerprinting), however, focus on intellectual property protection, and their designs favor robustness to changes in these instance-level parameters. This poses a critical challenge for AI regulation, where compliance assessments target actual deployed behavior rather than model provenance. In this paper, we introduce instance-level fingerprinting, a regulator-oriented paradigm that distinguishes configurations of the same LLM. Our method, FLIPS, achieves 90\% identification accuracy across 205 model instances by exploiting biases in the binary pseudo-random sequences a model generates, compared to 35\% for the adapted baseline LLMmap. Our results demonstrate that instance-level fingerprinting is not only necessary for regulation but also practically feasible.
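To give intuition for the core idea, the following is a minimal illustrative sketch (not the paper's actual FLIPS pipeline) of how biases in model-generated binary sequences could be summarized into a fingerprint and compared across instances. The feature choices (marginal bit frequency and repeated-pair rate) and the function names are assumptions made for illustration only.

```python
from collections import Counter

def bias_features(bits: str) -> tuple[float, float]:
    """Summarize one binary string by two simple bias statistics:
    the fraction of '1' bits, and the fraction of repeated adjacent pairs."""
    ones = bits.count("1") / len(bits)
    pairs = Counter(bits[i:i + 2] for i in range(len(bits) - 1))
    repeats = (pairs["00"] + pairs["11"]) / max(len(bits) - 1, 1)
    return ones, repeats

def fingerprint(samples: list[str]) -> tuple[float, float]:
    """Average the bias features over several sequences elicited
    from the same model instance (e.g., by prompting for random bits)."""
    feats = [bias_features(s) for s in samples]
    n = len(feats)
    return (sum(f[0] for f in feats) / n, sum(f[1] for f in feats) / n)

def distance(fp_a: tuple[float, float], fp_b: tuple[float, float]) -> float:
    """L1 distance between two fingerprints; a small distance suggests
    the sequences came from the same instance configuration."""
    return sum(abs(a - b) for a, b in zip(fp_a, fp_b))
```

In practice, one would elicit many such sequences from each deployed instance and attribute a new sequence to the instance whose stored fingerprint is nearest; the real method presumably uses richer statistics and a learned classifier rather than this two-feature heuristic.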