

Poster

Radioactive data: tracing through training

Alexandre Sablayrolles · Matthijs Douze · Cordelia Schmid · Hervé Jégou

Keywords: [ Accountability, Transparency and Interpretability ] [ Privacy-preserving Statistics and Machine Learning ]


Abstract:

Data tracing determines whether a particular image dataset has been used to train a model. We propose a new technique, radioactive data, that makes imperceptible changes to this dataset such that any model trained on it will bear an identifiable mark. Given a trained model, our technique detects the use of radioactive data and provides a level of confidence (p-value). Experiments on large-scale benchmarks (Imagenet), with standard architectures (Resnet-18, VGG-16, Densenet-121) and training procedures, show that we detect radioactive data with high confidence (p < 0.0001) when only 1% of the data used to train a model is radioactive. Our radioactive mark is resilient to strong data augmentations and variations of the model architecture. As a result, it offers a much higher signal-to-noise ratio than data poisoning and backdoor methods.
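
The abstract leaves the detection statistic implicit. The snippet below is a minimal sketch of how such a p-value test could be set up, assuming the mark is a fixed random "carrier" direction per class in feature space and that detection checks whether the trained classifier's weights align with that carrier more than chance would allow; the names cosine_pvalue and detect_radioactive are illustrative, not taken from the authors' code.

```python
import numpy as np
from scipy.special import betainc

def cosine_pvalue(c, d):
    """One-sided p-value for observing cosine similarity >= c between a fixed
    vector and a uniformly random unit direction in R^d.
    Under the null hypothesis (the model never saw the marked data), the
    classifier weights are independent of the carrier, so their cosine
    similarity behaves like that of a random direction."""
    if c < 0:
        return 1.0 - cosine_pvalue(-c, d)
    # For a uniformly random unit vector, the squared cosine with any fixed
    # direction follows a Beta(1/2, (d-1)/2) distribution.
    return 0.5 * (1.0 - betainc(0.5, (d - 1) / 2.0, c * c))

def detect_radioactive(classifier_weights, carriers):
    """Per-class cosine similarities and p-values.
    classifier_weights: (num_classes, d) linear-classifier weight matrix.
    carriers: (num_classes, d) random marking directions used at data-marking time."""
    w = classifier_weights / np.linalg.norm(classifier_weights, axis=1, keepdims=True)
    u = carriers / np.linalg.norm(carriers, axis=1, keepdims=True)
    cosines = np.sum(w * u, axis=1)
    d = classifier_weights.shape[1]
    pvalues = np.array([cosine_pvalue(c, d) for c in cosines])
    return cosines, pvalues

# Toy example in a 512-dimensional feature space (hypothetical numbers):
# an unmarked class yields roughly uniform p-values, while a class whose
# weights drifted toward its carrier yields a very small p-value.
rng = np.random.default_rng(0)
carriers = rng.standard_normal((10, 512))
weights = rng.standard_normal((10, 512))
weights[0] += 0.5 * carriers[0]  # simulate alignment for one marked class
cos, p = detect_radioactive(weights, carriers)
print(p[0], p[1:].min())
```

In this sketch, per-class p-values could then be combined across classes to obtain an overall confidence level such as the p < 0.0001 figure quoted above; how the combination is done is a design choice not specified here.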
