Inpainting is a learned interpolation technique that is based on generative modeling and used to populate masked or missing pieces in an image; it has wide applications in picture editing and retouching. Recently, inpainting started being used for watermark removal, raising concerns. In this paper we study how to manipulate it using our markpainting technique. First, we show how an image owner with access to an inpainting model can augment their image in such a way that any attempt to edit it using that model will add arbitrary visible information. We find that we can target multiple different models simultaneously with our technique. This can be designed to reconstitute a watermark if the editor had been trying to remove it. Second, we show that our markpainting technique is transferable to models that have different architectures or were trained on different datasets, so watermarks created using it are difficult for adversaries to remove. Markpainting is novel and can be used as a manipulation alarm that becomes visible in the event of inpainting. Source code is available at: https://github.com/iliaishacked/markpainting.
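The core idea can be sketched as a projected-gradient optimization against a differentiable inpainting model: perturb the image within a small budget so that the model's fill of the masked region converges towards a chosen mark. The sketch below is illustrative only; `inpaint_model` and its `(image, mask)` calling convention, the loss, and all hyperparameters are assumptions, not the API of the linked repository.

```python
# A minimal sketch of the markpainting idea as PGD-style optimization.
# Assumptions: a differentiable inpainting model taking (masked_image, mask),
# tensors in [0, 1], and mask == 1 inside the hole. Names are hypothetical.
import torch

def markpaint(inpaint_model, image, mask, target,
              epsilon=8 / 255, alpha=1 / 255, steps=300):
    """image, target: (1, 3, H, W) tensors; mask: (1, 1, H, W)."""
    delta = torch.zeros_like(image, requires_grad=True)
    for _ in range(steps):
        x = (image + delta).clamp(0, 1)
        # Feed the masked, perturbed image to the inpainter (interface assumed;
        # real models take image and mask in varying conventions).
        filled = inpaint_model(x * (1 - mask), mask)
        # Push the inpainted hole towards the target mark.
        loss = ((filled - target) * mask).pow(2).mean()
        loss.backward()
        with torch.no_grad():
            delta -= alpha * delta.grad.sign()  # descend on the loss
            delta.clamp_(-epsilon, epsilon)     # keep the change imperceptible
            delta.grad.zero_()
    return (image + delta).clamp(0, 1).detach()
```

To target multiple models simultaneously, as the abstract describes, the same loss would presumably be summed over an ensemble of inpainters; here a single model stands in for that ensemble.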
Author Information
David G Khachaturov (University of Cambridge)
Ilia Shumailov (University of Cambridge)
Yiren Zhao (University of Cambridge)
Nicolas Papernot (University of Toronto and Vector Institute)
Ross Anderson (University of Cambridge)
Related Events (a corresponding poster, oral, or spotlight)
- 2021 Poster: Markpainting: Adversarial Machine Learning meets Inpainting
  Thu. Jul 22nd, 04:00 -- 06:00 PM
More from the Same Authors
- 2022 Poster: DAdaQuant: Doubly-adaptive quantization for communication-efficient Federated Learning
  Robert Hönig · Yiren Zhao · Robert Mullins
- 2022 Poster: On the Difficulty of Defending Self-Supervised Learning against Model Extraction
  Adam Dziedzic · Nikita Dhawan · Muhammad Ahmad Kaleem · Jonas Guan · Nicolas Papernot
- 2022 Spotlight: On the Difficulty of Defending Self-Supervised Learning against Model Extraction
  Adam Dziedzic · Nikita Dhawan · Muhammad Ahmad Kaleem · Jonas Guan · Nicolas Papernot
- 2022 Spotlight: DAdaQuant: Doubly-adaptive quantization for communication-efficient Federated Learning
  Robert Hönig · Yiren Zhao · Robert Mullins
- 2022 Poster: Rethinking Image-Scaling Attacks: The Interplay Between Vulnerabilities in Machine Learning Systems
  Yue Gao · Ilia Shumailov · Kassem Fawaz
- 2022 Oral: Rethinking Image-Scaling Attacks: The Interplay Between Vulnerabilities in Machine Learning Systems
  Yue Gao · Ilia Shumailov · Kassem Fawaz
- 2021 Poster: Label-Only Membership Inference Attacks
  Christopher Choquette-Choo · Florian Tramer · Nicholas Carlini · Nicolas Papernot
- 2021 Spotlight: Label-Only Membership Inference Attacks
  Christopher Choquette-Choo · Florian Tramer · Nicholas Carlini · Nicolas Papernot
- 2020: Panel 1
  Deborah Raji · Tawana Petty · Nicolas Papernot · Piotr Sapiezynski · Aleksandra Korolova
- 2020: What does it mean for ML to be trustworthy?
  Nicolas Papernot
- 2020 Poster: Fundamental Tradeoffs between Invariance and Sensitivity to Adversarial Perturbations
  Florian Tramer · Jens Behrmann · Nicholas Carlini · Nicolas Papernot · Joern-Henrik Jacobsen