

Poster

Markpainting: Adversarial Machine Learning meets Inpainting

David G Khachaturov · Ilia Shumailov · Yiren Zhao · Nicolas Papernot · Ross Anderson

Keywords: [ Privacy, Anonymity, and Security ] [ Social Aspects of Machine Learning ]


Abstract:

Inpainting is a learned interpolation technique that is based on generative modeling and used to populate masked or missing pieces in an image; it has wide applications in picture editing and retouching. Recently, inpainting started being used for watermark removal, raising concerns. In this paper we study how to manipulate it using our markpainting technique. First, we show how an image owner with access to an inpainting model can augment their image in such a way that any attempt to edit it using that model will add arbitrary visible information. We find that we can target multiple different models simultaneously with our technique. This can be designed to reconstitute a watermark if the editor had been trying to remove it. Second, we show that our markpainting technique is transferable to models that have different architectures or were trained on different datasets, so watermarks created using it are difficult for adversaries to remove. Markpainting is novel and can be used as a manipulation alarm that becomes visible in the event of inpainting. Source code is available at: https://github.com/iliaishacked/markpainting.
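To illustrate the general idea described in the abstract, below is a minimal, hypothetical sketch of a PGD-style perturbation that nudges an inpainting model's fill toward a target mark. The function names (`markpaint`, `inpaint_model`), the interface `inpaint_model(image, mask)`, and the hyperparameters (`eps`, `alpha`, `steps`) are illustrative assumptions, not the authors' implementation; the actual method and code are in the linked repository.

```python
import torch
import torch.nn.functional as F

def markpaint(image, mask, target_mark, inpaint_model,
              eps=8 / 255, alpha=1 / 255, steps=200):
    """Sketch: find a small perturbation (L-inf radius `eps`) of `image` so that
    inpainting the masked region tends to reproduce `target_mark` there.
    Assumes `inpaint_model(image, mask)` returns the inpainted image in [0, 1]."""
    delta = torch.zeros_like(image, requires_grad=True)
    for _ in range(steps):
        out = inpaint_model(torch.clamp(image + delta, 0, 1), mask)
        # Push the model's fill toward the desired mark inside the masked area.
        loss = F.mse_loss(out * mask, target_mark * mask)
        loss.backward()
        with torch.no_grad():
            delta -= alpha * delta.grad.sign()  # descend on the markpainting loss
            delta.clamp_(-eps, eps)             # stay within the perturbation budget
            delta.grad.zero_()
    return torch.clamp(image + delta, 0, 1).detach()
```

Targeting several models at once, as the abstract mentions, could in principle be done by summing this loss over an ensemble of inpainting models in the same loop; again, this is a sketch of the general adversarial-perturbation recipe rather than the paper's exact procedure.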
