Self-Supervised Learning (SSL) is an increasingly popular ML paradigm that trains models to transform complex inputs into representations without relying on explicit labels. These representations encode similarity structures that enable efficient learning of multiple downstream tasks. Recently, ML-as-a-Service providers have begun offering trained SSL models over inference APIs, which transform user inputs into useful representations for a fee. However, the high cost of training these models and their exposure over APIs both make black-box extraction a realistic security threat. We thus explore model stealing attacks against SSL. Unlike traditional model extraction on classifiers that output labels, the victim models here output representations; these are of significantly higher dimensionality than the low-dimensional prediction scores output by classifiers. We construct several novel attacks and find that approaches that train directly on a victim's stolen representations are query efficient and enable high accuracy for downstream models. We then show that existing defenses against model extraction are inadequate and not easily retrofitted to the specificities of SSL.
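The idea of an attack that "trains directly on a victim's stolen representations" can be illustrated with a minimal PyTorch sketch. This is not the paper's exact method (the paper constructs several attack variants): the `query_victim` API stub, the 512-dimensional representation size, the ResNet-18 surrogate, and the plain MSE objective are all illustrative assumptions.

```python
# Minimal sketch of representation-matching extraction: query the victim SSL
# model's black-box API for representations, then train a local surrogate
# encoder to regress onto those stolen representations.
# `query_victim`, the 512-dim output size, and the ResNet-18 surrogate are
# assumptions for illustration, not the authors' exact configuration.
import torch
import torch.nn as nn
import torchvision


def query_victim(images: torch.Tensor) -> torch.Tensor:
    """Stand-in for the victim's paid inference API (black-box access only)."""
    raise NotImplementedError("replace with calls to the victim's API")


# Local surrogate encoder; its architecture need not match the victim's.
stolen_encoder = torchvision.models.resnet18(num_classes=512)
optimizer = torch.optim.Adam(stolen_encoder.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()


def extraction_step(images: torch.Tensor) -> float:
    """One extraction step: one API query per batch, one gradient update."""
    with torch.no_grad():
        victim_reps = query_victim(images)   # stolen representations
    stolen_reps = stolen_encoder(images)     # surrogate's representations
    loss = loss_fn(stolen_reps, victim_reps) # match the victim directly
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```

After enough such query batches, the attacker can evaluate the stolen encoder the same way the victim would be used, e.g., by training downstream models on its representations.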
Author Information
Adam Dziedzic (Vector Institute and University of Toronto)
Nikita Dhawan (University of Toronto and Vector Institute)
Muhammad Ahmad Kaleem (University of Toronto)
Jonas Guan (University of Toronto)
Nicolas Papernot (University of Toronto and Vector Institute)
Related Events (a corresponding poster, oral, or spotlight)
- 2022 Poster: On the Difficulty of Defending Self-Supervised Learning against Model Extraction
  Wed. Jul 20th through Thu. Jul 21st, Room Hall E #1008
More from the Same Authors
- 2023 Poster: Efficient Parametric Approximations of Neural Network Function Space Distance
  Nikita Dhawan · Sicong Huang · Juhan Bae · Roger Grosse
- 2021 Poster: Markpainting: Adversarial Machine Learning meets Inpainting
  David G Khachaturov · Ilia Shumailov · Yiren Zhao · Nicolas Papernot · Ross Anderson
- 2021 Poster: Label-Only Membership Inference Attacks
  Christopher Choquette-Choo · Florian Tramer · Nicholas Carlini · Nicolas Papernot
- 2021 Spotlight: Label-Only Membership Inference Attacks
  Christopher Choquette-Choo · Florian Tramer · Nicholas Carlini · Nicolas Papernot
- 2021 Spotlight: Markpainting: Adversarial Machine Learning meets Inpainting
  David G Khachaturov · Ilia Shumailov · Yiren Zhao · Nicolas Papernot · Ross Anderson
- 2020: Panel 1
  Deborah Raji · Tawana Petty · Nicolas Papernot · Piotr Sapiezynski · Aleksandra Korolova
- 2020: What does it mean for ML to be trustworthy?
  Nicolas Papernot
- 2020 Poster: Fundamental Tradeoffs between Invariance and Sensitivity to Adversarial Perturbations
  Florian Tramer · Jens Behrmann · Nicholas Carlini · Nicolas Papernot · Joern-Henrik Jacobsen