Spotlight
Plug-In Inversion: Model-Agnostic Inversion for Vision with Data Augmentations
Amin Ghiasi · Hamid Kazemi · Steven Reich · Chen Zhu · Micah Goldblum · Tom Goldstein

Thu Jul 21 10:40 AM -- 10:45 AM (PDT) @ Hall F

Existing techniques for model inversion typically rely on hard-to-tune regularizers, such as total variation or feature regularization, which must be individually calibrated for each network in order to produce adequate images. In this work, we introduce Plug-In Inversion, which relies on a simple set of augmentations and does not require excessive hyper-parameter tuning. Under our proposed augmentation-based scheme, the same set of augmentation hyper-parameters can be used for inverting a wide range of image classification models, regardless of input dimensions or architecture. We illustrate the practicality of our approach by inverting Vision Transformers (ViTs) and Multi-Layer Perceptrons (MLPs) trained on the ImageNet dataset, tasks which, to the best of our knowledge, have not been accomplished by any previous work.
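To make the idea concrete, below is a minimal, illustrative sketch of augmentation-based model inversion in PyTorch: a learnable input image is optimized to maximize the score of a target class, with random spatial jitter, flips, and a simple per-channel color perturbation applied before each forward pass in place of hand-tuned image priors. This is not the authors' exact Plug-In Inversion pipeline; the function name, augmentation choices, and hyper-parameters here are assumptions for illustration only.

```python
# Illustrative sketch of augmentation-based model inversion (not the authors'
# exact Plug-In Inversion implementation). The input image is the only
# parameter being optimized; random augmentations replace explicit regularizers.
import torch
import torch.nn.functional as F
import torchvision

def invert_class(model, target_class, steps=2000, lr=0.1,
                 image_size=224, jitter=30, device="cpu"):
    model = model.to(device).eval()
    # Start from low-magnitude noise; this tensor is the optimization variable.
    x = (0.1 * torch.randn(1, 3, image_size, image_size, device=device)).requires_grad_(True)
    opt = torch.optim.Adam([x], lr=lr)
    for _ in range(steps):
        # Random spatial jitter via wrap-around translation.
        dx, dy = torch.randint(-jitter, jitter + 1, (2,))
        x_aug = torch.roll(x, shifts=(int(dx), int(dy)), dims=(2, 3))
        # Random horizontal flip.
        if torch.rand(1).item() < 0.5:
            x_aug = torch.flip(x_aug, dims=(3,))
        # Simple per-channel scale and shift, a stand-in for a color augmentation.
        scale = 1.0 + 0.2 * torch.randn(1, 3, 1, 1, device=device)
        shift = 0.1 * torch.randn(1, 3, 1, 1, device=device)
        x_aug = scale * x_aug + shift
        # Push the augmented image toward the target class.
        logits = model(x_aug)
        loss = F.cross_entropy(logits, torch.tensor([target_class], device=device))
        opt.zero_grad()
        loss.backward()
        opt.step()
    return x.detach()

# Example usage (hypothetical): any ImageNet classifier, including a ViT or
# MLP-based model, could be passed in unchanged with the same hyper-parameters.
# model = torchvision.models.resnet18(weights="IMAGENET1K_V1")
# image = invert_class(model, target_class=207)  # 207 = "golden retriever"
```

The point of the sketch is that the augmentation settings, not per-network regularizer weights, carry the burden of producing coherent images, which is why the same configuration can be reused across architectures.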

Author Information

Amin Ghiasi (University of Maryland)
Hamid Kazemi (University of Maryland - College Park)
Steven Reich (University of Maryland)
Chen Zhu (Google)
Micah Goldblum (New York University)
Tom Goldstein (University of Maryland)
