

Poster in Workshop: Next Generation of AI Safety

Leveraging Multi-Color Spaces as a Defense Mechanism Against Model Inversion Attack

Sofiane Ouaari · Ali Burak Ünal · Mete Akgün · Nico Pfeifer

Keywords: [ Representation Learning ] [ Model Inversion Attack ] [ Multimodal Autoencoders ] [ Color Spaces ] [ Safe Machine Learning Systems ]


Abstract:

Privacy is of increasing importance in machine learning in general, and in healthcare specifically, due to the sensitive nature of patient data. Multiple types of security attacks already exist that allow adversaries to extract sensitive information from nothing more than high-level interaction with a trained machine learning model. This paper addresses the model inversion attack, which aims to reconstruct input data from a model's output. We describe a novel approach that uses multi-color spaces as a defense mechanism against this type of attack, strengthening the privacy of open-source models trained on image data. The main idea of our approach is to combine several color spaces to create a more generic representation, reducing the quality of reconstructions produced by a model inversion attack while maintaining good classification performance. We evaluate the privacy-utility trade-off of our proposed security method on retina images.
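To illustrate the core idea, the sketch below maps an RGB pixel into a concatenated multi-color-space representation. This is an assumption-laden toy example: the paper does not specify here which color spaces are combined or how they feed the multimodal autoencoder, so this sketch simply uses the standard-library `colorsys` conversions (RGB, HSV, YIQ) as stand-ins.

```python
import colorsys

def multi_color_representation(rgb):
    """Map an RGB pixel (channel values in [0, 1]) to a concatenated
    multi-color-space representation: RGB + HSV + YIQ.

    Hypothetical sketch only; the specific color spaces used in the
    paper's defense may differ.
    """
    r, g, b = rgb
    hsv = colorsys.rgb_to_hsv(r, g, b)   # hue, saturation, value
    yiq = colorsys.rgb_to_yiq(r, g, b)   # luma plus two chrominance axes
    return list(rgb) + list(hsv) + list(yiq)

# A single pixel yields a 9-dimensional representation (3 per space).
rep = multi_color_representation((0.5, 0.5, 0.5))
print(len(rep))
```

Applied image-wide (e.g. as extra input channels), such a redundant representation is what the abstract suggests makes a faithful pixel-level reconstruction harder for an inversion attack.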
