Poster in Workshop: Next Generation of AI Safety

Medical Unlearnable Examples: Securing Medical Data from Unauthorized Training via Sparsity-Aware Local Masking

Weixiang Sun · Yixin Liu · Zhiling Yan · Kaidi Xu · Lichao Sun

Keywords: [ Unlearnable Example; Unauthorized Training; Data Protection ]


Abstract:

The rapid expansion of AI in healthcare has led to a surge in medical data generation and storage, boosting medical AI development. However, fears of unauthorized use, such as training commercial AI models, hinder researchers from sharing their valuable datasets. To encourage data sharing, one promising solution is to introduce imperceptible noise into the data. This method aims to safeguard the data against unauthorized training by degrading the generalization ability of any model trained on it. However, existing methods of this kind are neither effective nor efficient when applied to medical data, mainly because they ignore the sparse nature of medical images. To address this problem, we propose the Sparsity-Aware Local Masking (SALM) method, a novel approach that selectively perturbs significant pixel regions rather than the entire image as in prior work. By focusing on local areas, this simple yet effective approach significantly narrows the search space for perturbations and fully exploits the sparsity of medical images. Our extensive experiments across various datasets and model architectures demonstrate that SALM effectively prevents unauthorized training of different models and outperforms previous SoTA data protection methods.
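
The abstract does not spell out how "significant pixel regions" are identified or how the protective noise is optimized. The sketch below illustrates the general idea under two assumptions that are not stated in the abstract: pixel significance is scored by input-gradient magnitude on a surrogate classifier, and the noise is a standard error-minimizing perturbation restricted to the top-scoring pixels. The actual SALM procedure may differ.

```python
# Minimal sketch of sparsity-aware local masking for unlearnable examples.
# ASSUMPTIONS (not specified in the abstract): pixel "significance" is scored
# by input-gradient magnitude on a surrogate model, and the protective noise
# is an error-minimizing perturbation optimized only on the top-k scored pixels.
import torch
import torch.nn.functional as F


def salm_style_perturbation(model, x, y, eps=8 / 255, mask_ratio=0.1,
                            steps=10, step_size=2 / 255):
    """Return x plus an error-minimizing perturbation restricted to a sparse mask.

    model: surrogate classifier under the data protector's control.
    x, y:  batch of images in [0, 1] and their labels.
    """
    model.eval()

    # 1) Score pixel significance with the input gradient of the loss.
    x_req = x.clone().requires_grad_(True)
    loss = F.cross_entropy(model(x_req), y)
    grad = torch.autograd.grad(loss, x_req)[0]
    score = grad.abs().sum(dim=1, keepdim=True)           # (B, 1, H, W)

    # 2) Keep only the top-k most significant pixels per image (sparse local mask).
    k = max(1, int(mask_ratio * score[0].numel()))
    flat = score.flatten(1)
    thresh = flat.topk(k, dim=1).values[:, -1:].view(-1, 1, 1, 1)
    mask = (score >= thresh).float()                       # broadcasts over channels

    # 3) Error-minimizing noise (gradient *descent* on the loss), applied only
    #    inside the mask so the perturbation search space stays small.
    delta = torch.zeros_like(x, requires_grad=True)
    for _ in range(steps):
        loss = F.cross_entropy(model(x + delta * mask), y)
        g = torch.autograd.grad(loss, delta)[0]
        with torch.no_grad():
            delta -= step_size * g.sign()                  # minimize, not maximize
            delta.clamp_(-eps, eps)
    return (x + delta.detach() * mask).clamp(0, 1)
```

Restricting updates to the sparse mask is what shrinks the search space: with a mask ratio of 10%, only a tenth of the pixels are ever perturbed, which matches the abstract's claim of exploiting the sparsity of medical images.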
