

Poster

Mitigating Privacy Risk in Membership Inference by Convex-Concave Loss

Zhenlong Liu · Lei Feng · Huiping Zhuang · Xiaofeng Cao · Hongxin Wei

Hall C 4-9 #2917
Tue 23 Jul 4:30 a.m. PDT — 6 a.m. PDT

Abstract:

Machine learning models are susceptible to membership inference attacks (MIAs), which aim to infer whether a sample is in the training set. Existing work utilizes gradient ascent to enlarge the loss variance of training data, alleviating the privacy risk. However, optimizing in the reverse direction may cause the model parameters to oscillate near local minima, leading to instability and suboptimal performance. In this work, we propose a novel method, Convex-Concave Loss (CCL), which enables a high variance of the training loss distribution through gradient descent. Our method is motivated by the theoretical analysis that convex losses tend to decrease the loss variance during training. Thus, the key idea behind CCL is to reduce the convexity of loss functions with a concave term. Trained with CCL, neural networks produce losses with high variance for training data, reinforcing the defense against MIAs. Extensive experiments demonstrate the superiority of CCL, achieving a state-of-the-art balance in the privacy-utility trade-off.
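As a rough illustration of the stated idea (reducing the convexity of the loss with a concave term), the sketch below combines per-sample cross-entropy with a negative quadratic penalty in the loss value. The function name convex_concave_loss, the coefficient alpha, and the quadratic concave term are illustrative assumptions for this sketch; the paper's actual CCL formulation may use a different concave term.

```python
import torch
import torch.nn.functional as F

def convex_concave_loss(logits, targets, alpha=0.1):
    """Illustrative convex-concave style loss (hypothetical instantiation).

    The convex per-sample cross-entropy is combined with a concave term
    (here, a negative quadratic in the per-sample loss value) to reduce
    the overall convexity of the training objective. The exact concave
    term used by CCL in the paper may differ.
    """
    ce = F.cross_entropy(logits, targets, reduction="none")  # convex base loss, shape (batch,)
    concave_term = -alpha * ce.pow(2)                        # concave (negative quadratic) penalty
    return (ce + concave_term).mean()

# Minimal usage sketch with random logits and labels.
logits = torch.randn(8, 10, requires_grad=True)
targets = torch.randint(0, 10, (8,))
loss = convex_concave_loss(logits, targets)
loss.backward()
```

Note that with this particular quadratic choice the gradient direction flips for samples whose per-sample loss exceeds 1/(2*alpha), so alpha would need to be kept small; this is a property of the illustrative term above, not a claim about the paper's formulation.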
