Machine learning models, especially deep neural networks, have been shown to be susceptible to privacy attacks such as membership inference, where an adversary can detect whether a data point was used to train a black-box model. Such privacy risks are exacerbated when a model's predictions are used on an unseen data distribution. To alleviate privacy attacks, we demonstrate the benefit of predictive models that are based on the causal relationship between input features and the outcome. We first show that models learnt using causal structure generalize better to unseen data, especially on data drawn from distributions different from the training distribution. Based on this generalization property, we establish a theoretical link between causality and privacy: compared to associational models, causal models provide stronger differential privacy guarantees and are more robust to membership inference attacks. Experiments on simulated Bayesian networks and the colored-MNIST dataset show that associational models exhibit up to 80% attack accuracy under different test distributions and sample sizes, whereas causal models exhibit attack accuracy close to a random guess.
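To make the threat model concrete, the following is a minimal, illustrative sketch of a loss-threshold membership inference attack against a black-box classifier. It is not the attack or the models used in the paper: the synthetic data, the logistic-regression target model, and the median-loss threshold are all assumptions chosen only to show how an adversary can compare training members against held-out non-members.

```python
# Illustrative sketch of a loss-threshold membership inference attack.
# NOT the paper's setup: data, model, and threshold are assumed for demonstration.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# Synthetic binary classification data; half trains the target model (members),
# half is held out (non-members).
X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_mem, X_non, y_mem, y_non = train_test_split(X, y, test_size=0.5, random_state=0)

# Target ("black-box") model the adversary queries for predicted probabilities.
model = LogisticRegression(max_iter=1000).fit(X_mem, y_mem)

def per_example_loss(clf, X, y):
    """Cross-entropy loss of each (x, y) pair under the target model."""
    probs = clf.predict_proba(X)[np.arange(len(y)), y]
    return -np.log(np.clip(probs, 1e-12, None))

loss_mem = per_example_loss(model, X_mem, y_mem)
loss_non = per_example_loss(model, X_non, y_non)

# Attack: guess "member" when the loss falls below a threshold
# (here, the median loss over all queried points -- an assumed, simplistic choice).
threshold = np.median(np.concatenate([loss_mem, loss_non]))
guesses = np.concatenate([loss_mem < threshold, loss_non < threshold])
truth = np.concatenate([np.ones(len(loss_mem), dtype=bool),
                        np.zeros(len(loss_non), dtype=bool)])

attack_accuracy = (guesses == truth).mean()
print(f"Membership inference attack accuracy: {attack_accuracy:.2f}")
# Accuracy near 0.5 means the attack does no better than a random guess;
# the abstract's claim is that causal models push attack accuracy toward this level,
# while associational models can reach up to 80% under distribution shift.
```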
Author Information
Shruti Tople (Microsoft Research)
Amit Sharma (Microsoft Research)
Aditya Nori (Microsoft Research Cambridge)
More from the Same Authors
- 2021: Hierarchical Analysis of Visual COVID-19 Features from Chest Radiographs
  Shruthi Bannur · Ozan Oktay · Melanie Bernhardt · Anton Schwaighofer · Besmira Nushi · Aditya Nori · Javier Alvarez-Valle · Daniel Coelho de Castro
- 2021: DoWhy: Addressing Challenges in Expressing and Validating Causal Assumptions
  Amit Sharma · Vasilis Syrgkanis · Cheng Zhang · Emre Kiciman
- 2022: Modeling the Data-Generating Process is Necessary for Out-of-Distribution Generalization
  Jivat Neet Kaur · Emre Kiciman · Amit Sharma
- 2022: Probing Classifiers are Unreliable for Concept Removal and Detection
  Abhinav Kumar · Chenhao Tan · Amit Sharma
- 2022 Poster: Matching Learned Causal Effects of Neural Networks with Domain Priors
  Sai Srinivas Kancheti · Gowtham Reddy Abbavaram · Vineeth N Balasubramanian · Amit Sharma
- 2022 Spotlight: Matching Learned Causal Effects of Neural Networks with Domain Priors
  Sai Srinivas Kancheti · Gowtham Reddy Abbavaram · Vineeth N Balasubramanian · Amit Sharma
- 2021 Poster: Domain Generalization using Causal Matching
  Divyat Mahajan · Shruti Tople · Amit Sharma
- 2021 Oral: Domain Generalization using Causal Matching
  Divyat Mahajan · Shruti Tople · Amit Sharma
- 2019 Poster: Adaptive Neural Trees
  Ryutaro Tanno · Kai Arulkumaran · Daniel Alexander · Antonio Criminisi · Aditya Nori
- 2019 Oral: Adaptive Neural Trees
  Ryutaro Tanno · Kai Arulkumaran · Daniel Alexander · Antonio Criminisi · Aditya Nori
- 2018 Poster: Semi-Supervised Learning via Compact Latent Space Clustering
  Konstantinos Kamnitsas · Daniel C. Castro · Loic Le Folgoc · Ian Walker · Ryutaro Tanno · Daniel Rueckert · Ben Glocker · Antonio Criminisi · Aditya Nori
- 2018 Oral: Semi-Supervised Learning via Compact Latent Space Clustering
  Konstantinos Kamnitsas · Daniel C. Castro · Loic Le Folgoc · Ian Walker · Ryutaro Tanno · Daniel Rueckert · Ben Glocker · Antonio Criminisi · Aditya Nori