Gaussian processes (GPs) are non-parametric Bayesian models that are widely used for diverse prediction tasks. Previous work on adding strong privacy protection to GPs via differential privacy (DP) has been limited to protecting only the prediction targets (model outputs), not the inputs. We break this limitation by introducing GPs with DP protection for both model inputs and outputs. We achieve this by using sparse GP methodology and publishing a private variational approximation on known inducing points. The approximation covariance is adjusted to approximately account for the added uncertainty from DP noise. The approximation can be used to compute arbitrary predictions using standard sparse GP techniques. We propose a method for hyperparameter learning using a private selection protocol applied to the validation set log-likelihood. Our experiments demonstrate that, given a sufficient amount of data, the method can produce accurate models under strong privacy protection.
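As a rough illustration of the kind of construction described in the abstract, the NumPy sketch below builds a sparse GP variational approximation at fixed, public inducing points, perturbs the data-dependent sufficient statistics with Gaussian noise, and heuristically widens the posterior covariance to reflect the added noise. The kernel, the clipping bound, the noise scale, and the function names (`dp_sparse_gp`, `rbf`, `predict`) are illustrative assumptions; this is not the paper's exact mechanism, and the noise calibration is not a substitute for a proper privacy accountant.

```python
# Minimal sketch (assumptions noted above): DP-perturbed sparse GP approximation
# at known inducing points, with a heuristic covariance adjustment for DP noise.
import numpy as np

def rbf(A, B, lengthscale=1.0, variance=1.0):
    """Squared-exponential kernel matrix between rows of A and B."""
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return variance * np.exp(-0.5 * d2 / lengthscale**2)

def dp_sparse_gp(X, y, Z, noise_var=0.1, clip=1.0, dp_sigma=1.0, seed=0):
    """Return a noise-perturbed variational approximation q(u) = N(m, S) at Z."""
    rng = np.random.default_rng(seed)
    M = Z.shape[0]
    Kzz = rbf(Z, Z) + 1e-6 * np.eye(M)
    Kzx = rbf(Z, X)

    # Data-dependent sufficient statistics. Targets are clipped; with a bounded
    # kernel this bounds each example's contribution (noise scale illustrative).
    stat1 = (Kzx * np.clip(y, -clip, clip)).sum(axis=1)   # ~ Kzx y
    stat2 = Kzx @ Kzx.T                                    # ~ Kzx Kxz

    # Gaussian mechanism on the statistics (sketch only, no formal accounting).
    stat1 = stat1 + dp_sigma * clip * rng.standard_normal(M)
    stat2 = stat2 + dp_sigma * clip * rng.standard_normal((M, M))
    stat2 = 0.5 * (stat2 + stat2.T)                        # keep it symmetric

    Sigma = Kzz + stat2 / noise_var
    Sigma_inv = np.linalg.inv(Sigma)
    m = Kzz @ Sigma_inv @ stat1 / noise_var
    S = Kzz @ Sigma_inv @ Kzz
    # Heuristic widening: extra posterior variance from the noise added to stat1.
    S = S + (dp_sigma * clip / noise_var) ** 2 * (Kzz @ Sigma_inv @ Sigma_inv @ Kzz)
    return m, S, Kzz

def predict(Xs, Z, m, S, Kzz):
    """Predictive mean and variance via standard sparse GP prediction formulas."""
    A = rbf(Xs, Z) @ np.linalg.inv(Kzz)
    mean = A @ m
    var = np.diag(rbf(Xs, Xs) - A @ (Kzz - S) @ A.T)
    return mean, var

# Toy usage: noisy sine data, 10 fixed inducing points.
X = np.random.default_rng(1).uniform(-3, 3, (500, 1))
y = np.sin(X[:, 0]) + 0.1 * np.random.default_rng(2).standard_normal(500)
Z = np.linspace(-3, 3, 10)[:, None]
m, S, Kzz = dp_sparse_gp(X, y, Z)
mu, var = predict(np.linspace(-3, 3, 50)[:, None], Z, m, S, Kzz)
```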
Author Information
Antti Honkela (University of Helsinki)
More from the Same Authors
- 2021: Tight Accounting in the Shuffle Model of Differential Privacy
  Antti Koskela · Mikko A Heikkilä · Antti Honkela
- 2021: Differentially Private Hamiltonian Monte Carlo
  Ossi Räisä · Antti Koskela · Antti Honkela
- 2021: Computing Differential Privacy Guarantees for Heterogeneous Compositions Using FFT
  Antti Koskela · Antti Honkela
- 2021 Poster: Differentially Private Bayesian Inference for Generalized Linear Models
  Tejas Kulkarni · Joonas Jälkö · Antti Koskela · Samuel Kaski · Antti Honkela
- 2021 Spotlight: Differentially Private Bayesian Inference for Generalized Linear Models
  Tejas Kulkarni · Joonas Jälkö · Antti Koskela · Samuel Kaski · Antti Honkela
- 2017 Workshop: Private and Secure Machine Learning
  Antti Honkela · Kana Shimizu · Samuel Kaski