Machine learning models can leak information about the data used to train them. To mitigate this issue, Differentially Private (DP) variants of optimization algorithms like Stochastic Gradient Descent (DP-SGD) have been designed to trade off utility for privacy in Empirical Risk Minimization (ERM) problems. In this paper, we propose Differentially Private proximal Coordinate Descent (DP-CD), a new method to solve composite DP-ERM problems. We derive utility guarantees through a novel theoretical analysis of inexact coordinate descent. Our results show that, thanks to larger step sizes, DP-CD can exploit imbalance in gradient coordinates to outperform DP-SGD. We also prove new lower bounds for composite DP-ERM under coordinate-wise regularity assumptions, which are nearly matched by DP-CD. For practical implementations, we propose to clip gradients using coordinate-wise thresholds that emerge from our theory, avoiding costly hyperparameter tuning. Experiments on real and synthetic data support our results, and show that DP-CD compares favorably with DP-SGD.
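To make the idea concrete, below is a minimal, illustrative sketch of a differentially private proximal coordinate descent loop for a least-squares loss with L1 regularization. The function names (`dp_cd`, `soft_threshold`), the choice of loss, and the simple noise calibration are assumptions made for illustration only; they are not the paper's exact algorithm, privacy accounting, or constants.

```python
# Illustrative sketch of DP proximal coordinate descent (not the paper's exact method):
# per-coordinate clipping, Gaussian noise, per-coordinate step sizes, and a prox step.
import numpy as np

def soft_threshold(x, t):
    """Proximal operator of t * |x| (L1 regularization)."""
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def dp_cd(X, y, lam, clip_thresholds, step_sizes, sigma, n_iters, seed=None):
    """Sketch of DP coordinate descent for least squares + L1 (assumed setup).

    X: (n, d) features, y: (n,) targets, lam: L1 strength,
    clip_thresholds: (d,) per-coordinate clipping levels c_j,
    step_sizes: (d,) per-coordinate step sizes (e.g. inverse coordinate smoothness),
    sigma: noise multiplier controlling the privacy/utility trade-off (assumed).
    """
    rng = np.random.default_rng(seed)
    n, d = X.shape
    w = np.zeros(d)
    for _ in range(n_iters):
        j = rng.integers(d)                        # pick a coordinate at random
        residual = X @ w - y                       # least-squares residuals
        per_example_grads = X[:, j] * residual     # per-example partial derivatives
        c = clip_thresholds[j]
        clipped = np.clip(per_example_grads, -c, c)  # coordinate-wise clipping
        # Gaussian noise scaled to the sensitivity of the clipped average (2c/n).
        noisy_grad = clipped.mean() + rng.normal(0.0, sigma * 2.0 * c / n)
        # Gradient step on coordinate j, then prox of the L1 penalty.
        w[j] = soft_threshold(w[j] - step_sizes[j] * noisy_grad,
                              step_sizes[j] * lam)
    return w
```

When feature scales are imbalanced, per-coordinate thresholds and step sizes let each coordinate take a step calibrated to its own regularity, which is the imbalance the abstract refers to; a single global clipping norm, as in DP-SGD, cannot adapt in this way.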
Author Information
Paul Mangold (Inria)
Aurélien Bellet (INRIA)
Joseph Salmon (Université de Montpellier)
Marc Tommasi
Related Events (a corresponding poster, oral, or spotlight)
- 2022 Spotlight: Differentially Private Coordinate Descent for Composite Empirical Risk Minimization »
  Wed. Jul 20th 07:50 -- 07:55 PM, Room 307
More from the Same Authors
- 2022 Poster: Stochastic smoothing of the top-K calibrated hinge loss for deep imbalanced classification »
  Camille Garcin · Maximilien Servajean · Alexis Joly · Joseph Salmon
- 2022 Spotlight: Stochastic smoothing of the top-K calibrated hinge loss for deep imbalanced classification »
  Camille Garcin · Maximilien Servajean · Alexis Joly · Joseph Salmon
- 2020 Poster: Implicit differentiation of Lasso-type models for hyperparameter optimization »
  Quentin Bertrand · Quentin Klopfenstein · Mathieu Blondel · Samuel Vaiter · Alexandre Gramfort · Joseph Salmon
- 2019 Poster: Optimal Mini-Batch and Step Sizes for SAGA »
  Nidham Gazagnadou · Robert Gower · Joseph Salmon
- 2019 Poster: Screening rules for Lasso with non-convex Sparse Regularizers »
  Alain Rakotomamonjy · Gilles Gasso · Joseph Salmon
- 2019 Oral: Optimal Mini-Batch and Step Sizes for SAGA »
  Nidham Gazagnadou · Robert Gower · Joseph Salmon
- 2019 Oral: Screening rules for Lasso with non-convex Sparse Regularizers »
  Alain Rakotomamonjy · Gilles Gasso · Joseph Salmon
- 2019 Poster: Safe Grid Search with Optimal Complexity »
  Eugene Ndiaye · Tam Le · Olivier Fercoq · Joseph Salmon · Ichiro Takeuchi
- 2019 Oral: Safe Grid Search with Optimal Complexity »
  Eugene Ndiaye · Tam Le · Olivier Fercoq · Joseph Salmon · Ichiro Takeuchi
- 2018 Poster: A Probabilistic Theory of Supervised Similarity Learning for Pointwise ROC Curve Optimization »
  Robin Vogel · Aurélien Bellet · Stéphan Clémençon
- 2018 Oral: A Probabilistic Theory of Supervised Similarity Learning for Pointwise ROC Curve Optimization »
  Robin Vogel · Aurélien Bellet · Stéphan Clémençon