When applying differential privacy to sensitive data, we can often improve performance using external information such as other sensitive data, public data, or human priors. We propose to use the learning-augmented algorithms (or algorithms with predictions) framework, previously applied largely to improve time complexity or competitive ratios, as a powerful way of designing and analyzing privacy-preserving methods that can take advantage of such external information to improve utility. This idea is instantiated on the important task of multiple quantile release, for which we derive error guarantees that scale with a natural measure of prediction quality while (almost) recovering state-of-the-art prediction-independent guarantees. Our analysis enjoys several advantages, including minimal assumptions about the data, a natural way of adding robustness, and the provision of useful surrogate losses for two novel "meta" algorithms that learn predictions from other (potentially sensitive) data. We conclude with experiments on challenging tasks demonstrating that learning predictions across one or more instances can lead to large error reductions while preserving privacy.
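To make the setting concrete, below is a minimal illustrative sketch of how an external prediction can enter a differentially private quantile release: an exponential mechanism over a fixed grid of candidate outputs whose (data-independent) base measure places extra mass near the supplied prediction. This is a hedged illustration of the general idea, not the algorithm analyzed in the paper; the function name private_quantile and its parameters (grid_size, prior_strength, prior_width) are our own.

```python
import numpy as np

def private_quantile(data, q, epsilon, lo, hi, prediction=None,
                     grid_size=1024, prior_strength=10.0, prior_width=None):
    """Release an epsilon-DP estimate of the q-th quantile of `data` over [lo, hi]."""
    x = np.clip(np.asarray(data, dtype=float), lo, hi)
    n = len(x)
    grid = np.linspace(lo, hi, grid_size)          # data-independent candidate outputs
    # Utility of a candidate t: minus the rank error |#{x_i <= t} - q*n|; adding or
    # removing one data point changes this by at most 1, so the sensitivity is 1.
    ranks = np.searchsorted(np.sort(x), grid, side="right")
    utility = -np.abs(ranks - q * n)
    # Data-independent log-prior: uniform by default, boosted near the prediction.
    log_prior = np.zeros(grid_size)
    if prediction is not None:
        width = prior_width if prior_width is not None else (hi - lo) / 100.0
        log_prior += np.where(np.abs(grid - prediction) <= width,
                              np.log(prior_strength), 0.0)
    # Exponential mechanism with a prior base measure: P(t) ∝ prior(t) * exp(eps*u(t)/2).
    scores = log_prior + (epsilon / 2.0) * utility
    probs = np.exp(scores - scores.max())          # numerically stable normalization
    probs /= probs.sum()
    return np.random.choice(grid, p=probs)
```

As a usage example, private_quantile(salaries, q=0.5, epsilon=1.0, lo=0, hi=500_000, prediction=60_000) releases a private median estimate whose accuracy improves when 60,000 is close to the true median; with a poor prediction the mechanism falls back toward its uniform-prior behavior. Releasing multiple quantiles would additionally require splitting the privacy budget or a joint mechanism, which is where the paper's analysis applies.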
Author Information
Mikhail Khodak (CMU)
Kareem Amin (Google Research)
Travis Dick (Google)
Sergei Vassilvitskii (Google)
More from the Same Authors
- 2021 : Label differential privacy via clustering
  Hossein Esfandiari · Vahab Mirrokni · Umar Syed · Sergei Vassilvitskii
- 2021 : Learning with User-Level Privacy
  Daniel A Levy · Ziteng Sun · Kareem Amin · Satyen Kale · Alex Kulesza · Mehryar Mohri · Ananda Theertha Suresh
- 2022 : Meta-Learning Adversarial Bandits
  Nina Balcan · Keegan Harris · Mikhail Khodak · Steven Wu
- 2023 Oral: Cross-Modal Fine-Tuning: Align then Refine
  Junhong Shen · Liam Li · Lucio Dery · Corey Staten · Mikhail Khodak · Graham Neubig · Ameet Talwalkar
- 2023 Poster: Subset-Based Instance Optimality in Private Estimation
  Travis Dick · Alex Kulesza · Ziteng Sun · Ananda Suresh
- 2023 Poster: Predictive Flows for Faster Ford-Fulkerson
  Sami Davies · Benjamin Moseley · Sergei Vassilvitskii · Yuyan Wang
- 2023 Poster: Cross-Modal Fine-Tuning: Align then Refine
  Junhong Shen · Liam Li · Lucio Dery · Corey Staten · Mikhail Khodak · Graham Neubig · Ameet Talwalkar
- 2023 Poster: Learning-augmented private algorithms for multiple quantile release
  Mikhail Khodak · Kareem Amin · Travis Dick · Sergei Vassilvitskii
- 2023 Poster: Label differential privacy and private training data release
  Robert Busa-Fekete · andres munoz · Umar Syed · Sergei Vassilvitskii
- 2023 Poster: Speeding Up Bellman Ford via Minimum Violation Permutations
  Silvio Lattanzi · Ola Svensson · Sergei Vassilvitskii
- 2020 : Lightning Talks Session 2
  Jichan Chung · Saurav Prakash · Mikhail Khodak · Ravi Rahman · Vaikkunth Mugunthan · xinwei zhang · Hossein Hosseini
- 2020 : 2.7 A Simple Setting for Understanding Neural Architecture Search with Weight-Sharing
  Mikhail Khodak
- 2020 Poster: A Sample Complexity Separation between Non-Convex and Convex Meta-Learning
  Nikunj Umesh Saunshi · Yi Zhang · Mikhail Khodak · Sanjeev Arora
- 2019 Poster: Bounding User Contributions: A Bias-Variance Trade-off in Differential Privacy
  Kareem Amin · Alex Kulesza · andres munoz · Sergei Vassilvitskii
- 2019 Poster: A Theoretical Analysis of Contrastive Unsupervised Representation Learning
  Nikunj Umesh Saunshi · Orestis Plevrakis · Sanjeev Arora · Mikhail Khodak · Hrishikesh Khandeparkar
- 2019 Oral: A Theoretical Analysis of Contrastive Unsupervised Representation Learning
  Nikunj Umesh Saunshi · Orestis Plevrakis · Sanjeev Arora · Mikhail Khodak · Hrishikesh Khandeparkar
- 2019 Oral: Bounding User Contributions: A Bias-Variance Trade-off in Differential Privacy
  Kareem Amin · Alex Kulesza · andres munoz · Sergei Vassilvitskii
- 2019 Poster: Provable Guarantees for Gradient-Based Meta-Learning
  Nina Balcan · Mikhail Khodak · Ameet Talwalkar
- 2019 Oral: Provable Guarantees for Gradient-Based Meta-Learning
  Nina Balcan · Mikhail Khodak · Ameet Talwalkar