Differentially Private Monotone Submodular Maximization Under Matroid and Knapsack Constraints
Omid Sadeghi · Maryam Fazel
Numerous tasks in machine learning and artificial intelligence have been modeled as submodular maximization problems. These problems usually involve sensitive data about individuals, so in addition to maximizing utility, privacy concerns must be addressed. In this paper, we study the general framework of non-negative monotone submodular maximization subject to matroid or knapsack constraints in both offline and online settings. For the offline setting, we propose a differentially private $(1-\frac{\kappa}{e})$-approximation algorithm, where $\kappa\in[0,1]$ is the total curvature of the submodular set function; this improves upon prior work in both approximation guarantee and query complexity under the same privacy budget. For the online setting, we propose the first differentially private algorithm, and we specify the conditions under which the regret bound scales as $O(\sqrt{T})$, i.e., privacy can be ensured while matching the optimal regret guarantee of the non-private setting.
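The standard recipe for making greedy submodular maximization differentially private is to replace each greedy argmax with a draw from the exponential mechanism. Below is a minimal sketch of that generic recipe for the special case of a cardinality constraint (a uniform matroid). This is not the paper's algorithm; the function `dp_greedy`, the even privacy-budget split, and the unit-sensitivity assumption on marginal gains are all illustrative.

```python
import math
import random

def dp_greedy(ground_set, f, k, epsilon, sensitivity=1.0):
    """Greedy maximization of a monotone submodular f under |S| <= k,
    where each greedy choice is sampled via the exponential mechanism.
    The privacy budget epsilon is split evenly across the k rounds, and
    marginal gains are assumed to change by at most `sensitivity`
    between neighboring datasets (an assumption, stated in the lead-in).
    """
    S = []
    eps_step = epsilon / k
    for _ in range(k):
        candidates = [e for e in ground_set if e not in S]
        # marginal gain of adding each remaining element
        gains = [f(S + [e]) - f(S) for e in candidates]
        g_max = max(gains)  # shift gains for numerical stability
        weights = [math.exp(eps_step * (g - g_max) / (2.0 * sensitivity))
                   for g in gains]
        S.append(random.choices(candidates, weights=weights)[0])
    return S

# Toy coverage function: f(S) = size of the union of the chosen sets
# (monotone and submodular).
sets = {0: {1, 2, 3}, 1: {3, 4}, 2: {5}, 3: {1, 5, 6}}

def coverage(S):
    covered = set()
    for e in S:
        covered |= sets[e]
    return len(covered)

random.seed(0)
sol = dp_greedy(list(sets), coverage, k=2, epsilon=1.0)
```

With a large privacy budget the sampled choices concentrate on the true greedy picks; as epsilon shrinks, the selection becomes closer to uniform, which is the usual privacy–utility trade-off.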
Author Information
Omid Sadeghi (University of Washington)
Maryam Fazel (University of Washington)
More from the Same Authors
- 2021: Improved Regret Bounds for Online Submodular Maximization (Omid Sadeghi · Maryam Fazel)
- 2021: Differentially Private Monotone Submodular Maximization Under Matroid and Knapsack Constraints (Maryam Fazel · Omid Sadeghi)
- 2018 Poster: Global Convergence of Policy Gradient Methods for the Linear Quadratic Regulator (Maryam Fazel · Rong Ge · Sham Kakade · Mehran Mesbahi)
- 2018 Oral: Global Convergence of Policy Gradient Methods for the Linear Quadratic Regulator (Maryam Fazel · Rong Ge · Sham Kakade · Mehran Mesbahi)