Abstract:
We initiate a systematic study of worst-group risk minimization under $(\epsilon,\delta)$-differential privacy (DP). The goal is to privately find a model that approximately minimizes the maximal risk across $p$ sub-populations (groups) with different distributions, where each group distribution is accessed via a sample oracle. We first present a new algorithm that achieves excess worst-group population risk of $\tilde{O}\big(\tfrac{p\sqrt{d}}{K\epsilon} + \sqrt{\tfrac{p}{K}}\big)$, where $K$ is the total number of samples drawn from all groups and $d$ is the problem dimension. Our rate is nearly optimal when each distribution is observed via a fixed-size dataset of size $K/p$. Our result is based on a new stability-based analysis for the generalization error. In particular, we show that $\Delta$-uniform argument stability implies $\tilde{O}\big(\Delta + \tfrac{1}{\sqrt{n}}\big)$ generalization error w.r.t. the worst-group risk, where $n$ is the number of samples drawn from each sample oracle. Next, we propose an algorithmic framework for worst-group population risk minimization using any DP online convex optimization algorithm as a subroutine. Hence, we give another excess risk bound of $\tilde{O}\big(\sqrt{\tfrac{d^{1/2}}{\epsilon K}} + \sqrt{\tfrac{p}{K\epsilon^2}} + \sqrt{\tfrac{p}{K}}\big)$. Assuming the typical setting of $\epsilon=\Theta(1)$, this bound is more favorable than our first bound in a certain range of $p$ as a function of $K$ and $d$. Finally, we study differentially private worst-group *empirical* risk minimization in the offline setting, where each group distribution is observed via a fixed-size dataset. We present a new algorithm with nearly optimal excess risk of $\tilde{O}\big(\tfrac{p\sqrt{d}}{K\epsilon}\big)$.
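For concreteness, the objective described above can be written as the following display; this is only a sketch of the standard formulation, and the symbols $\mathcal{W}$ (constraint set), $\mathcal{D}_1,\dots,\mathcal{D}_p$ (group distributions), $\ell$ (per-sample loss), and $w_{\mathrm{priv}}$ (the private output) are illustrative notation not fixed by this abstract.

% Worst-group population risk minimization (illustrative formulation):
% W is the constraint set, D_1, ..., D_p are the group distributions,
% and ell(w, z) is the per-sample loss.
\[
  \min_{w \in \mathcal{W}} \; \max_{i \in [p]} \; \mathbb{E}_{z \sim \mathcal{D}_i}\big[\ell(w, z)\big],
\]
% and the excess worst-group population risk of a (private) output w_priv is
\[
  \max_{i \in [p]} \mathbb{E}_{z \sim \mathcal{D}_i}\big[\ell(w_{\mathrm{priv}}, z)\big]
  \;-\; \min_{w \in \mathcal{W}} \max_{i \in [p]} \mathbb{E}_{z \sim \mathcal{D}_i}\big[\ell(w, z)\big].
\]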