Poster

Improved Differentially Private and Lazy Online Convex Optimization: Lower Regret without Smoothness Requirements

Naman Agarwal · Satyen Kale · Karan Singh · Abhradeep Guha Thakurta

Hall C 4-9 #2401
Tue 23 Jul 4:30 a.m. PDT — 6 a.m. PDT

Abstract:

We design differentially private regret-minimizing algorithms in the online convex optimization (OCO) framework. Unlike recent results, our algorithms and analyses do not require smoothness, thus yielding the first private regret bounds with an optimal leading-order term for non-smooth loss functions. Additionally, even for smooth losses, the resulting regret guarantees improve upon previous results in terms of their dependence on the dimension. Our results provide the best known rates for DP-OCO in all practical regimes of the privacy parameter, barring when it is exceptionally small. The principal innovation in our algorithm design is the use of sampling from strongly log-concave densities which satisfy the Log-Sobolev Inequality. The resulting concentration of measure allows us to obtain a better trade-off for the dimension factors than prior work, leading to improved results. Following previous works on DP-OCO, the proposed algorithm explicitly limits the number of switches via rejection sampling. Thus, independently of privacy constraints, the algorithm also provides improved results for online convex optimization with a switching budget.
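To make the two ideas in the abstract concrete, here is a minimal sketch (not the paper's actual algorithm or analysis) of (1) playing a sample from a strongly log-concave follow-the-regularized-leader density and (2) limiting switches via a rejection-sampling coupling between consecutive rounds' densities. The instantiation is an assumption chosen for illustration: with linear losses and an l2 regularizer the FTRL density is Gaussian, so exact sampling is easy; the parameters `eta`, `lam`, `d`, and `T` are arbitrary toy values. The paper's algorithm handles general (non-smooth) convex losses and calibrates these quantities for privacy.

```python
# Illustrative sketch only: Gaussian FTRL sampling with lazy (low-switching)
# updates via a rejection-sampling coupling. All parameter values are assumed.
import numpy as np

rng = np.random.default_rng(0)
d, T = 5, 200          # dimension and horizon (toy values)
eta, lam = 0.5, 1.0    # learning rate and regularization strength (assumed)

def log_density(x, grad_sum):
    # Unnormalized log of p(x) ∝ exp(-eta * (<grad_sum, x> + lam/2 * ||x||^2)).
    # This is a Gaussian with mean -grad_sum/lam and covariance I/(eta*lam);
    # the covariance (hence the normalizer) is the same every round, so
    # unnormalized log-ratios between rounds equal the true log-ratios.
    return -eta * (grad_sum @ x + 0.5 * lam * (x @ x))

def sample(grad_sum):
    # Exact sample from the Gaussian FTRL density above.
    return -grad_sum / lam + rng.normal(size=d) / np.sqrt(eta * lam)

grad_sum = np.zeros(d)
x = sample(grad_sum)
switches = 0
for t in range(T):
    g = rng.normal(size=d)              # stand-in for the round-t loss gradient
    prev_sum, grad_sum = grad_sum, grad_sum + g

    # Coupling step: keep the old iterate with probability
    # min(1, p_new(x) / p_old(x)). This leaves the marginal of x equal to
    # p_new while switching only with probability TV(p_old, p_new).
    log_ratio = log_density(x, grad_sum) - log_density(x, prev_sum)
    if np.log(rng.uniform()) >= log_ratio:
        # Rejected: draw from the residual (p_new - p_old)_+ by proposing
        # from p_new and accepting with probability max(0, 1 - p_old/p_new).
        while True:
            y = sample(grad_sum)
            lr = log_density(y, prev_sum) - log_density(y, grad_sum)
            if rng.uniform() < max(0.0, 1.0 - np.exp(lr)):
                x = y
                switches += 1
                break

print(f"{switches} switches over {T} rounds")
```

Because consecutive FTRL densities are close, the coupling keeps the current iterate at most rounds, so the number of switches grows far more slowly than the horizon; this is the "lazy" behavior the abstract refers to, independent of the privacy mechanism.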
