

Invited Talk 1 in Workshop: Theory and Practice of Differential Privacy

Privacy as Stability, for Generalization

Katrina Ligett


Abstract:

Sequences of adaptively chosen queries issued against a fixed dataset are at the core of machine learning and data-driven sciences, but adaptivity can quickly lead to overfitting. In recent years, differential privacy---interpreted as a notion of stability---has emerged as a tool for (theoretically) protecting against such adaptivity by preventing query answers from encoding too much information about the dataset. In this talk, we'll explore how differential privacy achieves this, and begin to examine whether differential privacy is overkill for protecting against adaptivity.
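To make the idea concrete, here is a minimal, hypothetical sketch (not from the talk) of the standard mechanism underlying this line of work: answering a statistical query with Laplace noise rather than exactly, so that the answer reveals only a bounded amount of information about the dataset. The function name `dp_mean` and all parameters are illustrative assumptions.

```python
import numpy as np

def dp_mean(data, epsilon, rng):
    """Answer a mean query via the Laplace mechanism (illustrative sketch).

    For data with entries in [0, 1], the mean has sensitivity 1/n under
    change of a single record, so adding Laplace noise of scale
    1/(n * epsilon) yields an epsilon-differentially-private answer.
    """
    n = len(data)
    return float(np.mean(data)) + rng.laplace(scale=1.0 / (n * epsilon))

# A fixed dataset; an analyst may issue queries against it adaptively.
rng = np.random.default_rng(0)
data = rng.uniform(size=1000)

exact = float(np.mean(data))
noisy = dp_mean(data, epsilon=1.0, rng=rng)
# With n = 1000 and epsilon = 1, the noise scale is only 0.001, so the
# answer stays accurate while limiting what it encodes about the data.
print(abs(noisy - exact))
```

Each noisy answer consumes some privacy budget; composing many adaptive queries while bounding the total budget is what yields the generalization guarantees the abstract alludes to.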