Continual Learning With Participation Privacy: An Auditable Buffering-Aggregation Recipe
Hubert Chan ⋅ Elaine Shi ⋅ Mengshi Zhao ⋅ Mingxun Zhou
Abstract
Modern federated and streaming learning systems often release intermediate models, so privacy must hold for the full trajectory under adaptive interaction. Motivated by participation privacy, we study single-edit neighboring user streams, where one insertion or deletion shifts all subsequent updates and defeats standard Hamming-neighbor continual-release analyses. We give an auditable, modular recipe. A randomized buffering wrapper emits bins of size $[U,2U]$, reducing single-edit streams to a Hamming-style per-bin update stream with explicit backlog/delay guarantees, where $U$ is calibrated by the privacy parameters $(\varepsilon,\delta)$. We then prove a certification theorem for independently decomposable (prefix-causal, fresh-noise) continual mechanisms: any non-adaptive Hamming-neighbor DP proof lifts to adaptive inputs. Together, these ingredients yield trajectory-level $(\varepsilon,\delta)$-DP for single-edit streams using standard primitives (e.g., tree prefix sums), with an explicit privacy-latency link via $U$. Streaming DP-SGD experiments validate the privacy-utility-latency tradeoffs and the induced delay distributions.
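The abstract does not spell out the buffering wrapper's internals, so the following is only a minimal illustrative sketch of the general idea: accumulate incoming user updates and release them in bins whose sizes fall in $[U,2U]$, with a randomized release threshold per bin. The function name `buffered_bins` and the uniform threshold choice are assumptions for illustration, not the paper's exact construction.

```python
import random

def buffered_bins(stream, U, rng=None):
    """Illustrative sketch (assumed, not the paper's exact algorithm):
    buffer a stream of updates and emit bins of size in [U, 2U].

    Each bin's release threshold is drawn uniformly from {U, ..., 2U},
    so a single insertion or deletion in the input perturbs bin contents
    locally instead of shifting every subsequent released update.
    """
    rng = rng or random.Random()
    buf = []
    threshold = rng.randint(U, 2 * U)  # randint bounds are inclusive
    for update in stream:
        buf.append(update)
        if len(buf) >= threshold:
            yield buf               # emit one bin of size in [U, 2U]
            buf = []
            threshold = rng.randint(U, 2 * U)
    if buf:
        yield buf                   # final flush; may be smaller than U
```

Downstream, each emitted bin would be treated as one unit of a Hamming-style update stream, to which a standard continual-release mechanism (e.g., tree prefix sums) can be applied; larger $U$ buys privacy at the cost of release latency, matching the privacy-latency link via $U$ described above.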