

Poster in Workshop: A Blessing in Disguise: The Prospects and Perils of Adversarial Machine Learning

Adversarial Robustness of Streaming Algorithms through Importance Sampling

Vladimir Braverman · Avinatan Hasidim · Yossi Matias · Mariano Schain · Sandeep Silwal · Samson Zhou


Abstract:

In the adversarial streaming model, an adversary gives an algorithm a sequence of adaptively chosen updates as a data stream, and the goal of the algorithm is to compute or approximate some predetermined function for every prefix of the adversarial stream. However, the adversary may generate future updates based on previous outputs of the algorithm; in particular, the adversary may gradually learn the random bits used internally by the algorithm and exploit them to manipulate dependencies in the input. This is especially problematic because many important problems in the streaming model require randomized algorithms: they are known not to admit any deterministic algorithms that use sublinear space. In this paper, we introduce adversarially robust streaming algorithms for central machine learning and algorithmic tasks, such as regression and clustering, as well as their more general counterparts: subspace embedding, low-rank approximation, and coreset construction. Our results are based on a simple but powerful observation: many importance-sampling-based algorithms give rise to adversarial robustness, in contrast to sketching-based algorithms, which are very prevalent in the streaming literature but suffer from adversarial attacks. In addition, we show that the well-known merge-and-reduce paradigm used for coreset construction in streaming is adversarially robust. To the best of our knowledge, these are the first adversarially robust results for these problems, and they require no new algorithmic implementations. Finally, we empirically confirm the robustness of our algorithms against various adversarial attacks and demonstrate that, by contrast, some common existing algorithms are not robust.
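To make the two ingredients of the abstract concrete, the sketch below illustrates what a generic importance-sampling "reduce" step combined with the merge-and-reduce paradigm might look like in a streaming setting. This is a minimal illustration, not the authors' implementation: the `score` function (a problem-specific sensitivity bound), `block_size`, `coreset_size`, and all other names are illustrative assumptions.

```python
import random

def importance_sample(points, weights, size, score):
    """Generic importance-sampling 'reduce' step (illustrative, not the
    paper's method): draw `size` points with probability proportional to
    a positive sensitivity score, reweighting each draw by 1/(size * prob)
    so that weighted sums remain unbiased."""
    totals = [score(p) * w for p, w in zip(points, weights)]
    z = sum(totals)
    probs = [t / z for t in totals]
    idx = random.choices(range(len(points)), weights=probs, k=size)
    return ([points[i] for i in idx],
            [weights[i] / (size * probs[i]) for i in idx])

class MergeReduceStream:
    """Merge-and-reduce over a stream: buffer incoming points into blocks,
    keep at most one coreset per 'level', and whenever two coresets collide
    at a level, merge them and reduce back to a fixed size (like binary
    addition), so only O(log n) coresets are stored at any time."""

    def __init__(self, score, block_size=64, coreset_size=64):
        self.score = score            # problem-specific sensitivity bound (assumed positive)
        self.block_size = block_size
        self.coreset_size = coreset_size
        self.buffer = []
        self.levels = {}              # level -> (points, weights)

    def insert(self, point):
        self.buffer.append(point)
        if len(self.buffer) == self.block_size:
            block = (self.buffer, [1.0] * len(self.buffer))
            self.buffer = []
            self._carry(block, level=0)

    def _carry(self, coreset, level):
        # Carry up the levels like binary addition: two level-i coresets
        # merge into one level-(i+1) coreset after a reduce step.
        while level in self.levels:
            other = self.levels.pop(level)
            merged = (coreset[0] + other[0], coreset[1] + other[1])
            coreset = importance_sample(*merged, self.coreset_size, self.score)
            level += 1
        self.levels[level] = coreset

    def query(self):
        """Union of buffered points and all stored coresets approximates
        the entire prefix of the stream seen so far."""
        pts, wts = list(self.buffer), [1.0] * len(self.buffer)
        for p, w in self.levels.values():
            pts.extend(p)
            wts.extend(w)
        return pts, wts
```

As a hypothetical usage, `MergeReduceStream(score=lambda p: 1.0 + abs(p))` would maintain a weighted summary of a stream of scalars; a real coreset construction would replace this score with the sensitivity bound appropriate to the target task (e.g., clustering or regression).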