

Poster

Data Poisoning Attacks in Multi-Party Learning

Saeed Mahloujifar · Mohammad Mahmoody · Ameer Mohammed

Keywords: [ Computational Learning Theory ] [ Robust Statistics and Machine Learning ] [ Theory and Algorithms ]

2019 Poster

Abstract: In this work, we demonstrate universal multi-party poisoning attacks that adapt and apply to any multi-party learning process with an arbitrary interaction pattern between the parties. More generally, we introduce and study $(k,p)$-poisoning attacks in which an adversary controls $k\in[m]$ of the parties, and for each corrupted party $P_i$, the adversary submits some poisoned data $T'_i$ on behalf of $P_i$ that is still "$(1-p)$-close" to the correct data $T_i$ (e.g., a $1-p$ fraction of $T'_i$ is still honestly generated). We prove that for any "bad" property $B$ of the final trained hypothesis $h$ (e.g., $h$ failing on a particular test example or having "large" risk) that has an arbitrarily small constant probability of happening without the attack, there always exists a $(k,p)$-poisoning attack that increases the probability of $B$ from $\mu$ to $\mu^{1-p \cdot k/m} = \mu + \Omega(p \cdot k/m)$. Our attack only uses clean labels, and it is online, as it only knows the data shared so far.
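As a quick illustration of the stated bound (not part of the poster itself), the minimal Python sketch below evaluates the amplification $\mu \to \mu^{1-p \cdot k/m}$ for hypothetical example values of $m$, $k$, $p$, and $\mu$; these parameters are assumptions chosen for illustration, since the result holds for any such setting.

```python
# Illustrative evaluation of the (k,p)-poisoning bound mu -> mu^(1 - p*k/m).
# The parameter values below are hypothetical examples, not values from the paper.

def amplified_probability(mu: float, k: int, m: int, p: float) -> float:
    """Probability of the bad property B guaranteed achievable by some
    (k,p)-poisoning attack, per the paper's bound mu^(1 - p*k/m)."""
    return mu ** (1 - p * k / m)

if __name__ == "__main__":
    mu = 0.05             # baseline probability of bad property B (assumed)
    m, k, p = 10, 5, 0.5  # 10 parties, 5 corrupted, half of each corrupted
                          # party's data may be poisoned (assumed)
    amplified = amplified_probability(mu, k, m, p)
    print(f"without attack:  {mu:.4f}")
    print(f"with attack:     {amplified:.4f}")
    # For constant mu, the increase mu^(1 - p*k/m) - mu is Omega(p*k/m).
    print(f"increase:        {amplified - mu:.4f}  (p*k/m = {p * k / m:.2f})")
```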
