Poster

Do Outliers Ruin Collaboration?

Mingda Qiao

Hall B #23

Abstract: We consider the problem of learning a binary classifier from $n$ different data sources, among which at most an $\eta$ fraction are adversarial. The overhead is defined as the ratio between the sample complexity of learning in this setting and that of learning the same hypothesis class on a single data distribution. We present an algorithm that achieves an $O(\eta n + \ln n)$ overhead, which we prove to be worst-case optimal. We also discuss potential challenges in designing a computationally efficient learning algorithm with a small overhead.
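
As a minimal formal sketch of the overhead definition and the stated guarantee (the names $m_{\mathrm{adv}}$ and $m_{\mathrm{single}}$ for the two sample complexities, and the accuracy and confidence parameters $\epsilon, \delta$, are assumed notation, not taken from the abstract):

$$\mathrm{overhead}(n, \eta) \;=\; \frac{m_{\mathrm{adv}}(n, \eta, \epsilon, \delta)}{m_{\mathrm{single}}(\epsilon, \delta)}, \qquad \mathrm{overhead}(n, \eta) \;=\; O(\eta n + \ln n).$$

Read directly from the bound: when no source is adversarial ($\eta = 0$), the overhead is only $O(\ln n)$, while the adversarial term grows linearly in both $\eta$ and the number of sources $n$.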
