Sybil detection is a crucial task for protecting online social networks (OSNs) against intruders who try to manipulate the automatic services that OSNs provide to their customers. In this paper, we first discuss the robustness of the graph-based Sybil detectors SybilRank and Integro and theoretically refine their security guarantees under more realistic assumptions. We then formally introduce adversarial settings for the graph-based Sybil detection problem and derive a corresponding optimal attack strategy that exploits trust leaks. Based on our analysis, we propose transductive Sybil ranking (TSR), a robust extension of SybilRank and Integro that directly minimizes trust leaks. Our empirical evaluation shows significant advantages of TSR over state-of-the-art competitors across a variety of attack scenarios, on both artificially generated data and real-world datasets.
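For context, the SybilRank baseline that TSR extends propagates trust from a set of verified seed accounts via an early-terminated power iteration over the social graph and then ranks accounts by degree-normalized trust; a "trust leak" is the trust mass that escapes into the Sybil region through attack edges. Below is a minimal, hypothetical Python sketch of this baseline propagation (following Cao et al., NSDI 2012) using networkx. The function name `sybilrank_scores` and the toy graph are illustrative; this sketches the mechanism the paper hardens, not the TSR method itself.

```python
import math
import networkx as nx

def sybilrank_scores(G, trusted_seeds, num_iters=None):
    """SybilRank-style trust propagation: an early-terminated power
    iteration over the social graph, followed by degree normalization.
    Sketch only; assumes an undirected, connected graph."""
    n = G.number_of_nodes()
    if num_iters is None:
        # Early termination after O(log n) steps keeps trust from
        # fully mixing into the Sybil region through attack edges.
        num_iters = max(1, int(math.ceil(math.log2(n))))

    # All trust mass starts on the verified seed accounts.
    trust = {v: (1.0 if v in trusted_seeds else 0.0) for v in G}

    for _ in range(num_iters):
        # Each node splits its trust evenly over its edges; a node's
        # new trust is the sum of what its neighbors send it.
        trust = {v: sum(trust[u] / G.degree(u) for u in G.neighbors(v))
                 for v in G}

    # Degree-normalized trust: low scores flag likely Sybils.
    return {v: trust[v] / G.degree(v) for v in G}

# Toy example: two dense communities (honest: nodes 0-4, Sybil: nodes
# 5-9) joined by a single attack edge.
honest = nx.complete_graph(5)
sybil = nx.relabel_nodes(nx.complete_graph(5), {i: i + 5 for i in range(5)})
G = nx.union(honest, sybil)
G.add_edge(4, 5)  # the attack edge
scores = sybilrank_scores(G, trusted_seeds={0, 1})
print(sorted(scores.items(), key=lambda kv: kv[1]))  # Sybils rank lowest
```

The early termination is the key design choice here: if the walk were run to convergence, trust would reach its stationary distribution and leak fully into the Sybil region, erasing the gap between honest and Sybil scores. Per the abstract, TSR instead minimizes these trust leaks directly.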
Author Information
János Höner (TU Berlin / MathPlan)
Shinichi Nakajima (TU Berlin)
Alexander Bauer (TU Berlin)
Klaus-Robert Müller (Technische Universität Berlin)
Nico Görnitz (TU Berlin)
After an internship with the eScience Group led by David Heckerman (Microsoft Research, Los Angeles, US) in 2014, Nico received a scholarship and is currently a research associate in the machine learning group at the Berlin Institute of Technology (TU Berlin, Berlin, Germany), headed by Klaus-Robert Müller. Before that, Nico was employed as a research associate from 2010 to 2014 and, during 2010-2012, was also affiliated with the Friedrich Miescher Laboratory of the Max Planck Society in Tübingen, where he was co-advised by Gunnar Rätsch. He received a diploma degree (MSc equivalent) in computer engineering (Technische Informatik) from the Berlin Institute of Technology in 2010, with a thesis on machine learning for computer security.
Related Events (a corresponding poster, oral, or spotlight)
- 2017 Talk: Minimizing Trust Leaks for Robust Sybil Detection
  Wed. Aug 9th 01:42 -- 02:00 AM, Room C4.8
More from the Same Authors
- 2023 Poster: Relevant Walk Search for Explaining Graph Neural Networks
  Ping Xiong · Thomas Schnake · Michael Gastegger · Grégoire Montavon · Klaus-Robert Müller · Shinichi Nakajima
- 2022 Poster: Efficient Computation of Higher-Order Subgraph Attribution via Message Passing
  Ping Xiong · Thomas Schnake · Grégoire Montavon · Klaus-Robert Müller · Shinichi Nakajima
- 2022 Poster: XAI for Transformers: Better Explanations through Conservative Propagation
  Ameen Ali · Thomas Schnake · Oliver Eberle · Grégoire Montavon · Klaus-Robert Müller · Lior Wolf
- 2022 Spotlight: Efficient Computation of Higher-Order Subgraph Attribution via Message Passing
  Ping Xiong · Thomas Schnake · Grégoire Montavon · Klaus-Robert Müller · Shinichi Nakajima
- 2022 Spotlight: XAI for Transformers: Better Explanations through Conservative Propagation
  Ameen Ali · Thomas Schnake · Oliver Eberle · Grégoire Montavon · Klaus-Robert Müller · Lior Wolf
- 2021 Invited Talk 2 [12:52 -- 01:45 PM UTC]: Toward Explainable AI
  Klaus-Robert Müller · Wojciech Samek · Grégoire Montavon
- 2020 Workshop: XXAI: Extending Explainable AI Beyond Deep Models and Classifiers
  Wojciech Samek · Andreas Holzinger · Ruth Fong · Taesup Moon · Klaus-Robert Müller
- 2020 Poster: Fairwashing explanations with off-manifold detergent
  Christopher Anders · Plamen Pasliev · Ann-Kathrin Dombrowski · Klaus-Robert Müller · Pan Kessel
- 2018 Poster: Deep One-Class Classification
  Lukas Ruff · Nico Görnitz · Lucas Deecke · Shoaib Ahmed Siddiqui · Robert Vandermeulen · Alexander Binder · Emmanuel Müller · Marius Kloft
- 2018 Oral: Deep One-Class Classification
  Lukas Ruff · Nico Görnitz · Lucas Deecke · Shoaib Ahmed Siddiqui · Robert Vandermeulen · Alexander Binder · Emmanuel Müller · Marius Kloft