

Poster

Adversarially Robust Deep Multi-View Clustering: A Novel Attack and Defense Framework

Haonan Huang · Guoxu Zhou · Yanghang Zheng · Yuning Qiu · Andong Wang · Qibin Zhao


Abstract:

Deep Multi-view Clustering (DMVC) is a widely adopted technique that aims to enhance clustering performance by leveraging diverse data sources. However, its vulnerability to adversarial attacks remains unexplored, largely due to the lack of well-defined attack objectives. To fill this crucial gap, this paper is the first to investigate the possibility of adversarial attacks on DMVC models. Specifically, we introduce an adversarial attack based on Generative Adversarial Networks (GANs) that aims to maximally disrupt the complementarity and consistency of multiple views, thereby leading to incorrect clustering. Building on this adversarial setting, on the defense side we propose a novel Adversarially Robust Deep Multi-View Clustering method that leverages adversarial training. Based on an analysis from an information-theoretic perspective, we design an Attack Mitigator that provides a foundation for guaranteeing the adversarial robustness of our DMVC models. Experiments conducted on multi-view datasets confirm that our attack framework effectively degrades the clustering performance of the target model. Furthermore, the proposed adversarially robust method is demonstrated to be an effective defense against such attacks. This work pioneers the exploration of adversarial threats to multi-view clustering and advances both the theoretical understanding and practical strategies for robust multi-view clustering.
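To make the attack idea concrete, below is a minimal, heavily simplified sketch (not the authors' code or objective) of a generator-based attack on a two-view clustering model. It assumes simple MLP view encoders, uses a cosine-agreement term as a stand-in for cross-view consistency, and omits the discriminator a full GAN formulation would use to keep perturbations imperceptible; the names ViewEncoder and PerturbGenerator and the loss are illustrative assumptions only.

```python
# Illustrative sketch of a generator-based attack on a two-view clustering
# model. Not the paper's method: encoders, generator, and loss are assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F

class ViewEncoder(nn.Module):
    """Stand-in for one view-specific encoder of a DMVC model."""
    def __init__(self, dim_in, dim_z=32):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(dim_in, 64), nn.ReLU(),
                                 nn.Linear(64, dim_z))
    def forward(self, x):
        return F.normalize(self.net(x), dim=1)

class PerturbGenerator(nn.Module):
    """Generator producing a bounded additive perturbation for one view."""
    def __init__(self, dim_in, eps=0.05):
        super().__init__()
        self.eps = eps
        self.net = nn.Sequential(nn.Linear(dim_in, 64), nn.ReLU(),
                                 nn.Linear(64, dim_in), nn.Tanh())
    def forward(self, x):
        return x + self.eps * self.net(x)   # perturbation bounded by +/- eps

# Toy data: two views of the same 128 samples.
x1, x2 = torch.randn(128, 20), torch.randn(128, 30)
enc1, enc2 = ViewEncoder(20), ViewEncoder(30)   # target model (not updated here)
gen1, gen2 = PerturbGenerator(20), PerturbGenerator(30)
opt = torch.optim.Adam(list(gen1.parameters()) + list(gen2.parameters()), lr=1e-3)

for step in range(100):
    z1, z2 = enc1(gen1(x1)), enc2(gen2(x2))
    # Illustrative attack objective: minimize cosine agreement between the
    # two views' embeddings, i.e., degrade cross-view consistency so that
    # downstream clustering on the fused representation deteriorates.
    loss = (z1 * z2).sum(dim=1).mean()
    opt.zero_grad()
    loss.backward()
    opt.step()
```

A defense in the spirit of adversarial training would then alternate this attack step with updates to the encoders on the perturbed views, which is the role the paper's Attack Mitigator plays in the proposed framework.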
