
Adversarial Nonnegative Matrix Factorization
Lei Luo · Yanfu Zhang · Heng Huang

Wed Jul 15 12:00 PM -- 12:45 PM & Wed Jul 15 11:00 PM -- 11:45 PM (PDT) @ Virtual

Nonnegative Matrix Factorization (NMF) has become an increasingly important research topic in machine learning. Despite its practical success, most existing NMF models remain vulnerable to adversarial attacks. To overcome this limitation, we propose a novel Adversarial NMF (ANMF) approach in which an adversary exercises some control over the perturbed data generation process. Unlike traditional NMF models, which focus on either regular input or certain types of noise, our model considers potential test-time adversaries beyond the pre-defined constraints and can thus cope with various noises (or perturbations). We formulate the proposed model as a bilevel optimization problem and solve it with the Alternating Direction Method of Multipliers (ADMM), together with a convergence analysis. Theoretically, we establish the robustness of ANMF under mild conditions, yielding asymptotically unbiased prediction. Extensive experiments verify that ANMF is robust to a broad range of perturbations and achieves state-of-the-art performance on distinct real-world benchmark datasets.
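The abstract describes a bilevel scheme: an inner adversary perturbs the data while the outer problem factorizes it. The following is only a rough illustrative sketch of that idea, not the paper's ADMM algorithm; it alternates standard Lee-Seung multiplicative NMF updates with a crude gradient-ascent adversary projected onto an L2 ball (the radius `eps` and step size are assumptions for illustration).

```python
import numpy as np

def adversarial_nmf(X, rank, eps=0.1, outer_iters=50, inner_iters=5, seed=0):
    """Sketch of adversarially trained NMF (NOT the paper's ADMM solver):
    alternate multiplicative factor updates with a worst-case input
    perturbation kept inside an L2 ball of radius eps."""
    rng = np.random.default_rng(seed)
    m, n = X.shape
    W = rng.random((m, rank))
    H = rng.random((rank, n))
    delta = np.zeros_like(X)          # adversarial perturbation
    for _ in range(outer_iters):
        Xp = np.maximum(X + delta, 0.0)   # perturbed data, kept nonnegative
        for _ in range(inner_iters):
            # classical Lee-Seung multiplicative updates on the perturbed data
            H *= (W.T @ Xp) / (W.T @ W @ H + 1e-10)
            W *= (Xp @ H.T) / (W @ H @ H.T + 1e-10)
        # adversary: ascend the reconstruction error w.r.t. the input,
        # then project back onto the eps-ball (a stand-in for the
        # bilevel inner problem described in the abstract)
        grad = 2.0 * (Xp - W @ H)
        delta += 0.01 * grad
        norm = np.linalg.norm(delta)
        if norm > eps:
            delta *= eps / norm
    return W, H

# usage: factorize a random nonnegative matrix under adversarial perturbation
X = np.abs(np.random.default_rng(1).random((20, 30)))
W, H = adversarial_nmf(X, rank=5)
```

The multiplicative updates preserve nonnegativity of `W` and `H` as long as the perturbed input stays nonnegative, which the `np.maximum` clamp guarantees.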

Author Information

Lei Luo (JD)
Yanfu Zhang (University of Pittsburgh)
Heng Huang (University of Pittsburgh & JD Finance America Corporation)
