Graph sparsification has been used to reduce the computational cost of learning over graphs, e.g., Laplacian-regularized estimation and graph semi-supervised learning (SSL). However, when graphs vary over time, repeated sparsification requires polynomial-order computational cost per update. We propose a new type of graph sparsification, namely fault-tolerant (FT) sparsification, which reduces this cost to only a constant per update. The computational cost of subsequent graph learning tasks can then be significantly improved, with limited loss in accuracy. In particular, we give a theoretical analysis that upper-bounds the loss in accuracy of the subsequent Laplacian-regularized estimation and graph SSL due to the FT sparsification. In addition, FT spectral sparsification can be generalized to FT cut sparsification for cut-based graph learning. Extensive experiments confirm the computational efficiency and accuracy of the proposed methods for learning on dynamic graphs.
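The abstract does not specify the FT construction itself, but the spectral sparsification it builds on is classical: sample edges with probability proportional to weight times effective resistance and reweight the survivors (the Spielman–Srivastava scheme). A minimal illustrative sketch, assuming a dense NumPy Laplacian and a toy complete graph as input (all names here are hypothetical, not the paper's code):

```python
import numpy as np

def laplacian(n, edges):
    """Dense graph Laplacian from a list of (u, v, weight) edges."""
    L = np.zeros((n, n))
    for u, v, w in edges:
        L[u, u] += w
        L[v, v] += w
        L[u, v] -= w
        L[v, u] -= w
    return L

def effective_resistances(n, edges):
    """R_e = (e_u - e_v)^T L^+ (e_u - e_v) for each edge, via the pseudoinverse."""
    Lp = np.linalg.pinv(laplacian(n, edges))
    res = []
    for u, v, _ in edges:
        chi = np.zeros(n)
        chi[u], chi[v] = 1.0, -1.0
        res.append(chi @ Lp @ chi)
    return np.array(res)

def spectral_sparsify(n, edges, q, rng):
    """Draw q edges with prob. proportional to w_e * R_e; reweight by w_e / (q * p_e)."""
    R = effective_resistances(n, edges)
    w = np.array([e[2] for e in edges])
    p = w * R / np.sum(w * R)
    idx = rng.choice(len(edges), size=q, p=p)
    new = {}
    for i in idx:
        u, v, we = edges[i]
        new[(u, v)] = new.get((u, v), 0.0) + we / (q * p[i])
    return [(u, v, wt) for (u, v), wt in new.items()]

# Toy example: sparsify the complete graph K_6 down to at most 10 sampled edges
rng = np.random.default_rng(0)
n = 6
edges = [(u, v, 1.0) for u in range(n) for v in range(u + 1, n)]
sparse = spectral_sparsify(n, edges, q=10, rng=rng)
```

On a static graph this sampling preserves the Laplacian quadratic form with high probability; the point of the paper is that when edges change over time, redoing such a computation from scratch is expensive, which FT sparsification avoids.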
Author Information
Chunjiang Zhu (University of Connecticut)
Sabine Storandt (University of Wuerzburg)
Kam-Yiu Lam
Song Han
Jinbo Bi (University of Connecticut)
Related Events (a corresponding poster, oral, or spotlight)
-
2019 Poster: Improved Dynamic Graph Learning through Fault-Tolerant Sparsification
Wed. Jun 12th 01:30 -- 04:00 AM Room Pacific Ballroom #138
More from the Same Authors
-
2021 Poster: Spectral vertex sparsifiers and pair-wise spanners over distributed graphs
Chunjiang Zhu · Qinqing Liu · Jinbo Bi
-
2021 Spotlight: Spectral vertex sparsifiers and pair-wise spanners over distributed graphs
Chunjiang Zhu · Qinqing Liu · Jinbo Bi