Poster

Improved Dynamic Graph Learning through Fault-Tolerant Sparsification

Chunjiang Zhu · Sabine Storandt · Kam-Yiu Lam · Song Han · Jinbo Bi

Pacific Ballroom #138

Keywords: [ Supervised Learning ] [ Semi-Supervised Learning ] [ Online Learning ] [ Networks and Relational Learning ] [ Large Scale Learning and Big Data ]


Abstract:

Graph sparsification has been used to reduce the computational cost of learning over graphs, e.g., Laplacian-regularized estimation and graph semi-supervised learning (SSL). However, when graphs vary over time, repeated sparsification requires polynomial-order computational cost per update. We propose a new type of graph sparsification, namely fault-tolerant (FT) sparsification, that reduces this per-update cost to only a constant. The computational cost of subsequent graph learning tasks can then be significantly improved with limited loss in accuracy. In particular, we give a theoretical analysis that upper-bounds the loss in accuracy of the subsequent Laplacian-regularized estimation and graph SSL due to FT sparsification. In addition, FT spectral sparsification can be generalized to FT cut sparsification for cut-based graph learning. Extensive experiments confirm the computational efficiency and accuracy of the proposed methods for learning on dynamic graphs.
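For background, the sketch below illustrates Laplacian-regularized estimation, one of the downstream tasks the abstract mentions, on a toy graph. It is not the authors' FT sparsification method; the graph, the observed signal y, and the regularization weight lam are all illustrative assumptions. The estimator solves min_x ||x - y||^2 + lam * x^T L x, which has the closed form x = (I + lam L)^{-1} y.

```python
# Minimal sketch of Laplacian-regularized estimation (illustrative only;
# not the paper's code). All data below is assumed for demonstration.
import numpy as np

# Adjacency matrix of a small 4-node example graph (assumed data).
A = np.array([[0, 1, 1, 0],
              [1, 0, 1, 0],
              [1, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)
L = np.diag(A.sum(axis=1)) - A      # combinatorial graph Laplacian

y = np.array([1.0, 0.8, 0.2, 0.0])  # noisy node observations (assumed)
lam = 0.5                           # regularization strength (assumed)

# Closed-form Laplacian-regularized estimate: smooths y over the graph,
# pulling connected nodes toward similar values.
x = np.linalg.solve(np.eye(len(y)) + lam * L, y)
print(x)
```

Sparsification speeds up exactly this kind of computation: replacing L with the Laplacian of a (spectral) sparsifier keeps the solve accurate while using far fewer edges, which is where the paper's fault tolerance pays off on dynamic graphs.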
