

Poster

Schatten Norms in Matrix Streams: Hello Sparsity, Goodbye Dimension

Vladimir Braverman · Robert Krauthgamer · Aditya Krishnan · Roi Sinoff

Keywords: [ Optimization - Large Scale, Parallel and Distributed ] [ Matrix/Tensor Methods ] [ Sparsity and Compressed Sensing ] [ Large Scale Learning and Big Data ]


Abstract:

Spectral functions of large matrices contain important structural information about the underlying data and are thus becoming increasingly important to compute. Often, large matrices representing real-world data are sparse or doubly sparse (i.e., sparse in both rows and columns), and are accessed as a stream of updates, typically organized in row-order. In this setting, where space (memory) is the limiting resource, all known algorithms require space that is polynomial in the dimension of the matrix, even for sparse matrices. We address this challenge by providing the first algorithms whose space requirement is independent of the matrix dimension, assuming the matrix is doubly sparse and presented in row-order. Our algorithms approximate the Schatten p-norms, which we use in turn to approximate other spectral functions, such as the logarithm of the determinant, the trace of the matrix inverse, and the Estrada index. We validate these theoretical performance bounds by numerical experiments on real-world matrices representing social networks. We further prove that multiple passes are unavoidable in this setting, and show extensions of our primary technique, including a trade-off between space requirements and the number of passes.
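For background, the quantities above have standard definitions (stated here for reference; the notation is ours, not the poster's). For an n x n matrix A with singular values σ_1, ..., σ_n, the Schatten p-norm is

\[
\|A\|_p \;=\; \Big( \sum_{i=1}^{n} \sigma_i^{\,p} \Big)^{1/p},
\]

and for a symmetric positive-definite A with eigenvalues λ_1, ..., λ_n, the other spectral functions mentioned are likewise sums over the spectrum:

\[
\log\det(A) = \sum_{i=1}^{n} \log \lambda_i,
\qquad
\operatorname{tr}(A^{-1}) = \sum_{i=1}^{n} \frac{1}{\lambda_i},
\qquad
\mathrm{EE}(A) = \operatorname{tr}\!\big(e^{A}\big) = \sum_{i=1}^{n} e^{\lambda_i}.
\]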
