

Poster

Input-Sparsity Low Rank Approximation in Schatten Norm

Yi Li · David Woodruff

Keywords: [ Dimensionality Reduction ] [ Non-convex Optimization ] [ Matrix/Tensor Methods ] [ Unsupervised and Semi-supervised Learning ]


Abstract: We give the first input-sparsity time algorithms for the rank-$k$ low rank approximation problem in every Schatten norm. Specifically, for a given $n \times n$ matrix $A$, our algorithm computes $Y, Z \in \mathbb{R}^{n \times k}$, which, with high probability, satisfy $\|A - YZ^T\|_p \le (1+\epsilon)\|A - A_k\|_p$, where $\|M\|_p = \left(\sum_{i=1}^n \sigma_i(M)^p\right)^{1/p}$ is the Schatten $p$-norm of a matrix $M$ with singular values $\sigma_1(M) \ge \cdots \ge \sigma_n(M)$, and where $A_k$ is the best rank-$k$ approximation to $A$. Our algorithm runs in time $\tilde{O}(\mathrm{nnz}(A) + n^{\alpha_p}\,\mathrm{poly}(k/\epsilon))$, where $\alpha_p = 1$ for $p \in [1,2)$ and $\alpha_p = 1 + (\omega - 1)(1 - 2/p)$ for $p > 2$, and $\omega \approx 2.374$ is the exponent of matrix multiplication. For the important case of $p = 1$, which corresponds to the "more robust" nuclear norm, we obtain $\tilde{O}(\mathrm{nnz}(A) + n \cdot \mathrm{poly}(k/\epsilon))$ time, which was previously only known for the Frobenius norm ($p = 2$). Moreover, since $\alpha_p < \omega$ for every $p$, our algorithm has a better dependence on $n$ than that in the singular value decomposition for every $p$. Crucial to our analysis is the use of dimensionality reduction for Ky Fan $p$-norms.
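For reference, the quantities the abstract defines — the Schatten $p$-norm and the best rank-$k$ approximation $A_k$ — can be computed directly via a full SVD. This is a minimal sketch of the definitions only (the helper names are illustrative); it is not the paper's fast input-sparsity algorithm, whose point is precisely to avoid this $O(n^\omega)$ SVD computation:

```python
import numpy as np

def schatten_norm(M, p):
    """Schatten p-norm: (sum_i sigma_i(M)^p)^(1/p), over the singular values of M."""
    s = np.linalg.svd(M, compute_uv=False)
    return float((s ** p).sum() ** (1.0 / p))

def best_rank_k(A, k):
    """Best rank-k approximation A_k via truncated SVD.

    By the Eckart-Young-Mirsky theorem, the truncated SVD is optimal
    simultaneously in every Schatten p-norm.
    """
    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    return U[:, :k] @ np.diag(s[:k]) @ Vt[:k, :]

# Example: nuclear-norm (p = 1) error of the best rank-5 approximation
# of a random 50 x 50 matrix.
rng = np.random.default_rng(0)
A = rng.standard_normal((50, 50))
Ak = best_rank_k(A, k=5)
err = schatten_norm(A - Ak, p=1)
```

The guarantee in the abstract says the pair $(Y, Z)$ returned by the fast algorithm achieves error at most $(1+\epsilon)$ times this baseline `err`, while touching $A$ only through its nonzero entries.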
