Refining Dual Spectral Sparsity in Transformed Tensor Singular Values
Andong Wang ⋅ Yuning Qiu ⋅ Haonan Huang ⋅ Zhong Jin ⋅ Guoxu Zhou ⋅ Qibin Zhao
Abstract
The Tensor Nuclear Norm (TNN), derived from the tensor singular value decomposition, is a widely used low-rank modeling tool that enforces element-wise sparsity on frequency-domain singular values. However, as a direct extension of the matrix nuclear norm, TNN fundamentally assumes single-level spectral sparsity, which is misaligned with the multi-level spectral structures prevalent in real-world data, where low-rankness within frequency components coexists with sparsity across them. To overcome this limitation, we propose the tensor $\ell_p$-Schatten-$q$ quasi-norm ($p,q\in(0,1]$), which enables explicit control of dual spectral sparsity by jointly regularizing inter-frequency sparsity and intra-frequency low-rankness. This formulation strictly generalizes TNN and subsumes several existing tensor regularizers by coupling global frequency sparsity with local spectral low-rankness, leading to a fundamentally different modeling principle. We establish the first minimax error bounds under this model and develop an efficient reweighted optimization algorithm for the resulting nonconvex problem. Numerical experiments on noisy and Poisson tensor completion as well as image clustering demonstrate the effectiveness and robustness of our method across reconstruction and representation learning tasks involving complex multi-way data.
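To make the proposed regularizer concrete, the following is a minimal sketch of how a tensor $\ell_p$-Schatten-$q$ quasi-norm could be evaluated for a 3-way tensor, assuming the transform is the DFT along the third mode as in the standard t-SVD; the function name and this specific formulation are illustrative assumptions, not the authors' released implementation. The Schatten-$q$ term acts on the singular values within each frequency slice (intra-frequency low-rankness), while the outer $\ell_p$ aggregation acts across slices (inter-frequency sparsity); with $p=q=1$ it reduces, up to a constant factor, to the TNN.

```python
import numpy as np

def lp_schatten_q(X, p=0.5, q=0.5):
    """Illustrative sketch (not the paper's code) of a tensor
    l_p-Schatten-q quasi-norm for a 3-way array X, assuming the
    DFT along mode 3 as the spectral transform (t-SVD setting)."""
    Xf = np.fft.fft(X, axis=2)  # frequency-domain frontal slices
    per_slice = []
    for k in range(X.shape[2]):
        # singular values of the k-th frequency slice
        s = np.linalg.svd(Xf[:, :, k], compute_uv=False)
        # Schatten-q quasi-norm within the slice (intra-frequency)
        per_slice.append(np.sum(s ** q) ** (1.0 / q))
    # l_p aggregation across frequency slices (inter-frequency sparsity)
    return np.sum(np.array(per_slice) ** p) ** (1.0 / p)
```

For example, `lp_schatten_q(X, p=1.0, q=1.0)` sums all frequency-domain singular values, recovering the unnormalized TNN, while pushing `p` and `q` below 1 penalizes small singular values and weakly active frequency slices more aggressively, which is the dual spectral sparsity the abstract describes.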