MTNL: A Unified Modeling Perspective for Enhancing Tensor Network Learning
Abstract
Over the years, the unsupervised and supervised learning research directions of tensor networks (TNs) have largely developed in parallel. In this paper, we offer a unified perspective that enables their cooperative advancement through a novel mixed tensor network learning (MTNL) framework. Specifically, inspired by supervised TN learning tasks, MTNL fuses multiple TNs in a deep-network style, enhancing expressive power for unsupervised TN learning tasks. We then develop a more flexible TN structure-search prior with theoretical guarantees for learning multiple TN structures, aligning with trends in many supervised learning setups. More interestingly, by combining these components within a Bayesian framework, we show that MTNL induces a lightweight uncertainty quantification mechanism whose theoretical guarantees follow from its connection to a dropout-based counterpart, making the mechanism a potential alternative for large-scale learning problems. Finally, we demonstrate the effectiveness of the MTNL framework through experiments on tensor recovery, parameter-efficient fine-tuning, and tensor regression.