Poster
Meta Learning for Support Recovery in High-dimensional Precision Matrix Estimation
Qian Zhang · Yilin Zheng · Jean Honorio
In this paper, we study meta learning for support (i.e., the set of nonzero entries) recovery in high-dimensional precision matrix estimation, where we reduce the sufficient sample complexity in a novel task with the information learned from other auxiliary tasks. In our setup, each task has a different random true precision matrix, each with a possibly different support. We assume that the union of the supports of all the true precision matrices (i.e., the true support union) is small in size. We propose to pool all the samples from the different tasks, and \emph{improperly} estimate a single precision matrix by minimizing the $\ell_1$-regularized log-determinant Bregman divergence. We show that with high probability, the support of the \emph{improperly} estimated single precision matrix is equal to the true support union, provided a sufficient number of samples per task $n \in O((\log N)/K)$, for $N$-dimensional vectors and $K$ tasks. That is, one requires fewer samples per task when more tasks are available. We prove a matching information-theoretic lower bound for the necessary number of samples, which is $n \in \Omega((\log N)/K)$, and thus, our algorithm is minimax optimal. Then for the novel task, we prove that minimizing the $\ell_1$-regularized log-determinant Bregman divergence with the additional constraint that the support is a subset of the estimated support union reduces the sufficient sample complexity of successful support recovery to $O(\log(S_{\text{off}}))$, where $S_{\text{off}}$ is the number of off-diagonal elements in the support union and is much less than $N$ for sparse matrices. We also prove a matching information-theoretic lower bound of $\Omega(\log(S_{\text{off}}))$ for the necessary number of samples.
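The first stage of the procedure described above (pooling all samples across tasks and estimating a single precision matrix via the $\ell_1$-regularized log-determinant objective) can be sketched as follows. This is an illustrative sketch only, not the authors' implementation: it uses scikit-learn's `GraphicalLasso`, which solves the same $\ell_1$-regularized log-determinant objective, and the task count `K`, sample size `n`, dimension `N`, regularization weight `alpha`, and the synthetic standard-normal data are all hypothetical choices for demonstration.

```python
import numpy as np
from sklearn.covariance import GraphicalLasso

# Hypothetical setup: K auxiliary tasks, each with n samples of
# N-dimensional vectors. In the paper each task has its own random
# true precision matrix; standard-normal data is used here only so
# the sketch is runnable.
rng = np.random.default_rng(0)
K, n, N = 10, 20, 15
tasks = [rng.standard_normal((n, N)) for _ in range(K)]

# Step 1: pool all K*n samples from the different tasks.
pooled = np.vstack(tasks)  # shape (K*n, N)

# Step 2: improperly estimate a single precision matrix by minimizing
# the l1-regularized log-determinant objective (graphical lasso).
# alpha is an illustrative regularization weight, not a tuned value.
model = GraphicalLasso(alpha=0.1).fit(pooled)

# Step 3: the estimated support union is the set of (near-)nonzero
# entries of the estimated precision matrix.
support_union = np.abs(model.precision_) > 1e-8
print(support_union.shape)  # (N, N), here (15, 15)
```

For the novel task, the paper additionally constrains the optimization so that the support lies inside this estimated support union; off-the-shelf graphical lasso solvers do not expose that constraint, so that second stage is not shown here.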
Author Information
Qian Zhang (Purdue University)
Yilin Zheng ()
Jean Honorio (Purdue University)
Related Events (a corresponding poster, oral, or spotlight)

2021 Spotlight: Meta Learning for Support Recovery in High-dimensional Precision Matrix Estimation »
Wed Jul 21st 12:25 – 12:30 PM Room None
More from the Same Authors

2021 Poster: A Lower Bound for the Sample Complexity of Inverse Reinforcement Learning »
Abi Komanduru · Jean Honorio 
2021 Spotlight: A Lower Bound for the Sample Complexity of Inverse Reinforcement Learning »
Abi Komanduru · Jean Honorio 
2019 Poster: Optimality Implies Kernel Sum Classifiers are Statistically Efficient »
Raphael Meyer · Jean Honorio 
2019 Oral: Optimality Implies Kernel Sum Classifiers are Statistically Efficient »
Raphael Meyer · Jean Honorio 
2018 Poster: Learning Maximum-A-Posteriori Perturbation Models for Structured Prediction in Polynomial Time »
Asish Ghoshal · Jean Honorio 
2018 Oral: Learning Maximum-A-Posteriori Perturbation Models for Structured Prediction in Polynomial Time »
Asish Ghoshal · Jean Honorio