In this paper, we study the problem of sparse mixed linear regression on an unlabeled dataset generated from linear measurements of two different regression parameter vectors. Since the data is unlabeled, our task is not only to find a good approximation of the regression parameter vectors but also to correctly label the dataset. In its original form, this problem is NP-hard. The most popular algorithms for this problem (such as Expectation-Maximization) have a tendency to get stuck at local minima. We provide a novel invex relaxation for this intractable problem which leads to a solution with provable theoretical guarantees. This relaxation enables exact recovery of the data labels. Furthermore, we recover close approximations of the regression parameter vectors which match the true parameter vectors in support and sign. Our formulation uses a carefully constructed primal-dual witness framework for the invex problem. Finally, we show that the sample complexity of our method is only logarithmic in the dimension of the regression parameter vectors.
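For readers unfamiliar with the setting, below is a minimal sketch of the mixed linear regression problem together with a hard-EM alternating baseline, the kind of local-search method the abstract notes can get stuck at local minima. This is not the paper's invex relaxation; all dimensions, noise levels, and variable names are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative problem sizes: n samples, d features, sparsity level k.
n, d, k = 200, 50, 5

# Two sparse ground-truth regression vectors sharing a support.
support = rng.choice(d, size=k, replace=False)
beta1, beta2 = np.zeros(d), np.zeros(d)
beta1[support] = rng.choice([-1.0, 1.0], size=k) * rng.uniform(1.0, 2.0, size=k)
beta2[support] = rng.choice([-1.0, 1.0], size=k) * rng.uniform(1.0, 2.0, size=k)

# Each unlabeled sample is a noisy linear measurement under one of the two vectors.
X = rng.standard_normal((n, d))
z = rng.integers(0, 2, size=n)                 # hidden labels, unknown to the learner
betas = np.stack([beta1, beta2])
y = np.einsum('ij,ij->i', X, betas[z]) + 0.1 * rng.standard_normal(n)

# Hard-EM baseline: assign each sample to the better-fitting model (E-step),
# then refit each component by least squares (M-step). Convergence to the
# global optimum is not guaranteed; this is the failure mode the paper avoids.
b = [rng.standard_normal(d), rng.standard_normal(d)]
for _ in range(50):
    resid = np.stack([(y - X @ bi) ** 2 for bi in b])  # squared residual per model
    assign = resid.argmin(axis=0)                      # E-step: pick closer model
    for c in range(2):                                 # M-step: refit each component
        if (assign == c).any():
            b[c], *_ = np.linalg.lstsq(X[assign == c], y[assign == c], rcond=None)

# Label accuracy up to the global swap of the two components.
print("label agreement:", max((assign == z).mean(), (assign != z).mean()))
```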
Author Information
Adarsh Barik (Purdue University)
Jean Honorio (Purdue University)
Related Events (a corresponding poster, oral, or spotlight)
- 2022 Poster: Sparse Mixed Linear Regression with Guarantees: Taming an Intractable Problem with Invex Relaxation »
  Tue. Jul 19th through Wed. Jul 20th, Hall E #1212
More from the Same Authors
- 2023 Poster: Exact Inference in High-order Structured Prediction »
  Chuyang Ke · Jean Honorio
- 2022 Poster: A Simple Unified Framework for High Dimensional Bandit Problems »
  Wenjie Li · Adarsh Barik · Jean Honorio
- 2022 Spotlight: A Simple Unified Framework for High Dimensional Bandit Problems »
  Wenjie Li · Adarsh Barik · Jean Honorio
- 2021 Poster: Meta Learning for Support Recovery in High-dimensional Precision Matrix Estimation »
  Qian Zhang · Yilin Zheng · Jean Honorio
- 2021 Poster: A Lower Bound for the Sample Complexity of Inverse Reinforcement Learning »
  Abi Komanduru · Jean Honorio
- 2021 Spotlight: A Lower Bound for the Sample Complexity of Inverse Reinforcement Learning »
  Abi Komanduru · Jean Honorio
- 2021 Spotlight: Meta Learning for Support Recovery in High-dimensional Precision Matrix Estimation »
  Qian Zhang · Yilin Zheng · Jean Honorio
- 2019 Poster: Optimality Implies Kernel Sum Classifiers are Statistically Efficient »
  Raphael Meyer · Jean Honorio
- 2019 Oral: Optimality Implies Kernel Sum Classifiers are Statistically Efficient »
  Raphael Meyer · Jean Honorio
- 2018 Poster: Learning Maximum-A-Posteriori Perturbation Models for Structured Prediction in Polynomial Time »
  Asish Ghoshal · Jean Honorio
- 2018 Oral: Learning Maximum-A-Posteriori Perturbation Models for Structured Prediction in Polynomial Time »
  Asish Ghoshal · Jean Honorio