While the mixture of linear regressions (MLR) model is well studied, prior works usually do not analyze such models for prediction error; in fact, \emph{prediction} and \emph{loss} are not well defined in the context of mixtures. In this paper, we first show that MLR can be used for prediction where, instead of a single label, the model predicts a list of values (also known as \emph{list-decoding}). The list size equals the number of components in the mixture, and the loss function is defined as the minimum among the losses incurred by the component models. We show that with this definition, a solution of the empirical risk minimization (ERM) achieves a small probability of prediction error. This calls for an algorithm that minimizes the empirical risk for MLR, a problem known to be computationally hard. Prior algorithmic works on MLR focus on the \emph{realizable} setting, i.e., recovery of parameters when the data are probabilistically generated by a mixed linear (noisy) model. In this paper we show that a version of the popular expectation-maximization (EM) algorithm finds the best-fit lines in a dataset even when a realizable model is not assumed, under some regularity conditions on the dataset and the initial points, and thereby provides a solution for the ERM. We further give an algorithm that runs in time polynomial in the number of data points and recovers a good approximation of the best-fit lines. The two algorithms are compared experimentally.
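The two objects the abstract describes can be sketched concretely: the list-decoding loss takes, for each point, the minimum loss over all component models, and a hard-assignment EM-style (alternating minimization) procedure alternates between assigning each point to its best-fitting line and refitting each line by least squares. The function names and the hard-assignment variant below are illustrative assumptions, not the paper's exact algorithm:

```python
import numpy as np

def list_decoding_loss(X, y, thetas):
    """Min-over-components squared loss: each point's loss is the minimum
    among the losses incurred by the component models (a hypothetical
    rendering of the paper's list-decoding objective)."""
    # residuals[i, j] = (y_i - <x_i, theta_j>)^2
    residuals = (y[:, None] - X @ thetas.T) ** 2
    return residuals.min(axis=1).mean()

def alternating_minimization(X, y, k, iters=50, seed=0):
    """EM-style hard-assignment alternating minimization for a mixture of
    linear regressions: assign each point to its best-fitting line, then
    refit each line by least squares on its assigned points."""
    rng = np.random.default_rng(seed)
    n, d = X.shape
    thetas = rng.normal(size=(k, d))  # random initial points
    for _ in range(iters):
        # E-step (hard): assign each point to the component with smallest loss
        residuals = (y[:, None] - X @ thetas.T) ** 2
        assign = residuals.argmin(axis=1)
        # M-step: least-squares refit per component
        for j in range(k):
            mask = assign == j
            if mask.sum() >= d:  # skip components with too few points
                thetas[j], *_ = np.linalg.lstsq(X[mask], y[mask], rcond=None)
    return thetas
```

Each iteration cannot increase the min-over-components objective (a Lloyd-type monotonicity argument), so the procedure descends on the ERM objective; it does not, by itself, guarantee reaching the global best-fit lines, which is why the paper's regularity conditions on the dataset and initialization matter.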
Author Information
Soumyabrata Pal (UMass Amherst)
I am a fourth-year Ph.D. student in the Computer Science Department (CICS) at the University of Massachusetts Amherst, advised by Dr. Arya Mazumdar. Currently, I am a visiting scholar at the University of California, Berkeley. Prior to this, I graduated from the Indian Institute of Technology, Kharagpur in August 2016 with a Bachelor's degree in Electronics and Electrical Communication Engineering. My research interests are statistical machine learning, information theory, and coding theory. More specifically, I enjoy statistical recovery problems in different settings. Under this big umbrella, I have worked on problems related to random graph theory, high-dimensional integration, semi-supervised clustering, trace reconstruction, and parameter recovery in mixtures of standard distributions.
Arya Mazumdar (University of California, San Diego)
Rajat Sen (Google Research)
I am a fourth-year Ph.D. student in WNCG at UT Austin, advised by [Dr. Sanjay Shakkottai](http://users.ece.utexas.edu/~shakkott/Sanjay_Shakkottai/Contact.html). I received my Bachelor's degree in ECE from IIT Kharagpur in 2013. I spent most of my childhood in Durgapur and Kolkata, West Bengal, India. My research interests include online learning (especially multi-armed bandit problems), causality, learning in queueing systems, recommendation systems, and social networks. I like to work on real-world problems that allow rigorous theoretical analysis.
Avishek Ghosh (University of California, San Diego)
Related Events (a corresponding poster, oral, or spotlight)
- 2022 Poster: On Learning Mixture of Linear Regressions in the Non-Realizable Setting
  Wed. Jul 20 through Thu. Jul 21, Room Hall E #1315
More from the Same Authors
- 2022 Poster: Breaking the $\sqrt{T}$ Barrier: Instance-Independent Logarithmic Regret in Stochastic Contextual Linear Bandits
  Avishek Ghosh · Abishek Sankararaman
- 2022 Spotlight: Breaking the $\sqrt{T}$ Barrier: Instance-Independent Logarithmic Regret in Stochastic Contextual Linear Bandits
  Avishek Ghosh · Abishek Sankararaman
- 2021 Poster: Top-k eXtreme Contextual Bandits with Arm Hierarchy
  Rajat Sen · Alexander Rakhlin · Lexing Ying · Rahul Kidambi · Dean Foster · Daniel Hill · Inderjit Dhillon
- 2021 Spotlight: Top-k eXtreme Contextual Bandits with Arm Hierarchy
  Rajat Sen · Alexander Rakhlin · Lexing Ying · Rahul Kidambi · Dean Foster · Daniel Hill · Inderjit Dhillon
- 2020 Poster: Recovery of Sparse Signals from a Mixture of Linear Samples
  Soumyabrata Pal · Arya Mazumdar
- 2018 Poster: Multi-Fidelity Black-Box Optimization with Hierarchical Partitions
  Rajat Sen · Kirthevasan Kandasamy · Sanjay Shakkottai
- 2018 Oral: Multi-Fidelity Black-Box Optimization with Hierarchical Partitions
  Rajat Sen · Kirthevasan Kandasamy · Sanjay Shakkottai
- 2017 Poster: Identifying Best Interventions through Online Importance Sampling
  Rajat Sen · Karthikeyan Shanmugam · Alexandros Dimakis · Sanjay Shakkottai
- 2017 Talk: Identifying Best Interventions through Online Importance Sampling
  Rajat Sen · Karthikeyan Shanmugam · Alexandros Dimakis · Sanjay Shakkottai