

Poster

Near-optimal rate of consistency for linear models with missing values

Alexis Ayme · Claire Boyer · Aymeric Dieuleveut · Erwan Scornet

Hall E #505

Keywords: [ T: Learning Theory ] [ MISC: General Machine Learning Techniques ] [ MISC: Supervised Learning ] [ Miscellaneous Aspects of Machine Learning ]


Abstract:

Missing values arise in most real-world data sets due to the aggregation of multiple sources and intrinsically missing information (sensor failures, unanswered survey questions, etc.). In fact, the very nature of missing values usually prevents us from running standard learning algorithms. In this paper, we focus on the extensively studied linear models, but in the presence of missing values, which turns out to be quite a challenging task. Indeed, the Bayes predictor can be decomposed as a sum of predictors, one for each missing pattern. This eventually requires solving a number of learning tasks that is exponential in the number of input features, which makes prediction intractable for current real-world datasets. First, we propose a rigorous setting to analyze a least-squares-type estimator and establish a bound on the excess risk which increases exponentially in the dimension. We then leverage the missing-data distribution to propose a new algorithm, and derive associated adaptive risk bounds that turn out to be minimax optimal. Numerical experiments highlight the benefits of our method compared to state-of-the-art algorithms used for prediction with missing values.
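To make the pattern-wise decomposition concrete, here is a minimal sketch (not the authors' adaptive algorithm, and making no claim about their exact estimator) of the naive least-squares-per-pattern approach the abstract describes: group training rows by their missingness pattern, fit one ordinary least-squares model per pattern on the observed coordinates, and route each test row to the model matching its own pattern. The function names `fit_by_pattern` and `predict_by_pattern` are illustrative.

```python
import numpy as np

def fit_by_pattern(X, y):
    """Fit one OLS model per missing pattern (NaN marks a missing entry)."""
    models = {}
    patterns = np.isnan(X)
    for pattern in np.unique(patterns, axis=0):
        rows = np.all(patterns == pattern, axis=1)  # rows sharing this pattern
        obs = ~pattern                              # observed coordinates
        Xp = X[np.ix_(rows, obs)]
        Xp = np.hstack([np.ones((Xp.shape[0], 1)), Xp])  # add intercept
        beta, *_ = np.linalg.lstsq(Xp, y[rows], rcond=None)
        models[tuple(pattern)] = beta
    return models

def predict_by_pattern(models, X):
    """Predict each row with the model trained on its missing pattern."""
    yhat = np.full(X.shape[0], np.nan)
    for i, pattern in enumerate(np.isnan(X)):
        beta = models.get(tuple(pattern))
        if beta is None:
            continue  # pattern unseen at train time: no model available
        x_obs = X[i, ~pattern]
        yhat[i] = beta[0] + x_obs @ beta[1:]
    return yhat
```

With d input features there can be up to 2^d distinct patterns, so this naive estimator must solve exponentially many regression tasks; that blow-up is exactly what the excess-risk bound quantifies and what the paper's proposed algorithm mitigates.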
