

Learning and Data Selection in Big Datasets

Hossein Shokri Ghadikolaei · Hadi Ghauch · Carlo Fischione · Mikael Skoglund

Pacific Ballroom #170

Keywords: [ Supervised Learning ] [ Other Applications ] [ Non-convex Optimization ] [ Information Theory and Estimation ] [ Active Learning ]


Finding a dataset of minimal cardinality that characterizes the optimal parameters of a model is of paramount importance in machine learning and in distributed optimization over a network. This paper investigates the compressibility of large datasets. More specifically, we propose a framework that jointly learns the input-output mapping and the most representative samples of the dataset (the sufficient dataset). Our analytical results show that the cardinality of the sufficient dataset increases sub-linearly with the original dataset size. Numerical evaluations on real datasets reveal substantial compressibility, up to 95%, without a noticeable drop in learning performance, as measured by the generalization error.
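The idea of jointly learning a model and a small representative subset can be illustrated with a minimal sketch. This is not the paper's algorithm; it is an assumed greedy scheme in which a linear model is alternately refit on the current subset and the sample with the largest prediction error is added, until a fixed budget is reached. The function name `joint_learn_select`, the error-based selection rule, and the toy data are all illustrative choices, not taken from the paper.

```python
import numpy as np

def joint_learn_select(X, y, budget, rng_seed=0):
    """Illustrative sketch (not the paper's method): alternately fit a
    least-squares model on the selected subset and greedily add the
    sample with the largest current residual, up to `budget` samples."""
    rng = np.random.default_rng(rng_seed)
    n = X.shape[0]
    selected = list(rng.choice(n, size=2, replace=False))  # small seed subset
    while len(selected) < budget:
        S = np.array(selected)
        # refit the model on the current subset
        w, *_ = np.linalg.lstsq(X[S], y[S], rcond=None)
        residuals = np.abs(X @ w - y)
        residuals[S] = -np.inf  # never re-select a chosen sample
        selected.append(int(np.argmax(residuals)))
    S = np.array(selected)
    w, *_ = np.linalg.lstsq(X[S], y[S], rcond=None)
    return w, selected

# Toy usage: compress a 500-sample near-linear dataset to 20 samples
rng = np.random.default_rng(1)
X = rng.normal(size=(500, 5))
y = X @ np.array([1.0, -2.0, 0.5, 0.0, 3.0]) + 0.01 * rng.normal(size=500)
w, subset = joint_learn_select(X, y, budget=20)
gen_error = np.mean((X @ w - y) ** 2)  # error on the full dataset
```

On this toy example the model fit on 20 of 500 samples (96% compression) recovers the underlying linear map almost exactly, mirroring the kind of compressibility the abstract reports, though real datasets and models require the paper's actual selection criterion.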
