Feature selection is an important challenge in machine learning. It plays a crucial role in the explainability of machine-driven decisions that are rapidly permeating modern society. Unfortunately, the explosion in the size and dimensionality of real-world datasets poses a severe challenge to standard feature selection algorithms. Today, it is not uncommon for datasets to have billions of dimensions. At such scale, even storing the feature vector is impossible, causing most existing feature selection methods to fail. Workarounds like feature hashing, a standard approach to large-scale machine learning, help with computational feasibility, but at the cost of losing the interpretability of the features. In this paper, we present MISSION, a novel framework for ultra large-scale feature selection that performs stochastic gradient descent while maintaining an efficient representation of the features in memory using a Count-Sketch data structure. MISSION retains the simplicity of feature hashing without sacrificing the interpretability of the features while using only O(log^2(p)) working memory. We demonstrate that MISSION accurately and efficiently performs feature selection on real-world, large-scale datasets with billions of dimensions.
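The data structure underlying MISSION is the Count-Sketch, which stores an approximate, mergeable summary of a huge vector (e.g. SGD weight updates over billions of features) in a small table. As a rough illustration of the idea only, not the authors' implementation, and with the depth/width parameters chosen arbitrarily, a minimal Count-Sketch might look like:

```python
import numpy as np

class CountSketch:
    """Minimal Count-Sketch: summarizes a p-dimensional vector in a
    depth x width table, far smaller than p."""

    def __init__(self, depth=5, width=2**10, seed=0):
        rng = np.random.default_rng(seed)
        self.depth, self.width = depth, width
        self.table = np.zeros((depth, width))
        # Per-row seeds for the bucket hash and the {-1,+1} sign hash.
        self.bucket_seeds = [int(s) for s in rng.integers(0, 2**31, size=depth)]
        self.sign_seeds = [int(s) for s in rng.integers(0, 2**31, size=depth)]

    def _hashes(self, idx):
        buckets = [hash((s, idx)) % self.width for s in self.bucket_seeds]
        signs = [1 if hash((s, idx)) % 2 == 0 else -1 for s in self.sign_seeds]
        return buckets, signs

    def update(self, idx, value):
        """Add `value` to feature `idx` (e.g. one SGD gradient step)."""
        buckets, signs = self._hashes(idx)
        for r in range(self.depth):
            self.table[r, buckets[r]] += signs[r] * value

    def query(self, idx):
        """Median-based estimate of the accumulated value of feature `idx`."""
        buckets, signs = self._hashes(idx)
        return np.median([signs[r] * self.table[r, buckets[r]]
                          for r in range(self.depth)])
```

Because each feature keeps its own index into the sketch, queries recover interpretable per-feature weights; this is what distinguishes the approach from plain feature hashing, where colliding features become indistinguishable. MISSION additionally tracks a heap of the top-k features by estimated magnitude, which is not shown in this sketch.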
Author Information
Amirali Aghazadeh (Stanford University)
Ryan Spring (Rice University)
Daniel LeJeune (Rice University)
I'm a Ph.D. student working under Richard Baraniuk in the DSP Group at Rice University. I'm interested in developing algorithms for solving machine learning and optimization problems.
Gautam Dasarathy (Rice University)
Anshumali Shrivastava (Rice University)
Anshumali Shrivastava is an associate professor in the computer science department at Rice University. His broad research interests include randomized algorithms for large-scale machine learning. In 2018, Science News named him one of the Top 10 scientists under 40 to watch. He is a recipient of the National Science Foundation CAREER Award, a Young Investigator Award from the Air Force Office of Scientific Research, and a Machine Learning Research Award from Amazon. His research on hashing inner products won the Best Paper Award at NIPS 2014, and his work on representing graphs won the Best Paper Award at IEEE/ACM ASONAM 2014. He received his Ph.D. from Cornell University in 2015.
Richard Baraniuk (OpenStax / Rice University)
Related Events (a corresponding poster, oral, or spotlight)
- 2018 Poster: Ultra Large-Scale Feature Selection using Count-Sketches
  Thu. Jul 12th 04:15 -- 07:00 PM, Room Hall B #27
More from the Same Authors
- 2022: Monotonic Risk Relationships under Distribution Shifts for Regularized Risk Minimization
  Daniel LeJeune · Jiayu Liu · Reinhard Heckel
- 2023 Poster: Deja Vu: Contextual Sparsity for Efficient LLMs at Inference Time
  Zichang Liu · Jue Wang · Tri Dao · Tianyi Zhou · Binhang Yuan · Zhao Song · Anshumali Shrivastava · Ce Zhang · Yuandong Tian · Christopher Re · Beidi Chen
- 2023 Poster: Hardware-Aware Compression with Random Operation Access Specific Tile (ROAST) Hashing
  Aditya P. Desai · Keren Zhou · Anshumali Shrivastava
- 2023 Oral: Deja Vu: Contextual Sparsity for Efficient LLMs at Inference Time
  Zichang Liu · Jue Wang · Tri Dao · Tianyi Zhou · Binhang Yuan · Zhao Song · Anshumali Shrivastava · Ce Zhang · Yuandong Tian · Christopher Re · Beidi Chen
- 2022 Poster: Improving Transformers with Probabilistic Attention Keys
  Tam Nguyen · Tan Nguyen · Dung Le · Duy Khuong Nguyen · Viet-Anh Tran · Richard Baraniuk · Nhat Ho · Stanley Osher
- 2022 Spotlight: Improving Transformers with Probabilistic Attention Keys
  Tam Nguyen · Tan Nguyen · Dung Le · Duy Khuong Nguyen · Viet-Anh Tran · Richard Baraniuk · Nhat Ho · Stanley Osher
- 2022 Poster: One-Pass Diversified Sampling with Application to Terabyte-Scale Genomic Sequence Streams
  Benjamin Coleman · Benito Geordie · Li Chou · R. A. Leo Elworth · Todd Treangen · Anshumali Shrivastava
- 2022 Spotlight: One-Pass Diversified Sampling with Application to Terabyte-Scale Genomic Sequence Streams
  Benjamin Coleman · Benito Geordie · Li Chou · R. A. Leo Elworth · Todd Treangen · Anshumali Shrivastava
- 2022 Poster: DRAGONN: Distributed Randomized Approximate Gradients of Neural Networks
  Zhuang Wang · Zhaozhuo Xu · Xinyu Wu · Anshumali Shrivastava · T. S. Eugene Ng
- 2022 Spotlight: DRAGONN: Distributed Randomized Approximate Gradients of Neural Networks
  Zhuang Wang · Zhaozhuo Xu · Xinyu Wu · Anshumali Shrivastava · T. S. Eugene Ng
- 2021 Poster: A Tale of Two Efficient and Informative Negative Sampling Distributions
  Shabnam Daghaghi · Tharun Medini · Nicholas Meisburger · Beidi Chen · Mengnan Zhao · Anshumali Shrivastava
- 2021 Oral: A Tale of Two Efficient and Informative Negative Sampling Distributions
  Shabnam Daghaghi · Tharun Medini · Nicholas Meisburger · Beidi Chen · Mengnan Zhao · Anshumali Shrivastava
- 2020 Poster: Subspace Fitting Meets Regression: The Effects of Supervision and Orthonormality Constraints on Double Descent of Generalization Errors
  Yehuda Dar · Paul Mayer · Lorenzo Luzi · Richard Baraniuk
- 2020 Poster: Sub-linear Memory Sketches for Near Neighbor Search on Streaming Data
  Benjamin Coleman · Richard Baraniuk · Anshumali Shrivastava
- 2020 Poster: Angular Visual Hardness
  Beidi Chen · Weiyang Liu · Zhiding Yu · Jan Kautz · Anshumali Shrivastava · Animesh Garg · Anima Anandkumar
- 2019 Poster: Compressing Gradient Optimizers via Count-Sketches
  Ryan Spring · Anastasios Kyrillidis · Vijai Mohan · Anshumali Shrivastava
- 2019 Oral: Compressing Gradient Optimizers via Count-Sketches
  Ryan Spring · Anastasios Kyrillidis · Vijai Mohan · Anshumali Shrivastava
- 2018 Poster: A Spline Theory of Deep Learning
  Randall Balestriero · Richard Baraniuk
- 2018 Poster: prDeep: Robust Phase Retrieval with a Flexible Deep Network
  Christopher Metzler · Phillip Schniter · Ashok Veeraraghavan · Richard Baraniuk
- 2018 Oral: prDeep: Robust Phase Retrieval with a Flexible Deep Network
  Christopher Metzler · Phillip Schniter · Ashok Veeraraghavan · Richard Baraniuk
- 2018 Oral: A Spline Theory of Deep Learning
  Randall Balestriero · Richard Baraniuk
- 2018 Poster: Spline Filters For End-to-End Deep Learning
  Randall Balestriero · Romain Cosentino · Herve Glotin · Richard Baraniuk
- 2018 Oral: Spline Filters For End-to-End Deep Learning
  Randall Balestriero · Romain Cosentino · Herve Glotin · Richard Baraniuk
- 2017 Poster: Multi-fidelity Bayesian Optimisation with Continuous Approximations
  Kirthevasan Kandasamy · Gautam Dasarathy · Barnabás Póczos · Jeff Schneider
- 2017 Poster: Optimal Densification for Fast and Accurate Minwise Hashing
  Anshumali Shrivastava
- 2017 Talk: Optimal Densification for Fast and Accurate Minwise Hashing
  Anshumali Shrivastava
- 2017 Talk: Multi-fidelity Bayesian Optimisation with Continuous Approximations
  Kirthevasan Kandasamy · Gautam Dasarathy · Barnabás Póczos · Jeff Schneider