We introduce the concrete autoencoder, an end-to-end differentiable method for global feature selection, which efficiently identifies a subset of the most informative features and simultaneously learns a neural network to reconstruct the input data from the selected features. Our method is unsupervised and is based on using a concrete selector layer as the encoder and a standard neural network as the decoder. During the training phase, the temperature of the concrete selector layer is gradually decreased, which encourages a user-specified number of discrete features to be learned; at test time, the selected features can be used with the decoder network to reconstruct the remaining input features. We evaluate concrete autoencoders on a variety of datasets, where they significantly outperform state-of-the-art methods for feature selection and data reconstruction. In particular, on a large-scale gene expression dataset, the concrete autoencoder selects a small subset of genes whose expression levels can be used to impute the expression levels of the remaining genes; in doing so, it improves on the current widely used, expert-curated L1000 landmark genes, potentially reducing measurement costs by 20%. The concrete autoencoder can be implemented by adding just a few lines of code to a standard autoencoder, and the code for the algorithm and experiments is publicly available.
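As the abstract notes, the concrete autoencoder can be implemented by adding a few lines of code to a standard autoencoder. The following is a minimal PyTorch sketch of the idea, not the authors' released implementation: the class names, decoder architecture, temperature schedule, and training hyperparameters here are illustrative assumptions.

import torch
import torch.nn as nn
import torch.nn.functional as F
from torch.utils.data import DataLoader, TensorDataset


class ConcreteSelectorLayer(nn.Module):
    """Encoder that picks k of d input features via a Concrete (Gumbel-softmax) relaxation."""

    def __init__(self, input_dim: int, num_selected: int):
        super().__init__()
        # One row of learnable logits over the d input features per selected feature.
        self.logits = nn.Parameter(0.01 * torch.randn(num_selected, input_dim))
        self.temperature = 10.0  # annealed toward ~0.01 during training

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        if self.training:
            # Sample Gumbel noise and form a soft, differentiable one-hot selection.
            u = torch.rand_like(self.logits).clamp(1e-10, 1.0 - 1e-10)
            gumbel = -torch.log(-torch.log(u))
            weights = F.softmax((self.logits + gumbel) / self.temperature, dim=-1)
        else:
            # At test time the relaxation collapses to a hard argmax selection.
            weights = F.one_hot(self.logits.argmax(dim=-1), self.logits.shape[-1]).float()
        return x @ weights.t()  # shape: (batch, num_selected)


class ConcreteAutoencoder(nn.Module):
    """Concrete selector layer as the encoder, standard MLP as the decoder."""

    def __init__(self, input_dim: int, num_selected: int, hidden: int = 128):
        super().__init__()
        self.selector = ConcreteSelectorLayer(input_dim, num_selected)
        self.decoder = nn.Sequential(
            nn.Linear(num_selected, hidden),
            nn.LeakyReLU(),
            nn.Linear(hidden, input_dim),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.decoder(self.selector(x))


# Usage sketch on random data (sizes and schedule are assumptions for exposition).
data = torch.randn(512, 1000)  # 512 samples, 1000 features
loader = DataLoader(TensorDataset(data), batch_size=64, shuffle=True)
model = ConcreteAutoencoder(input_dim=1000, num_selected=50)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
start_temp, end_temp, num_epochs = 10.0, 0.01, 30
for epoch in range(num_epochs):
    # Exponential annealing implements the gradual temperature decrease.
    model.selector.temperature = start_temp * (end_temp / start_temp) ** (epoch / num_epochs)
    for (x,) in loader:
        loss = F.mse_loss(model(x), x)  # reconstruct all features from the selected subset
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()

model.eval()
selected_features = model.selector.logits.argmax(dim=-1)  # indices of the chosen features

The exponential annealing corresponds to the gradual temperature decrease described above: at high temperatures each selector row is a soft mixture of many features, and as the temperature falls each row effectively commits to a single discrete input feature, whose index is recovered by the argmax over the logits.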
Author Information
Muhammed Fatih Balın (Bogazici University)
Abubakar Abid (Stanford)
James Zou (Stanford University)
Related Events (a corresponding poster, oral, or spotlight)
- 2019 Oral: Concrete Autoencoders: Differentiable Feature Selection and Reconstruction »
  Thu. Jun 13th 06:20 -- 06:25 PM, Room 103
More from the Same Authors
- 2021 : Meaningfully Explaining a Model's Mistakes »
  Abubakar Abid · James Zou
- 2021 : MetaDataset: A Dataset of Datasets for Evaluating Distribution Shifts and Training Conflicts »
  Weixin Liang · James Zou
- 2021 : Have the Cake and Eat It Too? Higher Accuracy and Less Expense when Using Multi-label ML APIs Online »
  Lingjiao Chen · James Zou · Matei Zaharia
- 2021 : Machine Learning API Shift Assessments: Change is Coming! »
  Lingjiao Chen · James Zou · Matei Zaharia
- 2021 : Do Humans Trust Advice More if it Comes from AI? An Analysis of Human-AI Interactions »
  Kailas Vodrahalli · James Zou
- 2022 : On the nonlinear correlation of ML performance across data subpopulations »
  Weixin Liang · Yining Mao · Yongchan Kwon · Xinyu Yang · James Zou
- 2023 : Improve Model Inference Cost with Image Gridding »
  Shreyas Krishnaswamy · Lisa Dunlap · Lingjiao Chen · Matei Zaharia · James Zou · Joseph Gonzalez
- 2023 : Data-OOB: Out-of-bag Estimate as a Simple and Efficient Data Value »
  Yongchan Kwon · James Zou
- 2022 : GSCLIP : A Framework for Explaining Distribution Shifts in Natural Language »
  Zhiying Zhu · Weixin Liang · James Zou
- 2022 : Evaluation of ML in Health/Science »
  James Zou
- 2022 : Data Sculpting: Interpretable Algorithm for End-to-End Cohort Selection »
  Ruishan Liu · James Zou
- 2022 : Data Budgeting for Machine Learning »
  Weixin Liang · James Zou
- 2022 Poster: When and How Mixup Improves Calibration »
  Linjun Zhang · Zhun Deng · Kenji Kawaguchi · James Zou
- 2022 Poster: Efficient Online ML API Selection for Multi-Label Classification Tasks »
  Lingjiao Chen · Matei Zaharia · James Zou
- 2022 Poster: Improving Out-of-Distribution Robustness via Selective Augmentation »
  Huaxiu Yao · Yu Wang · Sai Li · Linjun Zhang · Weixin Liang · James Zou · Chelsea Finn
- 2022 Spotlight: Efficient Online ML API Selection for Multi-Label Classification Tasks »
  Lingjiao Chen · Matei Zaharia · James Zou
- 2022 Spotlight: Improving Out-of-Distribution Robustness via Selective Augmentation »
  Huaxiu Yao · Yu Wang · Sai Li · Linjun Zhang · Weixin Liang · James Zou · Chelsea Finn
- 2022 Spotlight: When and How Mixup Improves Calibration »
  Linjun Zhang · Zhun Deng · Kenji Kawaguchi · James Zou
- 2022 Poster: Meaningfully debugging model mistakes using conceptual counterfactual explanations »
  Abubakar Abid · Mert Yuksekgonul · James Zou
- 2022 Spotlight: Meaningfully debugging model mistakes using conceptual counterfactual explanations »
  Abubakar Abid · Mert Yuksekgonul · James Zou
- 2021 Poster: Improving Generalization in Meta-learning via Task Augmentation »
  Huaxiu Yao · Long-Kai Huang · Linjun Zhang · Ying Wei · Li Tian · James Zou · Junzhou Huang · Zhenhui (Jessie) Li
- 2021 Spotlight: Improving Generalization in Meta-learning via Task Augmentation »
  Huaxiu Yao · Long-Kai Huang · Linjun Zhang · Ying Wei · Li Tian · James Zou · Junzhou Huang · Zhenhui (Jessie) Li
- 2021 Poster: How to Learn when Data Reacts to Your Model: Performative Gradient Descent »
  Zachary Izzo · Lexing Ying · James Zou
- 2021 Spotlight: How to Learn when Data Reacts to Your Model: Performative Gradient Descent »
  Zachary Izzo · Lexing Ying · James Zou
- 2020 Poster: A Distributional Framework For Data Valuation »
  Amirata Ghorbani · Michael Kim · James Zou
- 2019 Poster: Discovering Conditionally Salient Features with Statistical Guarantees »
  Jaime Roquero Gimenez · James Zou
- 2019 Oral: Discovering Conditionally Salient Features with Statistical Guarantees »
  Jaime Roquero Gimenez · James Zou
- 2019 Poster: Data Shapley: Equitable Valuation of Data for Machine Learning »
  Amirata Ghorbani · James Zou
- 2019 Oral: Data Shapley: Equitable Valuation of Data for Machine Learning »
  Amirata Ghorbani · James Zou
- 2018 Poster: CoVeR: Learning Covariate-Specific Vector Representations with Tensor Decompositions »
  Kevin Tian · Teng Zhang · James Zou
- 2018 Oral: CoVeR: Learning Covariate-Specific Vector Representations with Tensor Decompositions »
  Kevin Tian · Teng Zhang · James Zou