Feature selection helps reduce data acquisition costs in ML, but the standard approach is to train models with static feature subsets. Here, we consider the dynamic feature selection (DFS) problem where a model sequentially queries features based on the presently available information. DFS is often addressed with reinforcement learning, but we explore a simpler approach of greedily selecting features based on their conditional mutual information. This method is theoretically appealing but requires oracle access to the data distribution, so we develop a learning approach based on amortized optimization. The proposed method is shown to recover the greedy policy when trained to optimality, and it outperforms numerous existing feature selection methods in our experiments, thus validating it as a simple but powerful approach for this problem.
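The greedy policy described above can be summarized in a few lines: at each step, score every unqueried feature by its estimated value given the features observed so far, acquire the highest-scoring one, and repeat until the query budget is exhausted. The following is a minimal sketch of that loop, not the authors' implementation; `predictor` (mapping a masked input to a prediction) and `selector` (an amortized network scoring candidate features, standing in for the conditional mutual information oracle) are hypothetical placeholders.

```python
import torch

def dynamic_feature_selection(x, predictor, selector, budget):
    """Sequentially acquire up to `budget` features for one example `x`."""
    mask = torch.zeros_like(x)                 # 1 = feature has been acquired
    for _ in range(budget):
        masked_x = x * mask                    # hide features not yet queried
        scores = selector(masked_x, mask)      # per-feature value estimates
        scores = scores.masked_fill(mask.bool(), float("-inf"))  # never re-query
        next_feature = scores.argmax()         # greedy choice
        mask[next_feature] = 1.0               # reveal the selected feature
    return predictor(x * mask, mask)           # predict from the acquired subset
```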
Author Information
Ian Covert (Stanford University)
Wei Qiu (University of Washington)
MingYu Lu (University of Washington)
Na Yoon Kim (University of Washington)
Nathan White (University of Washington)
Su-In Lee (University of Washington)
More from the Same Authors
- 2021: Disrupting Model Training with Adversarial Shortcuts
  Ivan Evtimov · Ian Covert · Aditya Kusupati · Tadayoshi Kohno
- 2023: Explanation-guided dynamic feature selection for medical risk prediction
  Nicasia Beebe-Wang · Wei Qiu · Su-In Lee
- 2021: Explainable AI for healthcare
  Su-In Lee