Poster

Discovering Conditionally Salient Features with Statistical Guarantees

Jaime Roquero Gimenez · James Zou

Pacific Ballroom #173

Keywords: [ Supervised Learning ] [ Statistical Learning Theory ] [ Interpretability ]


Abstract: The goal of feature selection is to identify important features that are relevant to explain an outcome variable. Most of the work in this domain has focused on identifying \emph{globally} relevant features, which are features related to the outcome using evidence across the entire dataset. We study a more fine-grained statistical problem: \emph{conditional feature selection}, where a feature may be relevant depending on the values of the other features. For example, in genetic association studies, variant $A$ could be associated with the phenotype in the entire dataset, but conditioned on variant $B$ being present it might be independent of the phenotype. In this sense, variant $A$ is globally relevant, but conditioned on $B$ it is no longer locally relevant in that region of the feature space. We present a generalization of the knockoff procedure that performs \emph{conditional feature selection} while controlling a generalization of the false discovery rate (FDR) to the conditional setting. By exploiting the model-free framework of knockoffs, the statistical FDR guarantee is not degraded even when we perform conditional feature selection. We implement this method and present an algorithm that automatically partitions the feature space to enhance the differences between the selected sets in different regions, and we validate the theoretical results with experiments.
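The abstract does not spell out the selection step, so as a rough illustration here is the standard knockoff+ thresholding rule of Barber and Candès that the procedure builds on: given knockoff statistics $W_j$ (large positive values suggest feature $j$ beats its knockoff), select all $j$ with $W_j \geq \tau$, where $\tau$ is the smallest $t$ whose estimated false discovery proportion is at most the target level $q$. This is a minimal sketch, not the authors' conditional method: the statistics `W` and level `q` are assumed inputs, and constructing valid knockoffs and partitioning the feature space are beyond its scope.

```python
import numpy as np

def knockoff_threshold(W, q=0.1):
    """Knockoff+ threshold: smallest t such that the estimated FDP
    (1 + #{j : W_j <= -t}) / max(1, #{j : W_j >= t}) is at most q."""
    for t in np.sort(np.abs(W[W != 0])):
        fdp_hat = (1 + np.sum(W <= -t)) / max(1, np.sum(W >= t))
        if fdp_hat <= q:
            return t
    return np.inf  # no threshold achieves the target FDR level

# Toy usage with synthetic statistics (not data from the paper):
# signals have mostly positive W_j, nulls are symmetric about zero.
rng = np.random.default_rng(0)
W = np.concatenate([rng.normal(3.0, 1.0, 20),   # 20 signal features
                    rng.normal(0.0, 1.0, 80)])  # 80 null features
t = knockoff_threshold(W, q=0.1)
selected = np.flatnonzero(W >= t)
print(f"threshold={t:.2f}, selected {selected.size} features")
```

The paper's contribution, per the abstract, is to run this kind of FDR-controlled selection separately within learned regions of the feature space while still controlling a conditional generalization of the FDR; the sketch above only shows the global building block.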
