Feature Bagging Provides Stability
Abstract
We study the stability properties of feature bagging, an ensemble technique that improves robustness by training each learner on a randomly selected subset of features. We introduce feature stability (FS), a notion that quantifies the sensitivity of an algorithm's output to the removal of a single feature. This notion complements classical instance stability (IS); together, the two provide a more comprehensive framework for evaluating algorithmic stability. Within this framework, we analyze feature bagging in both a parametric linear model and an assumption-free setting inspired by recursive feature subsampling in random forests. In both cases, we establish formal stability guarantees showing that feature bagging strictly outperforms its non-bagged counterpart, with larger gains achieved at smaller subsampling ratios, and that only a modest number of bagging rounds suffices to attain near-optimal stability.
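To make the setup concrete, the following is a minimal sketch (not the paper's actual construction) of feature bagging with a least-squares base learner, together with a crude probe of feature stability that compares the ensemble's predictions before and after a single feature is removed. All names (feature_bagging_predict, the ratio and rounds parameters) and the choice of simulating removal by zeroing a column are illustrative assumptions.

```python
import numpy as np

def feature_bagging_predict(X_train, y_train, X_test, ratio=0.5, rounds=20, seed=0):
    """Average predictions of base learners, each fit on a random feature subset.

    ratio  : feature subsampling ratio (fraction of features given to each learner)
    rounds : number of bagging rounds (base learners averaged)
    """
    rng = np.random.default_rng(seed)          # fixed seed so subsets are reproducible
    n_features = X_train.shape[1]
    k = max(1, int(ratio * n_features))        # size of each random feature subset
    preds = np.zeros(X_test.shape[0])
    for _ in range(rounds):
        S = rng.choice(n_features, size=k, replace=False)   # draw a feature subset
        # Illustrative base learner: ordinary least squares on the selected features.
        beta, *_ = np.linalg.lstsq(X_train[:, S], y_train, rcond=None)
        preds += X_test[:, S] @ beta
    return preds / rounds

# Toy data.
rng = np.random.default_rng(1)
n, d = 200, 10
X = rng.standard_normal((n, d))
y = X @ rng.standard_normal(d) + 0.1 * rng.standard_normal(n)
X_te = rng.standard_normal((20, d))

# Feature-stability probe: how much do predictions change when feature j is removed?
# Removal is simulated here by zeroing the column; the same seed keeps the bagging
# subsets identical across the two runs, isolating the effect of the removal.
full = feature_bagging_predict(X, y, X_te, seed=42)
j = 3
X_drop, X_te_drop = X.copy(), X_te.copy()
X_drop[:, j] = 0.0
X_te_drop[:, j] = 0.0
dropped = feature_bagging_predict(X_drop, y, X_te_drop, seed=42)
print("max prediction change after removing feature", j, ":", np.max(np.abs(full - dropped)))
```

Rerunning the probe with a smaller ratio (e.g. 0.25) illustrates the abstract's qualitative claim that smaller subsampling ratios tend to damp the effect of removing any one feature, since fewer base learners ever see it.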