

Poster

Robust Decision Trees Against Adversarial Examples

Hongge Chen · Huan Zhang · Duane Boning · Cho-Jui Hsieh

Pacific Ballroom #130

Keywords: [ Supervised Learning ] [ Ensemble Methods ] [ Adversarial Examples ]


Abstract:

Although adversarial examples and model robustness have been extensively studied in the context of neural networks, research on this issue in tree-based models, and on how to make tree-based models robust against adversarial examples, is still limited. In this paper, we show that tree-based models are also vulnerable to adversarial examples and develop a novel algorithm to learn robust trees. At its core, our method aims to optimize the performance under the worst-case perturbation of input features, which leads to a max-min saddle point problem. Incorporating this saddle point objective into the decision tree building procedure is non-trivial due to the discrete nature of trees: a naive approach to finding the best split according to this saddle point objective would take exponential time. To make our approach practical and scalable, we propose efficient tree building algorithms by approximating the inner minimizer in the saddle point problem, and present efficient implementations for classical information-gain-based trees as well as state-of-the-art tree boosting systems such as XGBoost. Experimental results on real-world datasets demonstrate that the proposed algorithms can significantly improve the robustness of tree-based models against adversarial examples.
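The following is a minimal sketch of the core idea described in the abstract: scoring a candidate split by the worst case an adversary can induce, with the inner minimizer approximated cheaply instead of enumerated exactly. It is an illustration under assumed simplifications, not the paper's exact algorithm: it assumes binary classification, Gini impurity as the split criterion, an L-infinity perturbation budget `eps`, and the function names (`robust_split_score`, `split_score`) are hypothetical.

```python
import numpy as np

def gini(counts):
    """Gini impurity of a (binary) class-count vector."""
    n = counts.sum()
    if n == 0:
        return 0.0
    p = counts / n
    return 1.0 - np.sum(p ** 2)

def split_score(left_counts, right_counts):
    """Weighted impurity of a split; lower is better."""
    n = left_counts.sum() + right_counts.sum()
    return (left_counts.sum() * gini(left_counts) +
            right_counts.sum() * gini(right_counts)) / n

def robust_split_score(x, y, threshold, eps):
    """
    Approximate worst-case score of splitting feature values `x` at
    `threshold` under an L-infinity perturbation budget `eps`.

    Points with |x_i - threshold| <= eps are ambiguous: the adversary can
    push them to either side. Enumerating all 2^m assignments of the m
    ambiguous points is exponential, so only a few coarse candidate
    assignments are evaluated and the worst (highest-impurity) one is kept.
    """
    left = x < threshold - eps          # always on the left
    right = x >= threshold + eps        # always on the right
    ambiguous = ~left & ~right

    def counts(mask):
        return np.array([np.sum(y[mask] == 0), np.sum(y[mask] == 1)], float)

    base_l, base_r = counts(left), counts(right)
    amb = counts(ambiguous)

    # Candidate adversarial assignments of the ambiguous points:
    # all left, all right, class 0 left / class 1 right, and the reverse.
    candidates = [
        (base_l + amb, base_r),
        (base_l, base_r + amb),
        (base_l + np.array([amb[0], 0.0]), base_r + np.array([0.0, amb[1]])),
        (base_l + np.array([0.0, amb[1]]), base_r + np.array([amb[0], 0.0])),
    ]
    return max(split_score(l, r) for l, r in candidates)

# The outer problem then picks the split that is best under this worst case,
# e.g. the threshold minimizing robust_split_score over candidate thresholds,
# which mirrors the max-min structure of the objective.
```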
