

Spotlight

Connecting Interpretability and Robustness in Decision Trees through Separation

Michal Moshkovitz · Yao-Yuan Yang · Kamalika Chaudhuri


Abstract:

Recent research has recognized interpretability and robustness as essential properties of trustworthy classification. Curiously, a connection between robustness and interpretability was observed empirically, but the theoretical reasoning behind it remained elusive. In this paper, we rigorously investigate this connection. Specifically, we focus on interpretation using decision trees and robustness to $\ell_\infty$-perturbations. Previous works defined the notion of r-separation as a sufficient condition for robustness. We prove upper and lower bounds on the tree size when the data is r-separated. We then show that a tighter bound on the size is possible when the data is linearly separated. We provide the first algorithm with provable guarantees on robustness, interpretability, and accuracy in the context of decision trees. Experiments confirm that our algorithm yields classifiers that are interpretable, robust, and accurate.
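To make the separation notion concrete: a common reading of r-separation in the prior work this paper builds on is that every pair of differently-labeled training points lies at $\ell_\infty$ distance at least 2r, so that perturbing any point by up to r cannot make it cross into the opposite class. The sketch below checks this property for a small dataset. It is an illustrative helper under that assumed definition; the function name `is_r_separated` and the NumPy implementation are ours, not the paper's algorithm or code.

```python
import numpy as np

def is_r_separated(X, y, r):
    """Check whether labeled data (X, y) is r-separated under the
    l_infinity metric: every pair of differently-labeled points must
    be at distance at least 2r. Illustrative sketch, not the paper's code."""
    X = np.asarray(X, dtype=float)
    y = np.asarray(y)
    for label in np.unique(y):
        A = X[y == label]   # points with this label
        B = X[y != label]   # points with any other label
        if len(A) == 0 or len(B) == 0:
            continue
        # Pairwise l_infinity distances between the two groups,
        # via broadcasting: shape (len(A), len(B)).
        dists = np.abs(A[:, None, :] - B[None, :, :]).max(axis=-1)
        if dists.min() < 2 * r:
            return False
    return True

# Example: two points of opposite classes at l_inf distance 1.0.
X = [[0.0, 0.0], [1.0, 1.0]]
y = [0, 1]
print(is_r_separated(X, y, 0.4))  # True:  1.0 >= 2 * 0.4
print(is_r_separated(X, y, 0.6))  # False: 1.0 <  2 * 0.6
```

In this reading, the largest r for which the data is r-separated is half the minimum inter-class $\ell_\infty$ distance, which is the quantity the paper's size bounds are stated in terms of.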
