Decision Tree Learning on Product Spaces
Arshia Soltani Moakhar ⋅ Faraz Ghahremani ⋅ Kiarash Banihashem ⋅ MohammadTaghi Hajiaghayi
Abstract
Decision tree learning has long been a central topic in theoretical computer science, driven by its practical importance. A fundamental and widely used method for decision tree construction is the top-down greedy heuristic, which recursively splits on the most influential variable. Despite its empirical success, theoretical analysis of this heuristic has been limited. A recent breakthrough by Blanc et al. (ITCS, 2020) provided the first rigorous theoretical guarantees for the greedy approach, but only under the uniform distribution. We extend this analysis to the more general and practically relevant setting of arbitrary product distributions. Our main result shows that for any function $f$ computable by an optimal decision tree of size $s$, maximum depth $D_{\text{opt}}$, and average depth $\Delta_{\text{opt}}$, the greedy heuristic constructs an $\epsilon$-approximating tree whose size is at most $\exp\bigl(\Delta_{\text{opt}} D_{\text{opt}} \log(e/\epsilon)\bigr)$. In the special case where the optimal tree is a full binary tree, this bound improves upon that of Blanc et al. and holds under a strictly broader class of distributions. Moreover, we present an algorithm based on the top-down greedy heuristic that is entirely parameter-free: it requires no prior knowledge of the optimal tree's size or depth, offering a practical advantage over the method of Blanc et al.
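To make the top-down greedy heuristic concrete, the following is a minimal sketch (not the authors' algorithm): it estimates each coordinate's influence empirically by flipping that bit on samples drawn from a product distribution, splits on the most influential variable, and recurses on the two restricted subfunctions. All names (`influence`, `greedy_tree`, `evaluate`) and the sampling-based estimator are illustrative assumptions.

```python
import random

def influence(f, xs, i):
    # Empirical influence of coordinate i: fraction of sampled points
    # where flipping bit i changes f (an illustrative estimator).
    cnt = 0
    for x in xs:
        y = list(x)
        y[i] = 1 - y[i]
        if f(x) != f(tuple(y)):
            cnt += 1
    return cnt / len(xs)

def greedy_tree(f, n, dist, depth, samples=200):
    # dist[i] = Pr[x_i = 1] under the product distribution.
    xs = [tuple(1 if random.random() < dist[i] else 0 for i in range(n))
          for _ in range(samples)]
    if depth == 0:
        # Leaf: majority label of f under the sampled distribution.
        ones = sum(f(x) for x in xs)
        return ('leaf', 1 if 2 * ones >= len(xs) else 0)
    # Greedy step: split on the most influential variable.
    i = max(range(n), key=lambda j: influence(f, xs, j))
    def restrict(b):
        # Subfunction with coordinate i fixed to b.
        return lambda x: f(x[:i] + (b,) + x[i + 1:])
    return ('node', i,
            greedy_tree(restrict(0), n, dist, depth - 1, samples),
            greedy_tree(restrict(1), n, dist, depth - 1, samples))

def evaluate(tree, x):
    # Follow the tree's splits to a leaf label.
    if tree[0] == 'leaf':
        return tree[1]
    _, i, t0, t1 = tree
    return evaluate(t1 if x[i] else t0, x)
```

For example, on $f(x) = x_0 \wedge x_1$ over three bits under the uniform distribution, the variable $x_2$ has empirical influence zero, so the greedy split is made on $x_0$ or $x_1$ first, and a depth-2 tree computes $f$ exactly.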