Smaller, more accurate regression forests using tree alternating optimization

Arman Zharmagambetov · Miguel Carreira-Perpiñán


Keywords: [ Supervised Learning ] [ Boosting / Ensemble Methods ] [ Other ]

Sessions: Tue 14 Jul, 11:00–11:45 a.m. PDT · Wed 15 Jul, 12:00–12:45 a.m. PDT


Regression forests, based on ensemble approaches such as bagging or boosting, have long been recognized as the leading off-the-shelf method for regression. However, forests rely on a greedy top-down procedure such as CART to learn each tree. We extend a recent algorithm for learning classification trees, Tree Alternating Optimization (TAO), to the regression case, and use it with bagging to construct regression forests of oblique trees, which have hyperplane splits at the decision nodes. On a wide range of datasets, we show that the resulting forests exceed the accuracy of state-of-the-art algorithms such as random forests, AdaBoost or gradient boosting, often considerably, while usually having fewer and shallower trees, and hence fewer parameters and faster inference overall. This result has immediate practical impact and demonstrates the power of optimization in ensemble learning.
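The abstract's key structural ingredients are oblique trees (each decision node tests a hyperplane, not a single feature) and bagging (averaging the trees' predictions). The sketch below illustrates only inference over such a forest; the class and function names are illustrative, not the paper's API, and the TAO training procedure itself is not shown.

```python
import numpy as np

class Leaf:
    """Leaf node holding a constant regression output."""
    def __init__(self, value):
        self.value = value

    def predict(self, x):
        return self.value

class ObliqueNode:
    """Decision node with a hyperplane split: route right if w @ x + b > 0.

    An axis-aligned (CART-style) split is the special case where w has a
    single nonzero entry.
    """
    def __init__(self, w, b, left, right):
        self.w, self.b = w, b
        self.left, self.right = left, right

    def predict(self, x):
        child = self.right if self.w @ x + self.b > 0 else self.left
        return child.predict(x)

def forest_predict(trees, x):
    """Bagged ensemble output: the average of the individual tree predictions."""
    return sum(t.predict(x) for t in trees) / len(trees)

# Tiny hand-built example: a one-split oblique tree plus a trivial tree.
tree1 = ObliqueNode(np.array([1.0, -1.0]), 0.0, Leaf(0.0), Leaf(1.0))
tree2 = Leaf(0.5)
y = forest_predict([tree1, tree2], np.array([2.0, 1.0]))  # (1.0 + 0.5) / 2
```

Because each oblique node evaluates one dot product, inference cost grows with tree depth and forest size, which is why fewer, shallower trees translate directly into faster prediction.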
