

Poster

Fast Estimation of Partial Dependence Functions using Trees

Jinyang Liu · Tessa Steensgaard · Marvin N. Wright · Niklas Pfister · Munir Hiabu

East Exhibition Hall A-B #E-1107
[ Project Page ]
Tue 15 Jul 11 a.m. PDT — 1:30 p.m. PDT

Abstract:

Many existing interpretation methods are based on Partial Dependence (PD) functions that, for a pre-trained machine learning model, capture how a subset of the features affects the predictions by averaging over the remaining features. Notable methods include Shapley additive explanations (SHAP), which computes feature contributions based on a game-theoretic interpretation, and PD plots (i.e., one-dimensional PD functions), which capture average marginal main effects. Recent work has connected these approaches using a functional decomposition and argues that SHAP values can be misleading since they merge main and interaction effects into a single local effect. However, a major advantage of SHAP compared to other PD-based interpretations has been the availability of fast estimation techniques, such as TreeSHAP. In this paper, we propose a new tree-based estimator, FastPD, which efficiently estimates arbitrary PD functions. We show that FastPD consistently estimates the desired population quantity -- in contrast to path-dependent TreeSHAP, which is inconsistent when features are correlated. For moderately deep trees, FastPD improves the complexity of existing methods from quadratic to linear in the number of observations. By estimating PD functions for arbitrary feature subsets, FastPD can be used to extract PD-based interpretations such as SHAP, PD plots and higher-order interaction effects.
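To make the estimated quantity concrete, below is a minimal Python sketch of the brute-force Monte Carlo PD estimator the abstract alludes to. The function and argument names are illustrative only; this is the quadratic-cost baseline that FastPD improves on, not the paper's algorithm.

```python
import numpy as np

def partial_dependence(model, X_background, feature_idx, grid):
    """Naive Monte Carlo estimate of a one-dimensional PD function.

    For each grid value v, the feature of interest is set to v in every
    background row and the model's predictions are averaged over the
    remaining features. Evaluating the PD function at all n observations
    (grid = the n observed feature values) costs on the order of n^2
    model evaluations -- the quadratic baseline mentioned in the abstract.
    """
    pd_values = []
    for v in grid:
        X_mod = X_background.copy()
        X_mod[:, feature_idx] = v              # intervene on the feature
        pd_values.append(model.predict(X_mod).mean())  # average the rest out
    return np.array(pd_values)
```

A typical usage would pass the training data as `X_background` and the observed values of feature `j` as `grid`, yielding the PD curve at every data point.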

Lay Summary:

Many powerful AI systems work by combining hundreds of simple “if-then” rules to make predictions—but these systems are often opaque, making it difficult to understand why they reach a particular decision. To address this, we introduce FastPD, an efficient algorithm that breaks down a model’s output into understandable parts, showing how each input factor and their combinations contribute to a prediction. FastPD runs in time that grows linearly with the amount of data, instead of quadratically, by reusing calculations across data points, and its explanations become exact as more background data are used. By cutting explanation times and clearly revealing how each factor and its interactions drive a prediction, FastPD empowers analysts, regulators, and everyday users to trust, debug, and manage AI-driven decisions more effectively.
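As a hedged illustration of why tree structure permits this kind of speed-up, the sketch below computes one tree's PD value with a single weighted traversal, in the spirit of Friedman's classic recursion for PD on trees. The node fields are a hypothetical tree representation, and this is not the paper's FastPD implementation.

```python
def tree_pd(node, x, S):
    """PD value of a single tree at point x for feature subset S.

    Splits on features in S follow the branch that x selects; splits on
    any other feature descend both children, weighted by the share of
    training samples each child received. One traversal per query point
    replaces re-predicting on every background row. (The node fields
    used here -- is_leaf, feature, threshold, left, right, n_samples,
    value -- are assumptions, not a specific library's API.)
    """
    if node.is_leaf:
        return node.value
    if node.feature in S:                      # conditioned-on feature
        child = node.left if x[node.feature] <= node.threshold else node.right
        return tree_pd(child, x, S)
    # marginalized feature: weight children by training coverage
    w_left = node.left.n_samples / node.n_samples
    return (w_left * tree_pd(node.left, x, S)
            + (1.0 - w_left) * tree_pd(node.right, x, S))
```

For an ensemble such as a random forest or gradient-boosted model, the PD value would be the (weighted) sum of `tree_pd` over its trees.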
