Poster

DNNR: Differential Nearest Neighbors Regression

Youssef Nader · Leon Sixt · Tim Landgraf

Hall E #600

Keywords: [ MISC: Supervised Learning ] [ SA: Accountability, Transparency and Interpretability ] [ Theory ] [ MISC: General Machine Learning Techniques ]


Abstract:

K-nearest neighbors (KNN) is one of the earliest and most established algorithms in machine learning. For regression tasks, KNN averages the targets within a neighborhood which poses a number of challenges: the neighborhood definition is crucial for the predictive performance as neighbors might be selected based on uninformative features, and averaging does not account for how the function changes locally. We propose a novel method called Differential Nearest Neighbors Regression (DNNR) that addresses both issues simultaneously: during training, DNNR estimates local gradients to scale the features; during inference, it performs an n-th order Taylor approximation using estimated gradients. In a large-scale evaluation on over 250 datasets, we find that DNNR performs comparably to state-of-the-art gradient boosting methods and MLPs while maintaining the simplicity and transparency of KNN. This allows us to derive theoretical error bounds and inspect failures. In times that call for transparency of ML models, DNNR provides a good balance between performance and interpretability.
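To make the idea concrete, below is a minimal, illustrative sketch of a first-order DNNR-style prediction in Python (using numpy and scikit-learn). It is not the authors' implementation: the gradient-based feature scaling from training is omitted, higher-order Taylor terms are not included, and the hyperparameters k and k_grad are arbitrary choices for illustration.

import numpy as np
from sklearn.neighbors import NearestNeighbors

def dnnr_predict(X_train, y_train, X_query, k=5, k_grad=15):
    """First-order DNNR-style prediction (illustrative sketch only)."""
    nn = NearestNeighbors(n_neighbors=max(k, k_grad) + 1).fit(X_train)

    # Estimate a local gradient at each training point by least squares
    # over its own neighborhood: y_j - y_i ~ grad_i . (x_j - x_i).
    _, idx_grad = nn.kneighbors(X_train, n_neighbors=k_grad + 1)
    grads = np.empty_like(X_train, dtype=float)
    for i, nbrs in enumerate(idx_grad):
        nbrs = nbrs[1:]                      # drop the point itself
        dX = X_train[nbrs] - X_train[i]
        dy = y_train[nbrs] - y_train[i]
        sol, *_ = np.linalg.lstsq(dX, dy, rcond=None)
        grads[i] = sol

    # At inference, average the first-order Taylor approximations
    # y_i + grad_i . (x_query - x_i) over the k nearest neighbors,
    # instead of plain KNN averaging of the targets y_i.
    _, idx = nn.kneighbors(X_query, n_neighbors=k)
    preds = []
    for q, nbrs in zip(X_query, idx):
        taylor = y_train[nbrs] + np.einsum("ij,ij->i", grads[nbrs], q - X_train[nbrs])
        preds.append(taylor.mean())
    return np.array(preds)

Compared with plain KNN regression, the only change at inference time is that each neighbor's target is corrected by a locally estimated gradient term before averaging, which is what lets the prediction track how the function changes within the neighborhood.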
