

Contributed talk in Workshop: Uncertainty and Robustness in Deep Learning

Detecting Extrapolation with Influence Functions

David Madras


Abstract:

In this work, we explore principled methods for extrapolation detection. We define extrapolation as occurring when a model’s conclusion at a test point is underdetermined by the training data. Our metrics for detecting extrapolation are based on influence functions, inspired by the intuition that a point requires extrapolation if its inclusion in the training set would significantly change the model’s learned parameters. We provide interpretations of our methods in terms of the eigendecomposition of the Hessian. We present experimental evidence that our method is capable of identifying extrapolation to out-of-distribution points.
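The score described above can be sketched concretely. A common influence-function quantity is the self-influence g(z)ᵀ H⁻¹ g(z), where g(z) is the loss gradient at a test point and H is the Hessian of the training objective; a large value means including z in training would move the parameters substantially, i.e. the model is extrapolating there. The sketch below is an illustrative reading of this idea, not the paper's exact method: the model (a small L2-regularized logistic regression), the data, and all hyperparameters are assumptions chosen so the Hessian and its eigendecomposition can be computed in closed form.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def train_logreg(X, y, lam=0.1, lr=0.5, steps=500):
    """Fit L2-regularized logistic regression by gradient descent
    (illustrative stand-in for the trained model)."""
    w = np.zeros(X.shape[1])
    for _ in range(steps):
        p = sigmoid(X @ w)
        w -= lr * (X.T @ (p - y) / len(y) + lam * w)
    return w

def self_influence(X, y, w, x_test, y_test, lam=0.1):
    """Extrapolation score g^T H^{-1} g, computed through the
    eigendecomposition of the (regularized) training Hessian."""
    p = sigmoid(X @ w)
    # Hessian of the regularized logistic loss: (1/n) X^T diag(p(1-p)) X + lam*I
    H = (X.T * (p * (1 - p))) @ X / len(y) + lam * np.eye(X.shape[1])
    evals, evecs = np.linalg.eigh(H)
    g = (sigmoid(x_test @ w) - y_test) * x_test  # per-point loss gradient
    coeffs = evecs.T @ g                         # g in the Hessian eigenbasis
    # Directions with small eigenvalues (little training-data curvature)
    # are exactly where H^{-1} amplifies the score.
    return float(np.sum(coeffs ** 2 / evals))

# Toy data: training inputs vary only along the first coordinate,
# so the second coordinate is an underdetermined direction.
rng = np.random.default_rng(0)
x1 = np.linspace(-2.0, 2.0, 50)
X = np.stack([x1, 0.01 * rng.standard_normal(50)], axis=1)
y = (x1 > 0).astype(float)
w = train_logreg(X, y)

score_in = self_influence(X, y, w, np.array([1.5, 0.0]), 1.0)   # in-distribution
score_ood = self_influence(X, y, w, np.array([0.0, 10.0]), 1.0)  # off the data manifold
```

In this construction the off-manifold point lies along an eigendirection whose curvature comes only from the regularizer, so its self-influence score is much larger than the in-distribution point's, matching the intuition that its label is underdetermined by the training data.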
