

Poster in Workshop: HiLD: High-dimensional Learning Dynamics Workshop

Deep Neural Networks Extrapolate Cautiously in High Dimensions

Katie Kang · Amrith Setlur · Claire Tomlin · Sergey Levine


Abstract:

Conventional wisdom suggests that neural network predictions tend to be unpredictable and overconfident when faced with out-of-distribution (OOD) inputs. Our work aims to reassess this assumption, particularly with regard to neural networks with high-dimensional inputs. We find that as input data becomes increasingly OOD, neural network predictions actually tend to converge towards a constant value, rather than extrapolating in arbitrary ways. Furthermore, this value often closely approximates the optimal input-independent solution that minimizes training loss, which corresponds to a more cautious prediction for many common machine learning losses. Our empirical investigation suggests that this phenomenon exists across a broad array of datasets, distributional shifts, and loss functions. Finally, we study the mechanism responsible for this observed behavior, providing both an empirical and theoretical analysis.
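To make the notion of the "optimal input-independent solution" concrete, the following is a minimal sketch for the cross-entropy case, where the constant prediction minimizing average training loss is the empirical marginal over training labels. The abstract does not specify the authors' measurement procedure; the toy labels, the stand-in for model outputs on OOD inputs, and the use of KL divergence as the comparison metric are all illustrative assumptions, not the paper's method.

```python
import numpy as np

# Optimal constant solution (OCS) for cross-entropy loss: the single
# input-independent distribution minimizing average training loss is
# the empirical marginal distribution over training labels.
def optimal_constant_solution(train_labels, num_classes):
    counts = np.bincount(train_labels, minlength=num_classes)
    return counts / counts.sum()

# KL(p || q) for discrete distributions, used here to measure how close
# the model's average OOD prediction is to the OCS.
def kl_divergence(p, q, eps=1e-12):
    p = np.clip(p, eps, 1.0)
    q = np.clip(q, eps, 1.0)
    return float(np.sum(p * np.log(p / q)))

# --- toy illustration (hypothetical data; substitute a real model/dataset) ---
rng = np.random.default_rng(0)
num_classes = 10
train_labels = rng.integers(0, num_classes, size=50_000)  # placeholder labels
ocs = optimal_constant_solution(train_labels, num_classes)

# Stand-in for a trained network's softmax outputs on strongly OOD inputs
# (e.g., noise images); replace with actual model predictions.
ood_probs = rng.dirichlet(alpha=np.ones(num_classes), size=1_000)
mean_ood_prediction = ood_probs.mean(axis=0)

print("OCS (marginal label distribution):", np.round(ocs, 3))
print("Mean OOD prediction:              ", np.round(mean_ood_prediction, 3))
print("KL(mean OOD prediction || OCS):   ", kl_divergence(mean_ood_prediction, ocs))
```

Under the paper's claim, a real model's mean prediction on increasingly OOD inputs would drift toward the OCS, so the KL term above would shrink as the distribution shift grows.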
