Poster

Measuring the Effect of Training Data on Deep Learning Predictions via Randomized Experiments

Jinkun Lin · Anqi Zhang · Mathias Lécuyer · Jinyang Li · Aurojit Panda · Siddhartha Sen

Hall E #936

Keywords: [ SA: Trustworthy Machine Learning ] [ MISC: Causality ] [ SA: Accountability, Transparency and Interpretability ]


Abstract: We develop a new, principled algorithm for estimating the contribution of training data points to the behavior of a deep learning model, such as a specific prediction it makes. Our algorithm estimates the AME, a quantity that measures the expected (average) marginal effect of adding a data point to a subset of the training data, sampled from a given distribution. When subsets are sampled from the uniform distribution, the AME reduces to the well-known Shapley value. Our approach is inspired by causal inference and randomized experiments: we sample different subsets of the training data to train multiple submodels, and evaluate each submodel's behavior. We then use a LASSO regression to jointly estimate the AME of each data point, based on the subset compositions. Under sparsity assumptions ($k \ll N$ data points have large AME), our estimator requires only $O(k\log N)$ randomized submodel trainings, improving upon the best prior Shapley value estimators.
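The following is a minimal sketch of the subset-sampling-plus-LASSO idea described in the abstract, not the paper's exact estimator: it assumes a single fixed inclusion probability rather than a distribution over sampling rates, and the helpers `train_submodel` and `evaluate_behavior` are hypothetical stand-ins for "train on a data subset" and "measure the model behavior of interest" (e.g., the score of one target prediction).

```python
import numpy as np
from sklearn.linear_model import LassoCV

def estimate_ame(train_data, num_submodels, train_submodel, evaluate_behavior,
                 p_include=0.5, seed=None):
    """Sketch of AME estimation via randomized subset training and LASSO.

    Each training point is included in each submodel's training set
    independently with probability p_include; per-point effects are then
    recovered by regressing the observed submodel behavior on the
    inclusion indicators.
    """
    rng = np.random.default_rng(seed)
    n = len(train_data)
    X = np.zeros((num_submodels, n))   # inclusion indicators (design matrix)
    y = np.zeros(num_submodels)        # observed behavior of each submodel

    for t in range(num_submodels):
        mask = rng.random(n) < p_include                      # random training subset
        X[t] = mask.astype(float)
        subset = [train_data[i] for i in np.flatnonzero(mask)]
        submodel = train_submodel(subset)                     # hypothetical helper
        y[t] = evaluate_behavior(submodel)                    # hypothetical helper

    # Sparse regression: if only k << N points have large AME, relatively few
    # randomized submodel trainings are needed to recover their coefficients.
    lasso = LassoCV(cv=5).fit(X, y)
    return lasso.coef_   # one AME estimate per training data point
```

The design choice to regress jointly over all inclusion indicators, rather than estimating each point's marginal effect separately, is what lets sparsity reduce the number of submodels that must be trained.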
