
Representer Point Selection for Explaining Regularized High-dimensional Models
Che-Ping Tsai · Jiong Zhang · Hsiang-Fu Yu · Eli Chien · Cho-Jui Hsieh · Pradeep Ravikumar

Wed Jul 26 02:00 PM -- 03:30 PM (PDT) @ Exhibit Hall 1 #516
We introduce a novel class of sample-based explanations we term *high-dimensional representers*, which can be used to explain the predictions of a regularized high-dimensional model in terms of importance weights for each of the training samples. Our workhorse is a novel representer theorem for general regularized high-dimensional models, which decomposes the model prediction into contributions from each of the training samples, with positive (negative) values corresponding to training samples that positively (negatively) impact the model's prediction. We derive consequences for the canonical instances of $\ell_1$ regularized sparse models and nuclear norm regularized low-rank models. As a case study, we further investigate the application of low-rank models in the context of collaborative filtering, where we instantiate high-dimensional representers for specific popular classes of models. Finally, we study the empirical performance of our proposed methods on three real-world binary classification datasets and two recommender system datasets. We also showcase the utility of high-dimensional representers in explaining model recommendations.
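To make the decomposition concrete, the sketch below illustrates the *classical* (kernel/$\ell_2$) representer decomposition that the paper generalizes: for an $\ell_2$-regularized linear model at its optimum, the prediction on a test point splits exactly into one term per training sample. This is a hedged illustration of the general idea only; the paper's contribution is extending such decompositions to $\ell_1$ and nuclear-norm regularizers, which this snippet does not implement. All variable names and the toy data are illustrative assumptions.

```python
import numpy as np

# Classical representer decomposition for l2-regularized logistic
# regression (illustration of the general idea; NOT the paper's
# high-dimensional l1 / nuclear-norm version).

rng = np.random.default_rng(0)
n, d = 50, 5
X = rng.normal(size=(n, d))          # training inputs
y = rng.choice([-1.0, 1.0], size=n)  # binary labels
lam = 0.1                            # l2 regularization strength

# Fit theta by gradient descent on the regularized logistic loss.
theta = np.zeros(d)
for _ in range(2000):
    margins = y * (X @ theta)
    grad = -(X * (y / (1 + np.exp(margins)))[:, None]).mean(axis=0)
    theta -= 0.5 * (grad + lam * theta)

# Stationarity implies theta = sum_i alpha_i * x_i, where
# alpha_i = -(1 / (lam * n)) * dL_i/df is the per-sample weight.
margins = y * (X @ theta)
alpha = (y / (1 + np.exp(margins))) / (lam * n)

# Contribution of training sample i to a test prediction on x_t is
# alpha_i * <x_i, x_t>; positive values push the prediction up,
# negative values push it down.
x_t = rng.normal(size=d)
contributions = alpha * (X @ x_t)

# The per-sample contributions sum back to the raw model prediction.
print(float(contributions.sum()), float(theta @ x_t))
```

The key property being illustrated is exactness: summing the per-sample contributions recovers the model's prediction on the test point, so each term is a well-defined "share" of the prediction attributable to one training sample.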

Author Information

Che-Ping Tsai (Carnegie Mellon University)
Jiong Zhang (Amazon)
Hsiang-Fu Yu (Amazon)
Eli Chien (University of Illinois Urbana-Champaign)
Cho-Jui Hsieh (UCLA)
Pradeep Ravikumar (Carnegie Mellon University)
