We consider the online one-class collaborative filtering (CF) problem, which consists of recommending items to users over time in an online fashion based on positive ratings only. This problem arises when users respond only occasionally to a recommendation with a positive rating, and never with a negative one. We study the impact of the probability of a user responding to a recommendation, pf, on the sample complexity, and ask whether receiving positive and negative ratings, instead of positive ratings only, improves the sample complexity. Both questions arise in the design of recommender systems. We introduce a simple probabilistic user model and analyze the performance of an online user-based CF algorithm. We prove that after an initial cold start phase, in which recommendations are invested in exploring the user's preferences, this algorithm makes perfect recommendations, up to a small fraction of recommendations required for updating the user's preferences. The number of ratings required for the cold start phase is nearly proportional to 1/pf, whereas the number required for updating the user's preferences is essentially independent of pf. As a consequence, we find that receiving positive and negative ratings, instead of only positive ones, improves the number of ratings required for initial exploration by a factor of 1/pf, which can be significant.
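To make the one-class feedback setting concrete, the sketch below simulates users who respond to a recommendation with a positive rating with probability pf when they like the item, and otherwise give no feedback at all (never a negative rating). This is a minimal illustration under assumed names (pf, n_users, n_items, n_types) and a toy random-recommendation policy; it is not the user model or the algorithm analyzed in the paper.

# Minimal simulation sketch of a one-class response model (illustrative only).
import numpy as np

rng = np.random.default_rng(0)

n_users, n_items, n_types = 50, 200, 5
pf = 0.3  # probability that a user responds to a liked recommendation

# Each user belongs to a latent preference type; each type likes roughly half the items.
user_type = rng.integers(n_types, size=n_users)
likes = rng.random((n_types, n_items)) < 0.5  # likes[t, j]: does type t like item j?

# Observed feedback is one-class: only positive ratings are ever recorded.
positive = [set() for _ in range(n_users)]

def recommend(u):
    """Toy policy: recommend a uniformly random item not yet rated positively by user u."""
    candidates = [j for j in range(n_items) if j not in positive[u]]
    return rng.choice(candidates)

def user_response(u, j):
    """Positive rating with probability pf if the user likes the item; otherwise no feedback."""
    return likes[user_type[u], j] and rng.random() < pf

for t in range(5000):
    u = rng.integers(n_users)
    j = recommend(u)
    if user_response(u, j):
        positive[u].add(j)

print("average number of positive ratings per user:",
      sum(len(s) for s in positive) / n_users)

Running the loop longer, or increasing pf, increases the number of observed positive ratings per user, which is the quantity whose growth the paper's sample-complexity analysis characterizes far more precisely.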
Author Information
Reinhard Heckel (UC Berkeley)
Reinhard Heckel is a postdoctoral researcher in the Department of Electrical Engineering and Computer Sciences at the University of California, Berkeley. Before that, he spent a year in the Cognitive Computing & Computational Sciences Department at IBM Research, Zurich. Reinhard completed his Ph.D. in August 2014 at ETH Zurich, Department of Information Technology and Electrical Engineering, advised by Helmut Bölcskei. In Fall 2013, he was a visiting Ph.D. student in the Statistics Department of Stanford University. Reinhard is interested in various topics in machine learning, mathematical signal processing, sparse signal recovery, and computational biology.
Kannan Ramchandran (UC Berkeley)
Related Events (a corresponding poster, oral, or spotlight)
- 2017 Poster: The Sample Complexity of Online One-Class Collaborative Filtering »
  Mon. Aug 7th 08:30 AM -- 12:00 PM, Gallery #38
More from the Same Authors
- 2022 Poster: Neurotoxin: Durable Backdoors in Federated Learning »
  Zhengming Zhang · Ashwinee Panda · Linyue Song · Yaoqing Yang · Michael Mahoney · Prateek Mittal · Kannan Ramchandran · Joseph E Gonzalez
- 2022 Spotlight: Neurotoxin: Durable Backdoors in Federated Learning »
  Zhengming Zhang · Ashwinee Panda · Linyue Song · Yaoqing Yang · Michael Mahoney · Prateek Mittal · Kannan Ramchandran · Joseph E Gonzalez
- 2019 Poster: Defending Against Saddle Point Attack in Byzantine-Robust Distributed Learning »
  Dong Yin · Yudong Chen · Kannan Ramchandran · Peter Bartlett
- 2019 Oral: Defending Against Saddle Point Attack in Byzantine-Robust Distributed Learning »
  Dong Yin · Yudong Chen · Kannan Ramchandran · Peter Bartlett
- 2019 Poster: Rademacher Complexity for Adversarially Robust Generalization »
  Dong Yin · Kannan Ramchandran · Peter Bartlett
- 2019 Oral: Rademacher Complexity for Adversarially Robust Generalization »
  Dong Yin · Kannan Ramchandran · Peter Bartlett
- 2018 Poster: Byzantine-Robust Distributed Learning: Towards Optimal Statistical Rates »
  Dong Yin · Yudong Chen · Kannan Ramchandran · Peter Bartlett
- 2018 Oral: Byzantine-Robust Distributed Learning: Towards Optimal Statistical Rates »
  Dong Yin · Yudong Chen · Kannan Ramchandran · Peter Bartlett