Differentially Private Cross-Silo Recommendation from Implicit Feedback
Abstract
Cross-silo recommendation from implicit feedback is a key task in modern recommender systems, where user-item interaction data are distributed across multiple parties and cannot be centrally collected. Unlike explicit feedback, which provides fully observed real-valued ratings, implicit feedback is one-class and extremely sparse, recording only users' actions or inactions (e.g., clicks, visits, or bookmarks); yet it is far more prevalent in real-world applications. Such behavioral data are often highly sensitive, raising significant privacy concerns when used for collaborative model training. Although differential privacy (DP) has been widely applied to models trained on explicit feedback, the utility degradation it induces is especially severe in the implicit setting, making DP difficult to apply effectively to implicit feedback learning. In this work, we propose DPIMF, a differentially private implicit matrix factorization framework for cross-silo recommendation based on objective perturbation. To improve utility, we redesign the loss function and adopt an importance sampling scheme, both of which reduce the noise scale required for privacy preservation. We further provide formal utility guarantees for the proposed techniques and characterize the conditions under which the utility improvements are maximized. Experiments on three benchmark datasets validate our theoretical analysis and demonstrate that DPIMF achieves a better privacy-utility trade-off than state-of-the-art methods.