Obliviate: Efficient Unlearning in Recommender Systems
Abstract
Machine unlearning is becoming increasingly critical in the context of data privacy regulations, particularly for recommender systems that are trained directly on user interaction data. The goal of this work is to remove designated interactions and their downstream influence while preserving recommendation quality, and to do so without incurring the substantial computational cost of full retraining. Existing approaches exhibit inherent trade-offs, including limited unlearning completeness, poor scalability, degraded recommendation utility, or substantial computational and memory overhead. In this paper, we propose Obliviate, an efficient two-stage unlearning framework for recommender systems that achieves strong unlearning completeness while maintaining high utility. In the first stage, we introduce a Low-Rank Unlearning Adapter (LUA), which employs a lightweight Hessian proxy to enable curvature-aware, parameter-efficient unlearning through localized low-rank adapter modules. In the second stage, we present Locality-Aware Calibration (LAC), a lightweight refinement procedure that updates only the adapter parameters using a compact witness set, enforcing unlearning via ranking-based objectives while preserving utility through knowledge distillation. Extensive empirical evaluations demonstrate that Obliviate achieves strong forgetting with minimal loss in recommendation quality at significantly reduced computational cost, offering a practical and scalable solution for large-scale recommender systems.