Parsimonious Learning-Augmented Caching
Sungjin Im · Ravi Kumar · Aditya Petety · Manish Purohit

Thu Jul 21 07:45 AM -- 07:50 AM (PDT) @ Room 310

Learning-augmented algorithms have emerged as a framework to go beyond worst-case analysis by augmenting traditional algorithms with machine-learned predictions. The overarching goal is to design algorithms that perform near-optimally when the predictions are accurate, yet retain certain worst-case guarantees irrespective of the accuracy of the predictions. This framework has been successfully applied to online problems such as caching, since the predictions can be used to alleviate uncertainties. In this paper we introduce and study the setting in which the algorithm can utilize the predictions parsimoniously. We consider the caching problem, which has been extensively studied in the learning-augmented setting, and show that one can achieve quantitatively similar results while using only a sublinear number of predictions.
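As a concrete illustration of prediction-guided caching (a minimal sketch, not the paper's parsimonious algorithm), the snippet below evicts the page whose predicted next request is furthest in the future, a learning-augmented analogue of Belady's offline rule. The `predict_next` interface and the oracle helper are assumptions for this toy example; in the setting the paper studies, such predictions would be queried only sparingly.

```python
def run_cache(requests, cache_size, predict_next):
    """Simulate a cache of capacity cache_size; return the number of misses.

    requests: list of page ids, in request order.
    predict_next: hypothetical predictor (page, current_index) -> predicted
                  index of that page's next request (a model-supplied guess).
    """
    cache = set()
    misses = 0
    for t, page in enumerate(requests):
        if page in cache:
            continue  # hit
        misses += 1
        if len(cache) >= cache_size:
            # Evict the page predicted to be requested furthest in the future.
            victim = max(cache, key=lambda p: predict_next(p, t))
            cache.remove(victim)
        cache.add(page)
    return misses


def make_oracle(requests):
    """Perfect predictor: returns the true next-request index, so the policy
    above reduces to Belady's optimal offline eviction rule."""
    def predict_next(page, t):
        for i in range(t + 1, len(requests)):
            if requests[i] == page:
                return i
        return float("inf")  # never requested again
    return predict_next
```

With a perfect predictor this matches the offline optimum; with a noisy one, the learning-augmented framework asks that performance degrade gracefully while worst-case guarantees are preserved.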

Author Information

Sungjin Im (University of California, Merced)
Ravi Kumar (Google)
Aditya Petety (University of California, Merced)
Manish Purohit (Google Research)
