Towards Optimal Robustness in Learning-Augmented Paging
Peng Chen ⋅ Hailiang Zhao ⋅ Xueyan Tang ⋅ Yixuan Wang ⋅ Shuiguang Deng
Abstract
Learning-augmented paging has been extensively studied in recent years. A key advantage over naive ML-based approaches is \emph{bounded robustness}, which guarantees worst-case performance even when predictions are inaccurate, making these algorithms valuable for real-world systems. Prior work achieves robustness bounds of $2H_k + O(1)$ in the randomized setting, leaving a gap to the optimal competitive ratio $H_k$. We are the first to study how to close this gap. In this paper, we begin by analyzing online optimality and provide a new proof of the latest $H_k$-competitive algorithm, which facilitates analysis in the learning-augmented setting. Then, we review existing learning-augmented paging algorithms and introduce a unifying primitive, the \emph{relative prediction budget}, which captures the essence of how to establish robustness and reveals that prior algorithms either overuse or underutilize predictions. Guided by the above analysis, we develop a new framework that achieves the best-possible robustness for learning-augmented paging: $H_k + O(1)$. Experiments further demonstrate strong practical performance.
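For intuition about the bounds in the abstract, $H_k = \sum_{i=1}^{k} 1/i$ is the $k$-th harmonic number, the optimal competitive ratio for randomized paging with cache size $k$. The following sketch (illustrative only, not from the paper) computes $H_k$ and contrasts the prior robustness bound $2H_k$ with the optimal $H_k$, ignoring the $O(1)$ additive terms:

```python
def harmonic(k: int) -> float:
    """Return the harmonic number H_k = 1 + 1/2 + ... + 1/k."""
    return sum(1.0 / i for i in range(1, k + 1))

# Compare the prior robustness bound 2*H_k with the optimal H_k
# (additive O(1) terms omitted) for a few representative cache sizes.
for k in (16, 256, 4096):
    hk = harmonic(k)
    print(f"k={k:5d}  H_k={hk:.3f}  2H_k={2 * hk:.3f}")
```

Since $H_k \approx \ln k$, the prior bound roughly doubles the multiplicative overhead relative to the offline optimum, which is the gap the paper closes.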