Poster
Sample-Efficient Reinforcement Learning with loglog(T) Switching Cost
Dan Qiao · Ming Yin · Ming Min · Yu-Xiang Wang
We study the problem of reinforcement learning (RL) with low (policy) switching cost, a problem well motivated by real-life RL applications in which deploying a new policy is costly and the number of policy updates must be kept low. In this paper, we propose a new algorithm based on stage-wise exploration and adaptive policy elimination that achieves a regret of $\widetilde{O}(\sqrt{H^4S^2AT})$ while requiring a switching cost of only $O(HSA \log\log T)$. This is an exponential improvement over the best-known switching cost $O(H^2SA\log T)$ among existing methods with $\widetilde{O}(\mathrm{poly}(H,S,A)\sqrt{T})$ regret. Here, $S$ and $A$ denote the number of states and actions in an $H$-horizon episodic Markov Decision Process with unknown transitions, and $T$ is the number of steps. As a byproduct of our new techniques, we also derive a reward-free exploration algorithm with a switching cost of $O(HSA)$. Furthermore, we prove a pair of information-theoretic lower bounds showing that (1) any no-regret algorithm must have a switching cost of $\Omega(HSA)$, and (2) any $\widetilde{O}(\sqrt{T})$-regret algorithm must incur a switching cost of $\Omega(HSA\log\log T)$. Both our algorithms are thus optimal in their switching costs.
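The $\log\log T$ factor can be understood through a doubly exponential stage schedule: if stage $k$ ends at step $\lfloor T^{1-2^{-k}}\rfloor$, then $O(\log\log T)$ stages cover all $T$ steps, and policies only change at stage boundaries. The sketch below illustrates this counting argument only, under that assumed schedule; `stage_boundaries` is a hypothetical helper, not code from the paper, whose actual stage design and policy-elimination steps are in the full text.

```python
import math

def stage_boundaries(T: int) -> list[int]:
    """Illustrative doubly exponential stage schedule: stage k ends
    at floor(T ** (1 - 2**-k)).  Once 2**-K * log2(T) <= 1 we have
    T ** (1 - 2**-K) >= T / 2, so K = ceil(log2(log2(T))) stages
    (with the last stage forced to finish at T) suffice, i.e. the
    number of stage boundaries, and hence policy switches, is
    O(log log T)."""
    K = max(1, math.ceil(math.log2(math.log2(T))))
    ends = [math.floor(T ** (1 - 2.0 ** -k)) for k in range(1, K + 1)]
    ends[-1] = T  # the final stage runs to the horizon T
    return ends

if __name__ == "__main__":
    for T in (10**4, 10**6, 10**9):
        b = stage_boundaries(T)
        print(f"T={T:>10}: {len(b)} stages, boundaries={b}")
```

Within each stage the executed policies stay fixed, so if each boundary triggers at most $O(HSA)$ policy updates, the total switching cost is $O(HSA\log\log T)$, matching the stated bound.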
Author Information
Dan Qiao (UC Santa Barbara)
Ming Yin (UC Santa Barbara)
Ming Min (UC Santa Barbara)
Yu-Xiang Wang (UC Santa Barbara)
Related Events (a corresponding poster, oral, or spotlight)
- 2022 Spotlight: Sample-Efficient Reinforcement Learning with loglog(T) Switching Cost
  Thu. Jul 21st, 03:30 -- 03:35 PM, Ballroom 3 & 4
More from the Same Authors
- 2021: Optimal Uniform OPE and Model-based Offline Reinforcement Learning in Time-Homogeneous, Reward-Free and Task-Agnostic Settings
  Ming Yin · Yu-Xiang Wang
- 2021: Near-Optimal Offline Reinforcement Learning via Double Variance Reduction
  Ming Yin · Yu Bai · Yu-Xiang Wang
- 2022: Optimal Dynamic Regret in LQR Control
  Dheeraj Baby · Yu-Xiang Wang
- 2023: A Privacy-Friendly Approach to Data Valuation
  Jiachen Wang · Yuqing Zhu · Yu-Xiang Wang · Ruoxi Jia · Prateek Mittal
- 2023: Why Quantization Improves Generalization: NTK of Binary Weight Neural Network
  Kaiqi Zhang · Ming Yin · Yu-Xiang Wang
- 2023: Generative Autoencoders as Watermark Attackers: Analyses of Vulnerabilities and Threats
  Xuandong Zhao · Kexun Zhang · Yu-Xiang Wang · Lei Li
- 2023: Provable Robust Watermarking for AI-Generated Text
  Xuandong Zhao · Prabhanjan Ananth · Lei Li · Yu-Xiang Wang
- 2023 Poster: Offline Reinforcement Learning with Closed-Form Policy Improvement Operators
  Jiachen Li · Edwin Zhang · Ming Yin · Jerry Bai · Yu-Xiang Wang · William Wang
- 2023 Poster: Protecting Language Generation Models via Invisible Watermarking
  Xuandong Zhao · Yu-Xiang Wang · Lei Li
- 2023 Poster: Differentially Private Optimization on Large Model at Small Cost
  Zhiqi Bu · Yu-Xiang Wang · Sheng Zha · George Karypis
- 2023 Poster: Directed Chain Generative Adversarial Networks
  Ming Min · Ruimeng Hu · Tomoyuki Ichiba
- 2023 Poster: Non-stationary Reinforcement Learning under General Function Approximation
  Songtao Feng · Ming Yin · Ruiquan Huang · Yu-Xiang Wang · Jing Yang · Yingbin Liang
- 2023 Poster: Global Optimization with Parametric Function Approximation
  Chong Liu · Yu-Xiang Wang
- 2021 Poster: Signatured Deep Fictitious Play for Mean Field Games with Common Noise
  Ming Min · Ruimeng Hu
- 2021 Spotlight: Signatured Deep Fictitious Play for Mean Field Games with Common Noise
  Ming Min · Ruimeng Hu