A Regret Minimization Approach to Iterative Learning Control

Naman Agarwal · Elad Hazan · Anirudha Majumdar · Karan Singh

Wed 21 Jul 6:35 a.m. — 6:40 a.m. PDT

We consider the setting of iterative learning control, or model-based policy learning in the presence of uncertain, time-varying dynamics. In this setting, we propose a new performance metric, planning regret, which replaces the standard stochastic uncertainty assumptions with worst-case regret. Based on recent advances in non-stochastic control, we design a new iterative algorithm for minimizing planning regret that is more robust to model mismatch and uncertainty. We provide theoretical and empirical evidence that the proposed algorithm outperforms existing methods on several benchmarks.
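To make the iterative learning control (ILC) setting concrete, here is a minimal sketch of the classic P-type ILC update on a scalar linear system, where an open-loop input sequence is refined across repeated trials using the previous trial's tracking error. This is an illustrative example of the general ILC framework only, not the planning-regret algorithm proposed in the paper; the dynamics coefficients, horizon, and learning gain are all hypothetical choices.

```python
import numpy as np

# Assumed nominal scalar dynamics x_{t+1} = a*x_t + b*u_t (hypothetical values).
a, b = 0.9, 0.5
T = 20                                   # trial horizon
ref = np.sin(np.linspace(0, np.pi, T))   # reference trajectory to track
gain = 0.8 / b                           # learning gain; |1 - b*gain| < 1 gives contraction

def rollout(u):
    """Simulate one trial of the nominal dynamics; return the state trajectory."""
    x = np.zeros(T)
    for t in range(T - 1):
        x[t + 1] = a * x[t] + b * u[t]
    return x

u = np.zeros(T)      # initial open-loop input guess
errors = []
for trial in range(30):
    x = rollout(u)
    e = ref - x                    # tracking error observed on this trial
    errors.append(np.max(np.abs(e)))
    # Classic P-type ILC update: u[t] affects x[t+1], so feed the
    # time-shifted error forward into the next trial's input.
    u[:-1] += gain * e[1:]
```

Across trials the tracking error contracts because the trial-to-trial error map has spectral radius `|1 - b*gain| = 0.2` under the assumed dynamics; the paper's setting departs from this picture by dropping stochastic-noise assumptions and measuring performance via worst-case planning regret instead.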
