Oral
Making Deep Q-learning methods robust to time discretization
Corentin Tallec · Leonard Blier · Yann Ollivier

Tue Jun 11th 11:20 -- 11:25 AM @ Hall B

Despite remarkable successes, Deep Reinforcement Learning (DRL) is not robust to hyperparameterization, implementation details, or small environment changes (Henderson et al. 2017, Zhang et al. 2018). Overcoming such sensitivity is key to making DRL applicable to real world problems. In this paper, we identify sensitivity to time discretization in near continuous-time environments as a critical factor; this covers, e.g., changing the number of frames per second, or the action frequency of the controller. Empirically, we find that Q-learning-based approaches such as Deep Q-learning (Mnih et al., 2015) and Deep Deterministic Policy Gradient (Lillicrap et al., 2015) collapse with small time steps. Formally, we prove that Q-learning does not exist in continuous time. We detail a principled way to build an off-policy RL algorithm that yields similar performance over a wide range of time discretizations, and confirm this robustness empirically.
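A brief sketch of why the Q-function degenerates as the time step shrinks, based on the standard discounted-return recursion (the notation below — time step $\delta t$, reward rate $r$, discount $\gamma$ — is introduced here for illustration, not taken from the abstract):

```latex
% Bellman equation for a fixed policy \pi with time step \delta t:
% reward accrues at rate r(s,a) over one step, discounted by \gamma^{\delta t}.
Q^{\pi}_{\delta t}(s,a)
  \;=\; \mathbb{E}\!\left[\, r(s,a)\,\delta t \;+\; \gamma^{\delta t}\,
        Q^{\pi}_{\delta t}\!\big(s',\pi(s')\big) \,\right].

% As \delta t \to 0, the reward term vanishes and \gamma^{\delta t} \to 1, so
Q^{\pi}_{\delta t}(s,a) \;=\; V^{\pi}(s) \;+\; O(\delta t).
```

In the limit, $Q$ collapses onto the value function $V^{\pi}$ and carries no information about the action $a$, so a greedy step $\arg\max_a Q(s,a)$ becomes ill-defined — consistent with the abstract's claim that Q-learning does not exist in continuous time.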

Author Information

Corentin Tallec (Univ. Paris-Sud)
Leonard Blier (Université Paris Sud and Facebook)
Yann Ollivier (Facebook Artificial Intelligence Research)
