Poster
Exploration Conscious Reinforcement Learning Revisited
Lior Shani · Yonathan Efroni · Shie Mannor
The Exploration-Exploitation tradeoff arises in Reinforcement Learning when one cannot tell whether a policy is optimal: there is then a constant need to explore new actions instead of exploiting past experience. In practice, the tradeoff is commonly resolved by using a fixed exploration mechanism, such as $\epsilon$-greedy exploration or added Gaussian noise, while still trying to learn an optimal policy. In this work, we take a different approach and study exploration-conscious criteria that result in optimal policies with respect to the exploration mechanism. Solving these criteria, as we establish, amounts to solving a surrogate Markov Decision Process. We then analyze properties of exploration-conscious optimal policies and characterize two general approaches to solving such criteria. Building on these approaches, we apply simple changes to existing tabular and deep Reinforcement Learning algorithms and empirically demonstrate superior performance relative to their non-exploration-conscious counterparts, for both discrete and continuous action spaces.
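The abstract's idea of optimizing with respect to the exploration mechanism can be sketched in tabular Q-learning. The sketch below is an illustrative assumption, not the paper's implementation: it assumes $\epsilon$-greedy exploration and replaces the usual max-bootstrap with the value of the $\epsilon$-greedy mixture policy, i.e. the backup of a surrogate MDP. The toy chain environment, hyperparameters, and helper names (`backup_value`, `step`) are all hypothetical.

```python
import numpy as np

# Hypothetical 3-state chain MDP, used purely for illustration.
n_states, n_actions = 3, 2
eps, gamma, lr = 0.1, 0.9, 0.5
rng = np.random.default_rng(0)
Q = np.zeros((n_states, n_actions))

def backup_value(q_row, eps):
    """Value of the eps-greedy mixture policy at one state:
    with probability eps act uniformly, otherwise greedily."""
    return eps * q_row.mean() + (1 - eps) * q_row.max()

def step(s, a):
    # Toy deterministic dynamics: action 0 stays, action 1 advances;
    # reward 1 only when the last state is reached.
    s2 = s if a == 0 else min(s + 1, n_states - 1)
    r = 1.0 if s2 == n_states - 1 else 0.0
    return s2, r

s = 0
for _ in range(2000):
    # Behave with the same eps-greedy policy the criterion optimizes.
    a = int(rng.integers(n_actions)) if rng.random() < eps else int(Q[s].argmax())
    s2, r = step(s, a)
    # Exploration-conscious target: bootstrap with the eps-greedy
    # mixture value instead of the pure max (surrogate-MDP backup).
    target = r + gamma * backup_value(Q[s2], eps)
    Q[s, a] += lr * (target - Q[s, a])
    s = 0 if s2 == n_states - 1 else s2  # reset episode at the goal

print(int(Q[0].argmax()))
```

The learned greedy policy advances along the chain, but the Q-values it converges to account for the fact that the agent will keep exploring with probability $\epsilon$ at execution time.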
Author Information
Lior Shani (Technion)
Yonathan Efroni (Technion)
Shie Mannor (Technion)
Related Events (a corresponding poster, oral, or spotlight)
- 2019 Oral: Exploration Conscious Reinforcement Learning Revisited »
  Thu Jun 13th 12:10 -- 12:15 AM, Room 104
More from the Same Authors
- 2020 Poster: Optimistic Policy Optimization with Bandit Feedback »
  Lior Shani · Yonathan Efroni · Aviv Rosenberg · Shie Mannor
- 2020 Poster: Topic Modeling via Full Dependence Mixtures »
  Dan Fisher · Mark Kozdoba · Shie Mannor
- 2020 Poster: Multi-step Greedy Reinforcement Learning Algorithms »
  Manan Tomar · Yonathan Efroni · Mohammad Ghavamzadeh
- 2019 Poster: Action Robust Reinforcement Learning and Applications in Continuous Control »
  Chen Tessler · Yonathan Efroni · Shie Mannor
- 2019 Poster: The Natural Language of Actions »
  Guy Tennenholtz · Shie Mannor
- 2019 Oral: The Natural Language of Actions »
  Guy Tennenholtz · Shie Mannor
- 2019 Poster: Nonlinear Distributional Gradient Temporal-Difference Learning »
  Chao Qu · Shie Mannor · Huan Xu
- 2019 Oral: Action Robust Reinforcement Learning and Applications in Continuous Control »
  Chen Tessler · Yonathan Efroni · Shie Mannor
- 2019 Oral: Nonlinear Distributional Gradient Temporal-Difference Learning »
  Chao Qu · Shie Mannor · Huan Xu
- 2018 Poster: Beyond the One-Step Greedy Approach in Reinforcement Learning »
  Yonathan Efroni · Gal Dalal · Bruno Scherrer · Shie Mannor
- 2018 Oral: Beyond the One-Step Greedy Approach in Reinforcement Learning »
  Yonathan Efroni · Gal Dalal · Bruno Scherrer · Shie Mannor
- 2017 Workshop: Lifelong Learning: A Reinforcement Learning Approach »
  Sarath Chandar · Balaraman Ravindran · Daniel J. Mankowitz · Shie Mannor · Tom Zahavy
- 2017 Poster: Consistent On-Line Off-Policy Evaluation »
  Assaf Hallak · Shie Mannor
- 2017 Talk: Consistent On-Line Off-Policy Evaluation »
  Assaf Hallak · Shie Mannor
- 2017 Poster: End-to-End Differentiable Adversarial Imitation Learning »
  Nir Baram · Oron Anschel · Itai Caspi · Shie Mannor
- 2017 Poster: Multi-objective Bandits: Optimizing the Generalized Gini Index »
  Robert Busa-Fekete · Balazs Szorenyi · Paul Weng · Shie Mannor
- 2017 Talk: End-to-End Differentiable Adversarial Imitation Learning »
  Nir Baram · Oron Anschel · Itai Caspi · Shie Mannor
- 2017 Talk: Multi-objective Bandits: Optimizing the Generalized Gini Index »
  Robert Busa-Fekete · Balazs Szorenyi · Paul Weng · Shie Mannor