Classical value iteration approaches are not applicable to environments with continuous states and actions. For such environments, the states and actions must be discretized, which leads to an exponential increase in computational complexity. In this paper, we propose continuous fitted value iteration (cFVI). This algorithm enables dynamic programming for continuous states and actions with a known dynamics model. Exploiting the continuous-time formulation, the optimal policy can be derived in closed form for non-linear control-affine dynamics. This closed-form solution enables the efficient extension of value iteration to continuous environments. We show in non-linear control experiments that the dynamic programming solution obtains the same quantitative performance as deep reinforcement learning methods in simulation but excels when transferred to the physical system. The policy obtained by cFVI is more robust to changes in the dynamics, despite using only a deterministic model and without explicitly incorporating robustness in the optimization.
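The core idea can be illustrated with a minimal sketch: for control-affine dynamics ẋ = a(x) + B(x)u and a quadratic action cost ½uᵀRu, the greedy action is available in closed form as u* = R⁻¹B(x)ᵀ∇V(x), so each fitted value iteration sweep needs no inner optimization over actions. The sketch below is illustrative only, not the paper's implementation: it uses a 1D double integrator, hand-picked quadratic value features fitted by least squares (the paper uses deep networks), and arbitrary cost weights.

```python
import numpy as np

dt, gamma = 0.1, 0.99
R = np.array([[1.0]])                  # illustrative action-cost weight

def dynamics(x):                       # control-affine terms a(x), B(x)
    a = np.array([x[1], 0.0])          # drift: position integrates velocity
    B = np.array([[0.0], [1.0]])       # control enters the acceleration only
    return a, B

def features(x):                       # quadratic value-function features
    p, v = x
    return np.array([p * p, p * v, v * v, 1.0])

def grad_features(x):                  # analytic Jacobian d(phi)/dx
    p, v = x
    return np.array([[2 * p, 0.0], [v, p], [0.0, 2 * v], [0.0, 0.0]])

def vi_sweep(w, states):
    """One fitted VI sweep: closed-form greedy action, TD target, LS fit."""
    Phi, targets = [], []
    for x in states:
        a, B = dynamics(x)
        dV = grad_features(x).T @ w                    # value gradient at x
        u = np.linalg.solve(R, B.T @ dV)               # closed-form action
        x_next = x + dt * (a + (B @ u).ravel())        # Euler step
        r = -dt * (x @ x + 0.5 * float(u @ R @ u))     # state + action cost
        Phi.append(features(x))
        targets.append(r + gamma * features(x_next) @ w)
    return np.linalg.lstsq(np.array(Phi), np.array(targets), rcond=None)[0]

rng = np.random.default_rng(0)
states = rng.uniform(-2.0, 2.0, size=(256, 2))
w = np.zeros(4)
for _ in range(200):
    w = vi_sweep(w, states)
```

Because the maximizing action is analytic, each sweep reduces to a regression on Bellman targets; this is what makes the extension of value iteration to continuous action spaces tractable.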
Author Information
Michael Lutter (Technical University of Darmstadt)
Shie Mannor (Technion)
Jan Peters (TU Darmstadt)
Dieter Fox (NVIDIA)
Animesh Garg (University of Toronto, Vector Institute, Nvidia)
Related Events (a corresponding poster, oral, or spotlight)
-
2021 Spotlight: Value Iteration in Continuous Actions, States and Time »
Wed. Jul 21st 02:45 -- 02:50 PM
More from the Same Authors
-
2020 : A Differentiable Newton Euler Algorithm for Multi-body Model Learning »
Michael Lutter -
2021 : Auditing AI models for Verified Deployment under Semantic Specifications »
Homanga Bharadhwaj · De-An Huang · Chaowei Xiao · Anima Anandkumar · Animesh Garg -
2021 : Optimistic Exploration with Backward Bootstrapped Bonus for Deep Reinforcement Learning »
Chenjia Bai · Lingxiao Wang · Lei Han · Jianye Hao · Animesh Garg · Peng Liu · Zhaoran Wang -
2021 : Convergence and Optimality of Policy Gradient Methods in Weakly Smooth Settings »
Shunshi Zhang · Murat Erdogdu · Animesh Garg -
2021 : Exploration via Empowerment Gain: Combining Novelty, Surprise and Learning Progress »
Philip Becker-Ehmck · Maximilian Karl · Jan Peters · Patrick van der Smagt -
2021 : Learning by Watching: Physical Imitation of Manipulation Skills from Human Videos »
Haoyu Xiong · Yun-Chun Chen · Homanga Bharadhwaj · Samrath Sinha · Animesh Garg -
2022 : VIPer: Iterative Value-Aware Model Learning on the Value Improvement Path »
Romina Abachi · Claas Voelcker · Animesh Garg · Amir-massoud Farahmand -
2022 : MoCoDA: Model-based Counterfactual Data Augmentation »
Silviu Pitis · Elliot Creager · Ajay Mandlekar · Animesh Garg -
2023 : Optimization or Architecture: What Matters in Non-Linear Filtering? »
Ido Greenberg · Netanel Yannay · Shie Mannor -
2023 : Parameterized projected Bellman operator »
Théo Vincent · Alberto Maria Metelli · Jan Peters · Marcello Restelli · Carlo D'Eramo -
2023 Poster: Learning to Initiate and Reason in Event-Driven Cascading Processes »
Yuval Atzmon · Eli Meirom · Shie Mannor · Gal Chechik -
2023 Poster: Learning Hidden Markov Models When the Locations of Missing Observations are Unknown »
BINYAMIN PERETS · Mark Kozdoba · Shie Mannor -
2023 Poster: PPG Reloaded: An Empirical Study on What Matters in Phasic Policy Gradient »
Kaixin Wang · Zhou Daquan · Jiashi Feng · Shie Mannor -
2023 Poster: Representation-Driven Reinforcement Learning »
Ofir Nabati · Guy Tennenholtz · Shie Mannor -
2023 Poster: Reward-Mixing MDPs with Few Latent Contexts are Learnable »
Jeongyeol Kwon · Yonathan Efroni · Constantine Caramanis · Shie Mannor -
2022 Poster: Koopman Q-learning: Offline Reinforcement Learning via Symmetries of Dynamics »
Matthias Weissenbacher · Samrath Sinha · Animesh Garg · Yoshinobu Kawahara -
2022 Spotlight: Koopman Q-learning: Offline Reinforcement Learning via Symmetries of Dynamics »
Matthias Weissenbacher · Samrath Sinha · Animesh Garg · Yoshinobu Kawahara -
2022 Poster: Curriculum Reinforcement Learning via Constrained Optimal Transport »
Pascal Klink · Haoyi Yang · Carlo D'Eramo · Jan Peters · Joni Pajarinen -
2022 Poster: Analysis of Stochastic Processes through Replay Buffers »
Shirli Di-Castro Shashua · Shie Mannor · Dotan Di Castro -
2022 Poster: Actor-Critic based Improper Reinforcement Learning »
Mohammadi Zaki · Avi Mohan · Aditya Gopalan · Shie Mannor -
2022 Poster: Optimizing Tensor Network Contraction Using Reinforcement Learning »
Eli Meirom · Haggai Maron · Shie Mannor · Gal Chechik -
2022 Poster: The Geometry of Robust Value Functions »
Kaixin Wang · Navdeep Kumar · Kuangqi Zhou · Bryan Hooi · Jiashi Feng · Shie Mannor -
2022 Spotlight: Curriculum Reinforcement Learning via Constrained Optimal Transport »
Pascal Klink · Haoyi Yang · Carlo D'Eramo · Jan Peters · Joni Pajarinen -
2022 Spotlight: The Geometry of Robust Value Functions »
Kaixin Wang · Navdeep Kumar · Kuangqi Zhou · Bryan Hooi · Jiashi Feng · Shie Mannor -
2022 Spotlight: Actor-Critic based Improper Reinforcement Learning »
Mohammadi Zaki · Avi Mohan · Aditya Gopalan · Shie Mannor -
2022 Spotlight: Analysis of Stochastic Processes through Replay Buffers »
Shirli Di-Castro Shashua · Shie Mannor · Dotan Di Castro -
2022 Spotlight: Optimizing Tensor Network Contraction Using Reinforcement Learning »
Eli Meirom · Haggai Maron · Shie Mannor · Gal Chechik -
2022 Poster: Coordinated Attacks against Contextual Bandits: Fundamental Limits and Defense Mechanisms »
Jeongyeol Kwon · Yonathan Efroni · Constantine Caramanis · Shie Mannor -
2022 Spotlight: Coordinated Attacks against Contextual Bandits: Fundamental Limits and Defense Mechanisms »
Jeongyeol Kwon · Yonathan Efroni · Constantine Caramanis · Shie Mannor -
2021 : Invited Speaker: Shie Mannor: Lenient Regret »
Shie Mannor -
2021 : RL + Operations Research Panel »
Jim Dai · Fei Fang · Shie Mannor · Yuandong Tian · Zhiwei (Tony) Qin · Zongqing Lu -
2021 : RL + Robotics Panel »
George Konidaris · Jan Peters · Martin Riedmiller · Angela Schoellig · Rose Yu · Rupam Mahmood -
2021 Poster: Detecting Rewards Deterioration in Episodic Reinforcement Learning »
Ido Greenberg · Shie Mannor -
2021 Poster: Online Limited Memory Neural-Linear Bandits with Likelihood Matching »
Ofir Nabati · Tom Zahavy · Shie Mannor -
2021 Spotlight: Online Limited Memory Neural-Linear Bandits with Likelihood Matching »
Ofir Nabati · Tom Zahavy · Shie Mannor -
2021 Spotlight: Detecting Rewards Deterioration in Episodic Reinforcement Learning »
Ido Greenberg · Shie Mannor -
2021 Poster: Confidence-Budget Matching for Sequential Budgeted Learning »
Yonathan Efroni · Nadav Merlis · Aadirupa Saha · Shie Mannor -
2021 Spotlight: Confidence-Budget Matching for Sequential Budgeted Learning »
Yonathan Efroni · Nadav Merlis · Aadirupa Saha · Shie Mannor -
2021 Poster: Principled Exploration via Optimistic Bootstrapping and Backward Induction »
Chenjia Bai · Lingxiao Wang · Lei Han · Jianye Hao · Animesh Garg · Peng Liu · Zhaoran Wang -
2021 Spotlight: Principled Exploration via Optimistic Bootstrapping and Backward Induction »
Chenjia Bai · Lingxiao Wang · Lei Han · Jianye Hao · Animesh Garg · Peng Liu · Zhaoran Wang -
2021 Poster: Convex Regularization in Monte-Carlo Tree Search »
Tuan Q Dam · Carlo D'Eramo · Jan Peters · Joni Pajarinen -
2021 Spotlight: Convex Regularization in Monte-Carlo Tree Search »
Tuan Q Dam · Carlo D'Eramo · Jan Peters · Joni Pajarinen -
2021 Poster: Tesseract: Tensorised Actors for Multi-Agent Reinforcement Learning »
Anuj Mahajan · Mikayel Samvelyan · Lei Mao · Viktor Makoviychuk · Animesh Garg · Jean Kossaifi · Shimon Whiteson · Yuke Zhu · Anima Anandkumar -
2021 Poster: Coach-Player Multi-agent Reinforcement Learning for Dynamic Team Composition »
Bo Liu · Qiang Liu · Peter Stone · Animesh Garg · Yuke Zhu · Anima Anandkumar -
2021 Poster: Controlling Graph Dynamics with Reinforcement Learning and Graph Neural Networks »
Eli Meirom · Haggai Maron · Shie Mannor · Gal Chechik -
2021 Spotlight: Tesseract: Tensorised Actors for Multi-Agent Reinforcement Learning »
Anuj Mahajan · Mikayel Samvelyan · Lei Mao · Viktor Makoviychuk · Animesh Garg · Jean Kossaifi · Shimon Whiteson · Yuke Zhu · Anima Anandkumar -
2021 Oral: Coach-Player Multi-agent Reinforcement Learning for Dynamic Team Composition »
Bo Liu · Qiang Liu · Peter Stone · Animesh Garg · Yuke Zhu · Anima Anandkumar -
2021 Spotlight: Controlling Graph Dynamics with Reinforcement Learning and Graph Neural Networks »
Eli Meirom · Haggai Maron · Shie Mannor · Gal Chechik -
2020 Poster: Optimistic Policy Optimization with Bandit Feedback »
Lior Shani · Yonathan Efroni · Aviv Rosenberg · Shie Mannor -
2020 Poster: Topic Modeling via Full Dependence Mixtures »
Dan Fisher · Mark Kozdoba · Shie Mannor -
2020 Poster: Semi-Supervised StyleGAN for Disentanglement Learning »
Weili Nie · Tero Karras · Animesh Garg · Shoubhik Debnath · Anjul Patney · Ankit Patel · Anima Anandkumar -
2020 Poster: Angular Visual Hardness »
Beidi Chen · Weiyang Liu · Zhiding Yu · Jan Kautz · Anshumali Shrivastava · Animesh Garg · Anima Anandkumar -
2019 Poster: Projections for Approximate Policy Iteration Algorithms »
Riad Akrour · Joni Pajarinen · Jan Peters · Gerhard Neumann -
2019 Oral: Projections for Approximate Policy Iteration Algorithms »
Riad Akrour · Joni Pajarinen · Jan Peters · Gerhard Neumann -
2019 Poster: Exploration Conscious Reinforcement Learning Revisited »
Lior Shani · Yonathan Efroni · Shie Mannor -
2019 Poster: Action Robust Reinforcement Learning and Applications in Continuous Control »
Chen Tessler · Yonathan Efroni · Shie Mannor -
2019 Poster: The Natural Language of Actions »
Guy Tennenholtz · Shie Mannor -
2019 Oral: Exploration Conscious Reinforcement Learning Revisited »
Lior Shani · Yonathan Efroni · Shie Mannor -
2019 Oral: The Natural Language of Actions »
Guy Tennenholtz · Shie Mannor -
2019 Poster: Nonlinear Distributional Gradient Temporal-Difference Learning »
Chao Qu · Shie Mannor · Huan Xu -
2019 Oral: Action Robust Reinforcement Learning and Applications in Continuous Control »
Chen Tessler · Yonathan Efroni · Shie Mannor -
2019 Oral: Nonlinear Distributional Gradient Temporal-Difference Learning »
Chao Qu · Shie Mannor · Huan Xu -
2018 Poster: PIPPS: Flexible Model-Based Policy Search Robust to the Curse of Chaos »
Paavo Parmas · Carl E Rasmussen · Jan Peters · Kenji Doya -
2018 Poster: Beyond the One-Step Greedy Approach in Reinforcement Learning »
Yonathan Efroni · Gal Dalal · Bruno Scherrer · Shie Mannor -
2018 Oral: PIPPS: Flexible Model-Based Policy Search Robust to the Curse of Chaos »
Paavo Parmas · Carl E Rasmussen · Jan Peters · Kenji Doya -
2018 Oral: Beyond the One-Step Greedy Approach in Reinforcement Learning »
Yonathan Efroni · Gal Dalal · Bruno Scherrer · Shie Mannor -
2017 Workshop: Lifelong Learning: A Reinforcement Learning Approach »
Sarath Chandar · Balaraman Ravindran · Daniel J. Mankowitz · Shie Mannor · Tom Zahavy -
2017 Poster: Consistent On-Line Off-Policy Evaluation »
Assaf Hallak · Shie Mannor -
2017 Talk: Consistent On-Line Off-Policy Evaluation »
Assaf Hallak · Shie Mannor -
2017 Poster: End-to-End Differentiable Adversarial Imitation Learning »
Nir Baram · Oron Anschel · Itai Caspi · Shie Mannor -
2017 Poster: Multi-objective Bandits: Optimizing the Generalized Gini Index »
Robert Busa-Fekete · Balazs Szorenyi · Paul Weng · Shie Mannor -
2017 Talk: End-to-End Differentiable Adversarial Imitation Learning »
Nir Baram · Oron Anschel · Itai Caspi · Shie Mannor -
2017 Talk: Multi-objective Bandits: Optimizing the Generalized Gini Index »
Robert Busa-Fekete · Balazs Szorenyi · Paul Weng · Shie Mannor