Inverse reinforcement learning (IRL) aims to recover the reward function and the associated optimal policy that best explain observed sequences of states and actions implemented by an expert. Many IRL algorithms have an inherently nested structure: the inner loop finds the optimal policy given parametrized rewards, while the outer loop updates the estimates towards optimizing a measure of fit. For high-dimensional environments, this nested-loop structure entails a significant computational burden. To reduce this burden, novel methods such as SQIL [1] and IQ-Learn [2] emphasize policy estimation at the expense of reward estimation accuracy. However, without accurately estimated rewards, it is not possible to perform counterfactual analysis, such as predicting the optimal policy under different environment dynamics or learning new tasks. In this paper, we develop a novel single-loop algorithm for IRL that does not compromise reward estimation accuracy. In the proposed algorithm, each policy improvement step is followed by a stochastic gradient step for likelihood maximization. We show that the proposed algorithm provably converges to a stationary solution with a finite-time guarantee. When the reward is parameterized linearly, we show that the identified solution corresponds to the solution of the maximum entropy IRL problem. Finally, on robotic control problems in MuJoCo and their transfer settings, the proposed algorithm achieves superior performance compared with other IRL and imitation learning benchmarks.
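To make the single-loop structure concrete, the sketch below shows one possible instantiation in a tabular MDP with a linearly parameterized reward. This is an illustrative sketch, not the paper's implementation: the names (`single_loop_irl`, `phi`, `P`, `expert_sa`) are hypothetical, and the likelihood gradient is a simplified stochastic approximation of the maximum entropy IRL gradient (expert features minus the current policy's expected features; the exact gradient involves discounted occupancy measures).

```python
import numpy as np

def soft_policy(Q, alpha=1.0):
    """Boltzmann (maximum-entropy) policy derived from a soft Q-function."""
    z = (Q - Q.max(axis=1, keepdims=True)) / alpha   # shift for numerical stability
    pi = np.exp(z)
    return pi / pi.sum(axis=1, keepdims=True)

def single_loop_irl(phi, P, expert_sa, gamma=0.99, alpha=1.0,
                    lr=0.1, iters=500, seed=0):
    """Hypothetical single-loop IRL sketch: one soft Bellman backup
    followed by one stochastic gradient step on the expert likelihood.

    phi       : (S, A, d) feature tensor; reward is r(s, a) = phi[s, a] @ theta
    P         : (S, A, S) transition probabilities
    expert_sa : array of expert (state, action) pairs
    """
    rng = np.random.default_rng(seed)
    S, A, d = phi.shape
    theta = np.zeros(d)          # linear reward parameters
    Q = np.zeros((S, A))         # running soft Q estimate

    for _ in range(iters):
        r = phi @ theta                                  # (S, A) reward under theta

        # Policy improvement: a single soft Bellman backup (not solved to convergence).
        m = Q.max(axis=1)
        V = m + alpha * np.log(np.exp((Q - m[:, None]) / alpha).sum(axis=1))
        Q = r + gamma * P @ V                            # (S, A, S) @ (S,) -> (S, A)
        pi = soft_policy(Q, alpha)

        # Reward update: one stochastic gradient step on the expert log-likelihood.
        s, a = expert_sa[rng.integers(len(expert_sa))]
        grad = phi[s, a] - pi[s] @ phi[s]                # simplified max-ent gradient
        theta += lr * grad

    return theta, soft_policy(Q, alpha)
```

The point of the sketch is that the soft Bellman backup and the reward gradient step each run exactly once per iteration, so neither estimation problem is nested inside the other, in contrast to the nested-loop methods described above.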
Author Information
Siliang Zeng (University of Minnesota, Twin Cities)
Chenliang Li (The Chinese University of Hong Kong)
Alfredo Garcia (Texas A&M University)
Mingyi Hong (University of Minnesota)
More from the Same Authors
- 2021: Understanding Clipped FedAvg: Convergence and Client-Level Differential Privacy
  xinwei zhang · Xiangyi Chen · Steven Wu · Mingyi Hong
- 2023: Robust Inverse Reinforcement Learning Through Bayesian Theory of Mind
  Ran Wei · Siliang Zeng · Chenliang Li · Alfredo Garcia · Anthony McDonald · Mingyi Hong
- 2023 Poster: Linearly Constrained Bilevel Optimization: A Smoothed Implicit Gradient Approach
  Prashant Khanduri · Ioannis Tsaknakis · Yihua Zhang · Jia Liu · Sijia Liu · Jiawei Zhang · Mingyi Hong
- 2023 Poster: Understanding Backdoor Attacks through the Adaptability Hypothesis
  Xun Xian · Ganghua Wang · Jayanth Srinivasa · Ashish Kundu · Xuan Bi · Mingyi Hong · Jie Ding
- 2023 Poster: FedAvg Converges to Zero Training Loss Linearly for Overparameterized Multi-Layer Neural Networks
  Bingqing Song · Prashant Khanduri · xinwei zhang · Jinfeng Yi · Mingyi Hong
- 2022 Poster: A Stochastic Multi-Rate Control Framework For Modeling Distributed Optimization Algorithms
  xinwei zhang · Mingyi Hong · Sairaj Dhople · Nicola Elia
- 2022 Spotlight: A Stochastic Multi-Rate Control Framework For Modeling Distributed Optimization Algorithms
  xinwei zhang · Mingyi Hong · Sairaj Dhople · Nicola Elia
- 2022 Poster: Understanding Clipping for Federated Learning: Convergence and Client-Level Differential Privacy
  xinwei zhang · Xiangyi Chen · Mingyi Hong · Steven Wu · Jinfeng Yi
- 2022 Poster: Revisiting and Advancing Fast Adversarial Training Through The Lens of Bi-Level Optimization
  Yihua Zhang · Guanhua Zhang · Prashant Khanduri · Mingyi Hong · Shiyu Chang · Sijia Liu
- 2022 Spotlight: Revisiting and Advancing Fast Adversarial Training Through The Lens of Bi-Level Optimization
  Yihua Zhang · Guanhua Zhang · Prashant Khanduri · Mingyi Hong · Shiyu Chang · Sijia Liu
- 2022 Spotlight: Understanding Clipping for Federated Learning: Convergence and Client-Level Differential Privacy
  xinwei zhang · Xiangyi Chen · Mingyi Hong · Steven Wu · Jinfeng Yi
- 2021 Spotlight: Decentralized Riemannian Gradient Descent on the Stiefel Manifold
  Shixiang Chen · Alfredo Garcia · Mingyi Hong · Shahin Shahrampour
- 2021 Poster: Decentralized Riemannian Gradient Descent on the Stiefel Manifold
  Shixiang Chen · Alfredo Garcia · Mingyi Hong · Shahin Shahrampour
- 2020 Poster: Improving the Sample and Communication Complexity for Decentralized Non-Convex Optimization: Joint Gradient Estimation and Tracking
  Haoran Sun · Songtao Lu · Mingyi Hong
- 2020 Poster: Min-Max Optimization without Gradients: Convergence and Applications to Black-Box Evasion and Poisoning Attacks
  Sijia Liu · Songtao Lu · Xiangyi Chen · Yao Feng · Kaidi Xu · Abdullah Al-Dujaili · Mingyi Hong · Una-May O'Reilly
- 2019 Poster: PA-GD: On the Convergence of Perturbed Alternating Gradient Descent to Second-Order Stationary Points for Structured Nonconvex Optimization
  Songtao Lu · Mingyi Hong · Zhengdao Wang
- 2019 Oral: PA-GD: On the Convergence of Perturbed Alternating Gradient Descent to Second-Order Stationary Points for Structured Nonconvex Optimization
  Songtao Lu · Mingyi Hong · Zhengdao Wang
- 2018 Poster: Gradient Primal-Dual Algorithm Converges to Second-Order Stationary Solution for Nonconvex Distributed Optimization Over Networks
  Mingyi Hong · Meisam Razaviyayn · Jason Lee
- 2018 Oral: Gradient Primal-Dual Algorithm Converges to Second-Order Stationary Solution for Nonconvex Distributed Optimization Over Networks
  Mingyi Hong · Meisam Razaviyayn · Jason Lee