Actor-Critic based Improper Reinforcement Learning
Mohammadi Zaki · Avi Mohan · Aditya Gopalan · Shie Mannor

Wed Jul 20 03:30 PM -- 05:30 PM (PDT) @ Hall E #809
We consider an improper reinforcement learning setting where a learner is given $M$ base controllers for an unknown Markov decision process, and wishes to combine them optimally to produce a potentially new controller that can outperform each of the base ones. This can be useful in tuning across controllers, possibly learnt in mismatched or simulated environments, to obtain a good controller for a given target environment with relatively few trials. Towards this, we propose two algorithms: (1) a Policy Gradient-based approach; and (2) an algorithm that can switch between a simple Actor-Critic (AC) based scheme and a Natural Actor-Critic (NAC) scheme, depending on the available information. Both algorithms operate over a class of improper mixtures of the given controllers. For the first algorithm, we derive convergence rate guarantees assuming access to a gradient oracle. For the AC-based approach, we provide convergence rate guarantees to a stationary point in the basic AC case and to a global optimum in the NAC case. Numerical results on (i) the standard control-theoretic benchmark of stabilizing an inverted pendulum and (ii) a constrained queueing task show that our improper policy optimization algorithm can stabilize the system even when the base policies at its disposal are unstable.
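To make the "improper mixture" idea concrete, here is a minimal sketch (not the paper's algorithm) of a policy that holds softmax mixture weights over $M$ base controllers and improves those weights with a score-function (REINFORCE-style) update. The class name `ImproperMixturePolicy`, the assumption that each base controller is a callable mapping state to action, and the specific update rule are all illustrative choices, not taken from the paper:

```python
import numpy as np


def softmax(theta):
    """Numerically stable softmax over mixture logits."""
    z = np.exp(theta - theta.max())
    return z / z.sum()


class ImproperMixturePolicy:
    """Softmax mixture over M base controllers (illustrative sketch).

    Each base controller is assumed to be a callable state -> action.
    Only the M mixture logits are learned; the base controllers are fixed.
    """

    def __init__(self, base_controllers, seed=0):
        self.base = base_controllers
        self.theta = np.zeros(len(base_controllers))  # mixture logits
        self.rng = np.random.default_rng(seed)

    def act(self, state):
        # Sample a base controller index from the softmax mixture,
        # then act with that controller.
        probs = softmax(self.theta)
        k = self.rng.choice(len(self.base), p=probs)
        return self.base[k](state), k

    def reinforce_update(self, k, advantage, lr=0.1):
        # Score-function gradient for the sampled component k:
        # grad_theta log pi(k) = e_k - softmax(theta).
        grad = -softmax(self.theta)
        grad[k] += 1.0
        self.theta += lr * advantage * grad
```

Repeatedly rewarding one component shifts the mixture mass toward it, so a consistently better base controller comes to dominate; the mixture can also place intermediate weights that no single base controller realizes, which is what makes the resulting policy "improper" relative to the given set.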

Author Information

Mohammadi Zaki (Indian Institute of Science Bangalore)
Avi Mohan (Boston University)
Aditya Gopalan (Indian Institute of Science (IISc))
Shie Mannor (Technion)
