Oral
Provably Efficient Maximum Entropy Exploration
Elad Hazan · Sham Kakade · Karan Singh · Abby Van Soest

Wed Jun 12th 04:35 -- 04:40 PM @ Room 104

Suppose an agent is in a (possibly unknown) Markov Decision Process in the absence of a reward signal: what might we hope that the agent can efficiently learn to do? One natural, intrinsically defined objective is for the agent to learn a policy that induces a distribution over the state space that is as uniform as possible, as measured in an entropic sense. We provide an efficient algorithm to construct such a maximum-entropy exploratory policy when given access to a black-box planning oracle (which is robust to function approximation). Furthermore, when restricted to the tabular setting, where we have sample-based access to the MDP, our proposed algorithm is provably efficient in terms of both sample size and computational complexity. Key to our algorithmic methodology is the conditional gradient method (a.k.a. the Frank-Wolfe algorithm), which invokes an approximate MDP solver at each iteration.
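
A minimal tabular sketch of this Frank-Wolfe scheme may help fix ideas; it is not the authors' code. It assumes exact access to the transition tensor P and uses finite-horizon value iteration as a stand-in for the black-box planning oracle; the helper names (state_distribution, planning_oracle, max_ent_frank_wolfe) are hypothetical.

```python
import numpy as np

def state_distribution(P, pi, horizon, mu0):
    """Average state-visitation distribution of policy pi over `horizon` steps.
    P: transitions, shape (S, A, S); pi: policy, shape (S, A); mu0: initial
    state distribution, shape (S,)."""
    d = np.zeros_like(mu0)
    dist = mu0.copy()
    for _ in range(horizon):
        d += dist
        # One-step state kernel under pi: P_pi[s, s'] = sum_a pi[s, a] P[s, a, s'].
        P_pi = np.einsum('sa,sap->sp', pi, P)
        dist = dist @ P_pi
    return d / horizon

def planning_oracle(P, reward, horizon):
    """Finite-horizon value iteration: returns a deterministic policy that
    (approximately) maximizes the given per-state reward. Stands in for the
    paper's black-box approximate planner."""
    S, A, _ = P.shape
    V = np.zeros(S)
    for _ in range(horizon):
        Q = reward[:, None] + P @ V          # Q[s, a]
        V = Q.max(axis=1)
    pi = np.zeros((S, A))
    pi[np.arange(S), Q.argmax(axis=1)] = 1.0
    return pi

def max_ent_frank_wolfe(P, mu0, horizon, iters=50, eps=1e-8):
    """Frank-Wolfe loop for maximum-entropy exploration: returns a mixture of
    policies whose induced state distribution approaches maximum entropy."""
    S, A, _ = P.shape
    policies = [np.full((S, A), 1.0 / A)]    # start from the uniform policy
    weights = [1.0]
    for k in range(iters):
        # State distribution induced by the current policy mixture.
        d = sum(w * state_distribution(P, pi, horizon, mu0)
                for w, pi in zip(weights, policies))
        # Gradient of the entropy H(d) = -sum_s d(s) log d(s), used as a reward.
        reward = -np.log(d + eps) - 1.0
        pi_new = planning_oracle(P, reward, horizon)
        eta = 2.0 / (k + 2)                   # standard Frank-Wolfe step size
        weights = [w * (1 - eta) for w in weights] + [eta]
        policies.append(pi_new)
    return policies, weights
```

In this sketch the state distribution and entropy gradient are computed exactly; the paper's tabular analysis replaces them with sample-based estimates while retaining provable efficiency guarantees.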

Author Information

Elad Hazan (Princeton University)
Sham Kakade (University of Washington)
Karan Singh (Princeton University)
Abby Van Soest (Princeton University)
