Bad-Policy Density: A Measure of Reinforcement-Learning Hardness
David Abel · Cameron Allen · Dilip Arumugam · D Ellis Hershkowitz · Michael L. Littman · Lawson Wong

Reinforcement learning is hard in general. Yet, in many specific environments, learning is easy. What makes learning easy in one environment but difficult in another? We address this question by proposing a simple measure of reinforcement-learning hardness called the bad-policy density. This quantity measures the fraction of the deterministic stationary policy space whose value falls below a desired threshold. We prove that this simple quantity has many properties one would expect of a measure of learning hardness. Further, we prove that computing the measure is NP-hard in general, but identify paths to polynomial-time approximation. We conclude by summarizing potential directions and uses for this measure.
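The definition above can be made concrete with a small sketch. The code below is illustrative only, not from the paper: it enumerates all deterministic stationary policies of a toy two-state MDP, evaluates each by solving the linear policy-evaluation system, and reports the fraction whose start-state value falls below the threshold. The function name, the toy MDP, and the choice of start state are all assumptions made for illustration.

```python
import itertools
import numpy as np

def bad_policy_density(P, R, gamma, threshold, start=0):
    """Fraction of deterministic stationary policies whose value at
    state `start` falls below `threshold` (exhaustive enumeration).

    P: transition tensor, shape (n_states, n_actions, n_states)
    R: reward matrix, shape (n_states, n_actions)
    """
    n_states, n_actions = R.shape
    policies = list(itertools.product(range(n_actions), repeat=n_states))
    bad = 0
    for pi in policies:
        # Transition matrix and reward vector induced by this policy.
        P_pi = np.array([P[s, pi[s]] for s in range(n_states)])
        R_pi = np.array([R[s, pi[s]] for s in range(n_states)])
        # Exact policy evaluation: V = (I - gamma * P_pi)^{-1} R_pi.
        V = np.linalg.solve(np.eye(n_states) - gamma * P_pi, R_pi)
        if V[start] < threshold:
            bad += 1
    return bad / len(policies)

# Toy MDP: in s0, action a0 self-loops with reward 1 (value 10 at
# gamma = 0.9); action a1 moves to the absorbing, zero-reward state s1.
P = np.zeros((2, 2, 2))
P[0, 0, 0] = 1.0  # s0, a0 -> s0
P[0, 1, 1] = 1.0  # s0, a1 -> s1
P[1, :, 1] = 1.0  # s1 is absorbing under both actions
R = np.array([[1.0, 0.0],
              [0.0, 0.0]])

print(bad_policy_density(P, R, gamma=0.9, threshold=5.0))  # -> 0.5
```

Here two of the four deterministic policies choose a1 in s0 and achieve value 0 < 5, so the bad-policy density is 0.5. Exhaustive enumeration is exponential in the number of states, consistent with the paper's point that computing the measure exactly is hard in general.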

Author Information

David Abel (DeepMind)
Cameron Allen (Brown University)
Dilip Arumugam (Stanford University)
D Ellis Hershkowitz (Carnegie Mellon University)
Michael L. Littman (Brown University)
Lawson Wong (Northeastern University)

More from the Same Authors