Active Curriculum Refinement for Reinforcement Learning
Zhenya Liu ⋅ Yuxin Chen
Abstract
In many RL domains, environments are linked by prerequisite relations—e.g., difficulty-increasing edits or parameter increments—which induce a curriculum graph that is a directed acyclic graph (DAG). In practice, this structure is often exploited only implicitly, yet making it explicit can yield clear gains in training. We introduce PATH, a curriculum learning framework that performs active learning on the curriculum graph. PATH first expands coverage by sampling diverse curriculum paths, then reallocates training toward regions that remain unmastered. Experiments show that PATH leverages the graph structure to achieve strong robustness and generalization across diverse environments.
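The two-phase scheme in the abstract can be sketched as follows. This is a minimal illustration, not the authors' implementation: the DAG representation, the `mastery` scores in [0, 1], and all function names are assumptions introduced here.

```python
import random
from collections import defaultdict

def sample_path(dag, roots, rng):
    """Random walk from a root to a sink of the curriculum DAG.

    `dag` maps each environment to its harder successors; a node with
    no successors ends the path. (Hypothetical representation.)
    """
    node = rng.choice(roots)
    path = [node]
    while dag[node]:
        node = rng.choice(dag[node])
        path.append(node)
    return path

def path_curriculum(dag, roots, mastery, n_paths=100, seed=0):
    """Phase 1: cover the graph by sampling diverse curriculum paths.
    Phase 2: reallocate the training budget toward unmastered nodes,
    returning a normalized weight per visited environment."""
    rng = random.Random(seed)
    visits = defaultdict(int)
    for _ in range(n_paths):
        for node in sample_path(dag, roots, rng):
            visits[node] += 1
    # Weight each visited node by how far it is from mastery.
    weights = {n: 1.0 - mastery.get(n, 0.0) for n in visits}
    total = sum(weights.values()) or 1.0
    return {n: w / total for n, w in weights.items()}
```

For example, on a diamond-shaped DAG where the hardest environment is still unmastered, the returned weights concentrate the budget there while spending nothing on fully mastered roots.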