Learning to Search and Searching to Learn for Generalization in Planning
Michael Aichmüller ⋅ Yannik Hesse ⋅ Hector Geffner
Abstract
Combinatorial generalization remains a central challenge in deep reinforcement learning (DRL). Classical planning provides a simple yet challenging setting for studying this problem through explicit relational descriptions, without requiring learning from perception. In sparse-reward domains, standard RL exploration via real-time search is ineffective, and learning-based planning methods often rely on expert demonstrations, hindsight relabeling, or random walks from the goal state. In contrast, planners solve problems from scratch with best-first search methods such as $\mathrm{A}^\star$. We propose a self-improving learning framework that combines $\mathrm{A}^\star$ search with a value heuristic represented by a Relational Graph Neural Network: the heuristic guides the search, and the resulting search data updates the heuristic via $Q$-learning. This loop yields heuristics that can act as general policies, solving new instances even without search where DRL otherwise fails, as we show on puzzle domains such as Sokoban, PushWorld, and The Witness, as well as on the International Planning Competition 2023 benchmarks. Notably, we demonstrate strong zero-shot generalization: heuristics trained on Blocksworld instances with fewer than 30 blocks solve instances with 488 blocks.
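To make the search-and-learn loop concrete, the sketch below shows its skeleton in Python. It is a minimal illustration under simplifying assumptions, not the paper's implementation: the Relational GNN heuristic is replaced by a dictionary-backed value table, the $Q$-learning update is reduced to a one-step Bellman backup over deterministic unit-cost transitions, and all names (`astar`, `q_update`, the subtract-1-or-3 toy domain) are hypothetical.

```python
import heapq
import itertools
from typing import Callable, Dict, Hashable, Iterable, List, Optional, Tuple

State = Hashable
Succ = Callable[[State], Iterable[Tuple[State, float]]]


def astar(start: State, is_goal: Callable[[State], bool], successors: Succ,
          h: Callable[[State], float], max_expansions: int = 10_000
          ) -> Tuple[Optional[List[State]], Dict[State, float]]:
    """Plain A*. Returns (plan, g-values of all generated states);
    plan is None if no goal was reached within the expansion budget."""
    tie = itertools.count()  # tie-breaker so states themselves are never compared
    open_heap = [(h(start), next(tie), 0.0, start)]
    parent: Dict[State, Optional[State]] = {start: None}
    g: Dict[State, float] = {start: 0.0}
    expansions = 0
    while open_heap and expansions < max_expansions:
        _, _, g_s, s = heapq.heappop(open_heap)
        if g_s > g[s]:
            continue  # stale queue entry
        expansions += 1
        if is_goal(s):
            plan: List[State] = []
            node: Optional[State] = s
            while node is not None:
                plan.append(node)
                node = parent[node]
            return plan[::-1], g
        for t, cost in successors(s):
            g_t = g_s + cost
            if g_t < g.get(t, float("inf")):
                g[t], parent[t] = g_t, s
                heapq.heappush(open_heap, (g_t + h(t), next(tie), g_t, t))
    return None, g


def q_update(h_table: Dict[State, float], states: Iterable[State],
             successors: Succ, is_goal: Callable[[State], bool],
             alpha: float = 0.5) -> None:
    """Q-learning-style backup on the states the search touched:
    move h(s) toward min over successors of [cost + h(s')]."""
    for s in states:
        if is_goal(s):
            h_table[s] = 0.0
            continue
        targets = [c + h_table.get(t, 0.0) for t, c in successors(s)]
        if targets:  # dead ends have no successors; leave them unchanged
            h_table[s] = (1 - alpha) * h_table.get(s, 0.0) + alpha * min(targets)


if __name__ == "__main__":
    # Toy stand-in for a planning domain: reach 0 from n by subtracting
    # 1 or 3 (unit cost); overshooting below 0 is a dead end.
    successors = lambda s: [(s - 1, 1.0), (s - 3, 1.0)] if s > 0 else []
    is_goal = lambda s: s == 0

    h_table: Dict[State, float] = {}
    h = lambda s: h_table.get(s, 0.0)  # optimistic default for unseen states

    # Self-improvement loop: search with the current heuristic, then learn
    # from the states that search generated, over a set of small instances.
    for instance in (4, 7, 10, 13):
        for _ in range(3):
            _, g_values = astar(instance, is_goal, successors, h)
            q_update(h_table, g_values, successors, is_goal)

    # The learned values now guide search on a larger, unseen instance.
    plan, _ = astar(25, is_goal, successors, h)
    print("plan length on unseen instance:",
          None if plan is None else len(plan) - 1)
```

The same skeleton carries over when the heuristic is a trained network: `q_update` then becomes a gradient step toward the bootstrapped targets rather than a table write, while the search side is unchanged.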