Topological Active Inference for Task Disambiguation
Yangbo Wei ⋅ Zhen Huang ⋅ Shaoqiang Lu ⋅ Junhong Qian ⋅ Chen Wu ⋅ Lei He
Abstract
In open-ended domains, natural language instructions are often *underspecified*, mapping to multiple valid yet functionally distinct latent intents. While Large Language Models (LLMs) excel at generation, their ability to resolve such *task ambiguity* through interaction is currently hampered by *semantic blindness*—a tendency to squander interaction budgets on distinguishing trivial syntactic variants rather than fundamental intent differences. To address this, we propose *Topological Active Inference (TAI)*, a geometric framework that recasts disambiguation as a process of *intent-manifold contraction*. TAI first leverages *Persistent Homology* to recover the topological skeleton of the solution space, theoretically guaranteeing the separation of semantic signal from syntactic noise. Subsequently, it synthesizes clarifying questions as *separating hyperplanes* designed to efficiently bisect the probability mass of the intent manifold. We introduce *Topological Expected Information Gain (TEIG)* for question selection and prove that maximizing TEIG reduces query complexity from linear $\mathcal{O}(N)$ to logarithmic $\mathcal{O}(\log K)$, where $K$ is the number of latent intents. Extensive experiments demonstrate that TAI recovers user intent with significantly fewer turns, achieving state-of-the-art disambiguation efficiency.
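The abstract does not define TEIG beyond its role as a selection criterion, but the mechanism behind the $\mathcal{O}(\log K)$ claim — choosing the clarifying question that most nearly bisects the posterior probability mass over $K$ latent intents — follows the classical greedy information-gain argument. The sketch below illustrates that intuition only, not the paper's actual TEIG formula: intents are a discrete set, candidate questions are hypothetical yes/no partitions of that set, and plain Shannon entropy stands in for the topological gain term.

```python
import math

def entropy(p):
    """Shannon entropy (bits) of a discrete distribution."""
    return -sum(x * math.log2(x) for x in p if x > 0)

def expected_info_gain(posterior, question):
    """Expected entropy reduction from asking a yes/no question.

    `question` is the set of intent indices for which the answer is "yes".
    Gain is maximized when the question splits the posterior mass 50/50.
    """
    p_yes = sum(posterior[i] for i in question)
    p_no = 1.0 - p_yes
    if p_yes <= 0 or p_no <= 0:
        return 0.0  # question is uninformative under the current posterior
    n = len(posterior)
    yes_post = [posterior[i] / p_yes if i in question else 0.0 for i in range(n)]
    no_post = [posterior[i] / p_no if i not in question else 0.0 for i in range(n)]
    return entropy(posterior) - (p_yes * entropy(yes_post) + p_no * entropy(no_post))

def disambiguate(posterior, questions, true_intent, threshold=0.99):
    """Greedily ask max-gain questions until one intent dominates.

    Returns (number of turns used, index of the recovered intent).
    """
    turns = 0
    n = len(posterior)
    while max(posterior) < threshold:
        q = max(questions, key=lambda s: expected_info_gain(posterior, s))
        answer_yes = true_intent in q  # simulated oracle user
        mass = sum(posterior[i] for i in range(n) if (i in q) == answer_yes)
        # Bayesian update: zero out intents inconsistent with the answer.
        posterior = [posterior[i] / mass if (i in q) == answer_yes else 0.0
                     for i in range(n)]
        turns += 1
    return turns, posterior.index(max(posterior))

# With K = 8 equiprobable intents and bit-mask questions, each query halves
# the surviving mass, so log2(8) = 3 turns suffice.
K = 8
posterior = [1.0 / K] * K
questions = [{i for i in range(K) if (i >> b) & 1} for b in range(3)]
turns, guess = disambiguate(posterior, questions, true_intent=5)
print(turns, guess)  # → 3 5
```

Because each maximally bisecting question removes half the remaining probability mass, the loop terminates after $\lceil \log_2 K \rceil$ turns — the logarithmic query complexity the abstract contrasts with a linear $\mathcal{O}(N)$ enumeration. The names `disambiguate` and `expected_info_gain` are illustrative placeholders, not identifiers from the paper.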