Hallucination Detection with a Structural Reasoning Model
Abstract
Hallucinations pose a key challenge for large language models. Chain-of-Thought prompting exposes intermediate reasoning, but existing detection methods treat reasoning traces as linear sequences, making it hard to capture cross-step dependencies or to localize unsupported intermediate claims. We propose a \emph{structural reasoning model} that describes the interactions among local reasoning steps. To detect hallucinations, we extract a directed acyclic reasoning graph over conditions and intermediate claims, verify each claim against its parent nodes, and aggregate the step-level signals with a simple mass-flow rule. Under a probabilistic model, we give an information-theoretic interpretation of this aggregation as a measure of information loss along the reasoning graph. Experiments on GSM8K and MATH across multiple model families show that the proposed method outperforms recent sampling-based and judge-based baselines. These findings offer a new perspective on evaluating chain-of-thought outputs and confirm the advantages of structural reasoning for hallucination detection.
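The abstract's pipeline (extract a reasoning DAG, verify each claim against its parents, aggregate with a mass-flow rule) can be sketched as follows. This is a minimal illustrative sketch, not the paper's implementation: the node names, the step-support scores (which would come from a verifier model), and the specific flow rule (minimum parent mass attenuated by the step score) are all assumptions.

```python
# Hypothetical sketch of the abstract's pipeline: a reasoning DAG over
# conditions and intermediate claims, per-step support scores, and a
# simple mass-flow aggregation. All names and the exact rule are
# illustrative assumptions, not the paper's actual method.

from typing import Dict, List

def mass_flow_score(parents: Dict[str, List[str]],
                    step_support: Dict[str, float]) -> float:
    """Propagate 'support mass' from given conditions to the final claim.

    parents maps each node to its parent nodes (an empty list marks a
    given condition). step_support[n] is a verifier's probability that
    claim n follows from its parents. Returns the mass reaching the
    sink node; 1 - mass can serve as a hallucination score.
    """
    mass: Dict[str, float] = {}

    def node_mass(n: str) -> float:
        if n in mass:
            return mass[n]
        if not parents[n]:                          # given condition
            mass[n] = step_support.get(n, 1.0)
        else:                                       # inherit weakest parent,
            inherited = min(node_mass(p) for p in parents[n])
            mass[n] = inherited * step_support[n]   # attenuated by step score
        return mass[n]

    # the sink is the node no other node depends on (the final answer)
    all_parents = {p for ps in parents.values() for p in ps}
    sink = next(n for n in parents if n not in all_parents)
    return node_mass(sink)

# Toy trace: two conditions feed one intermediate claim, which feeds the answer.
dag = {"c1": [], "c2": [], "s1": ["c1", "c2"], "ans": ["s1"]}
support = {"c1": 1.0, "c2": 1.0, "s1": 0.9, "ans": 0.8}
print(round(mass_flow_score(dag, support), 2))  # 0.9 * 0.8 = 0.72
```

A weak intermediate step lowers the mass reaching the answer, which is how an unsupported claim is both detected and localized to a specific node.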