ANCHOR: Abductive Network Construction with Hierarchical Orchestration for Reliable Probability Inference in Large Language Models
Abstract
A central challenge in large-scale decision-making under incomplete information is estimating reliable probabilities. Recent approaches leverage Large Language Models (LLMs) to generate explanatory factors and elicit coarse-grained probability estimates. Typically, an LLM performs forward abduction to propose factors, each paired with two mutually exclusive attributes, and a Naïve Bayes model is trained over factor combinations to refine the final probabilities. However, the induced factor space is often sparse, leading to frequent "unknown" outcomes when the system cannot map a query context to any supported factor configuration. Simply expanding the factor set to increase coverage is ineffective: it amplifies statistical noise and introduces spurious correlations that violate the conditional-independence assumption, ultimately degrading stability and reliability. To address these limitations, we propose Anchor, an inference framework that orchestrates aggregated Bayesian inference over a hierarchically structured factor space. Anchor first constructs a dense, well-organized factor space via iterative generation and hierarchical clustering. It then performs context-aware mapping through hierarchical retrieval and refinement, substantially reducing "unknown" predictions. Finally, Anchor augments Naïve Bayes with a Causal Bayesian Network that captures latent dependencies among factors, relaxing the strict conditional-independence assumption. Experiments show that Anchor markedly reduces "unknown" predictions and yields more reliable probability estimates than direct LLM baselines, achieving state-of-the-art performance while significantly reducing time and token overhead.
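To make the baseline setup concrete, the following is a minimal illustrative sketch (not the paper's implementation; all names and data are invented) of training a Naïve Bayes model over LLM-proposed factors, where each factor takes one of two mutually exclusive attributes, and using it to refine a probability estimate:

```python
from collections import defaultdict

def train_naive_bayes(records):
    """records: list of (attrs, outcome) pairs, where attrs maps each
    factor name to one of its two mutually exclusive attributes and
    outcome is 0 or 1."""
    prior = defaultdict(int)                       # outcome counts
    cond = defaultdict(lambda: defaultdict(int))   # per-outcome (factor, attr) counts
    for attrs, y in records:
        prior[y] += 1
        for factor, attr in attrs.items():
            cond[y][(factor, attr)] += 1
    return prior, cond

def predict(prior, cond, attrs):
    """Posterior over outcomes for a new factor configuration,
    assuming conditional independence of factors given the outcome."""
    total = sum(prior.values())
    scores = {}
    for y, n in prior.items():
        p = n / total
        for fa in attrs.items():
            # Laplace smoothing over the two possible attributes per factor
            p *= (cond[y][fa] + 1) / (n + 2)
        scores[y] = p
    z = sum(scores.values())
    return {y: s / z for y, s in scores.items()}

# Hypothetical usage: two factors, each with two attributes.
records = [
    ({"demand": "high", "supply": "tight"}, 1),
    ({"demand": "high", "supply": "ample"}, 1),
    ({"demand": "low", "supply": "ample"}, 0),
    ({"demand": "low", "supply": "tight"}, 0),
]
prior, cond = train_naive_bayes(records)
posterior = predict(prior, cond, {"demand": "high", "supply": "tight"})
```

If a query context cannot be mapped to any factor configuration seen during training, a system like this must fall back to an "unknown" outcome, which is the sparsity problem the abstract describes.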