OpenIKLR: Bridging the Reasoning Gap in Open-World Scenarios via Iterative Premise Completion
Abstract
Large Language Models (LLMs) demonstrate remarkable performance across various natural language processing tasks but struggle with complex logical reasoning, particularly in real-world settings. Existing research is largely confined to the closed-world assumption, which posits that all premises required for reasoning are explicitly provided. However, real-world tasks frequently exhibit open-world characteristics, where the provided information is insufficient to infer a conclusion due to missing premises or implicit commonsense knowledge. To address this limitation, we propose OpenIKLR, an Open-world Incomplete-Knowledge-aware Logical Reasoning framework that integrates symbolic logic solvers with LLMs. OpenIKLR first translates natural language into symbolic representations so that a logical solver can precisely pinpoint reasoning gaps. It then iteratively generates a minimal set of necessary missing premises using LLMs. To ensure these additional premises are both logically sound and factually accurate, we introduce a dual-verification process: logic verification via the solver and fact verification via the LLM. Extensive experiments demonstrate that OpenIKLR consistently outperforms existing logical reasoning and RAG baselines across multiple backbones and real-world datasets, highlighting its efficacy in handling incomplete information. The code is available at https://anonymous.4open.science/r/ICML26_22398-B5BF/.
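The pipeline the abstract describes (detect a reasoning gap with a solver, have an LLM propose a missing premise, verify it twice, repeat) can be sketched as follows. This is a minimal illustration only, assuming toy stand-ins: `solve` is a trivial atom-coverage check rather than a real logic solver, and the `propose` and `verify_fact` callables are hypothetical placeholders for the LLM interfaces, which the abstract does not specify.

```python
def solve(premises, goal):
    # Toy "symbolic solver": the goal is entailed iff every atom it needs
    # appears in some premise (stand-in for a real logic solver).
    return goal.issubset(set().union(*premises)) if premises else False

def missing_atoms(premises, goal):
    # Pinpoint the reasoning gap: goal atoms that no premise provides.
    have = set().union(*premises) if premises else set()
    return goal - have

def complete_premises(premises, goal, propose, verify_fact, max_iters=10):
    """Iteratively add dual-verified premises until the goal is provable.

    propose(gap) and verify_fact(premise) are hypothetical LLM hooks.
    """
    premises = list(premises)
    for _ in range(max_iters):
        if solve(premises, goal):               # logic verification via solver
            return premises, True
        gap = missing_atoms(premises, goal)     # locate the gap symbolically
        candidate = propose(gap)                # LLM proposes a missing premise
        if candidate and verify_fact(candidate):  # fact verification step
            premises.append(candidate)
    return premises, solve(premises, goal)
```

For example, with one premise `{"bird(tweety)"}` and the goal `{"bird(tweety)", "can_fly(tweety)"}`, the loop detects the missing atom, accepts a verified candidate supplying it, and terminates with a provable goal.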