From Prompts to Tokens: Internalizing Causal Supervision in Vision-Language Models for Multi-Image Causal Reasoning
Haoping Yu ⋅ Yuanxi Li ⋅ Jing Ma
Abstract
Visual causal reasoning is essential for understanding and intervening in the physical world: it requires identifying causal variables from visual inputs and reasoning over the effects of interventions. Despite recent progress, large vision-language models (VLMs) remain brittle on such tasks, especially for interventional and counterfactual queries over multi-image inputs. Most existing approaches inject causal knowledge via textual prompts, leaving the causal mechanisms external to model execution and limiting reliable control at inference time. To address this problem, we propose BridgeVLM, which internalizes visual causal reasoning by inducing a causal graph from multi-image inputs and converting it into structured Causal Tokens that are executed by RAMP layers injected into the LLM decoder for causal message passing. We further introduce M3S, a unified training interface that provides fine-grained causal supervision at multiple granularities (local and global). BridgeVLM achieves 54.4\% accuracy on intervention tasks on CausalVLBench (vs. 33.2\% with prompt-level supervision), improves accuracy on Causal3D from 43.6\% to 49.0\%, and substantially improves causal structure learning on CausalVLBench ($F_1$: 33.4\% $\rightarrow$ 75.1\%).
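The abstract does not spell out how "causal message passing over Causal Tokens" operates, so the sketch below shows one plausible reading under stated assumptions, not the authors' RAMP layer: a single attention layer whose attention pattern is masked by the induced causal graph's adjacency matrix, so each causal-variable token aggregates information only from its parents. All names here (`CausalMessagePassing`, `adj`, the toy three-variable graph) are hypothetical illustrations.

```python
# Hedged sketch only: NOT the paper's RAMP layer. Illustrates causal message
# passing as graph-masked attention over per-variable "Causal Token" embeddings.
import torch
import torch.nn as nn
import torch.nn.functional as F

class CausalMessagePassing(nn.Module):
    """One round of message passing in which each causal-variable token
    attends only to its parents in the induced causal graph."""

    def __init__(self, dim: int):
        super().__init__()
        self.q = nn.Linear(dim, dim)
        self.k = nn.Linear(dim, dim)
        self.v = nn.Linear(dim, dim)
        self.out = nn.Linear(dim, dim)

    def forward(self, tokens: torch.Tensor, adj: torch.Tensor) -> torch.Tensor:
        # tokens: (batch, num_vars, dim) -- one embedding per causal variable
        # adj:    (num_vars, num_vars)   -- adj[i, j] = 1 iff j is a parent of i
        q, k, v = self.q(tokens), self.k(tokens), self.v(tokens)
        scores = q @ k.transpose(-2, -1) / tokens.size(-1) ** 0.5
        # Block attention along non-edges: messages flow only over causal edges.
        scores = scores.masked_fill(adj == 0, float("-inf"))
        # Parentless (root) variables would otherwise have all -inf rows and
        # produce NaNs under softmax; let them attend to themselves instead.
        self_mask = torch.eye(adj.size(0), dtype=torch.bool)
        roots = adj.sum(-1, keepdim=True) == 0
        scores = scores.masked_fill(self_mask & roots, 0.0)
        attn = F.softmax(scores, dim=-1)
        return tokens + self.out(attn @ v)  # residual update per token

# Toy usage: three variables with chain structure 0 -> 1 -> 2.
adj = torch.tensor([[0., 0., 0.],
                    [1., 0., 0.],
                    [0., 1., 0.]])
layer = CausalMessagePassing(dim=16)
print(layer(torch.randn(2, 3, 16), adj).shape)  # torch.Size([2, 3, 16])
```

Masking attention with the graph is one standard way to restrict information flow to causal parents; stacking such layers would propagate interventions along longer directed paths, which is consistent with, but not confirmed by, the abstract's description.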