FUSE: Quantifying Uncertainty in Multimodal LLMs via Bayesian Fusion of Epistemic and Aleatoric Uncertainty
Abstract
Multimodal large language models (MLLMs) are playing an increasingly important role across multiple domains. In many applications, such as robotics, it is crucial to quantify the uncertainty in the output of these models. We develop Fused Uncertainty with Semantic Evidence (FUSE), a probabilistic framework for capturing two complementary sources of uncertainty in multimodal language modeling: (i) aleatoric embedding-level uncertainty derived from vision-language ambiguity in the input data, and (ii) epistemic model-level uncertainty estimated from the semantic diversity of MLLM responses. Our approach formulates a Bayesian fusion mechanism that analytically combines these uncertainty sources into a single scalar measure. This measure serves as a novel uncertainty representation for downstream applications of MLLMs and provides a principled foundation for uncertainty calibration in multimodal systems, improving reliability and downstream performance in MLLM-based reasoning and vision-language tasks. We demonstrate that our method outperforms baselines in uncertainty estimation and achieves state-of-the-art uncertainty calibration.
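The abstract does not specify the exact fusion rule, but one standard analytic way to combine two independent uncertainty estimates is precision-weighted Gaussian fusion, where the fused variance is the inverse of the summed precisions. The sketch below is purely illustrative (the function name and interface are assumptions, not the paper's implementation):

```python
def fuse_uncertainty(aleatoric_var: float, epistemic_var: float) -> float:
    """Illustrative precision-weighted fusion of two variance estimates.

    NOTE: this is a generic Bayesian fusion sketch, not FUSE's actual
    mechanism. It assumes both sources are modeled as independent
    Gaussian uncertainties with the given variances.
    """
    if aleatoric_var <= 0 or epistemic_var <= 0:
        raise ValueError("variances must be positive")
    # Precisions (inverse variances) add under Gaussian fusion;
    # the fused variance is the inverse of the total precision.
    total_precision = 1.0 / aleatoric_var + 1.0 / epistemic_var
    return 1.0 / total_precision


# Example: two moderately uncertain sources yield a tighter fused estimate.
fused = fuse_uncertainty(0.5, 0.5)  # 0.25, smaller than either input
```

A property of this choice is that the fused uncertainty is never larger than the smaller of the two inputs, reflecting that two agreeing, independent sources of evidence reduce overall uncertainty.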