Sufficiency is Relative: Evaluating LLM Explanations under Model-Induced Input Distributions
Abstract
Large language models (LLMs) are increasingly deployed in high-stakes domains, where free-text explanations such as chain-of-thought and post-hoc rationales are used to justify model outputs. Yet it remains unclear whether these explanations are sufficient, i.e., whether they contain enough information to account for the model’s output-generating process. We generalize classical sufficiency from feature attributions to arbitrary explanations and prove that explanation sufficiency is inherently relative to an input distribution, which must therefore be explicitly defined when evaluating LLM explanations. We propose using the LLM itself to generate alternative inputs conditioned on an explanation, capturing the model’s beliefs about possible inputs. We formalize self-consistent sufficiency as a goal for free-text explanations and introduce an information-theoretic metric, SCSuff, that evaluates such explanations without relying on predefined biases or shortcuts. Our experiments show that SCSuff aligns with targeted perturbation tests where applicable and demonstrate that explanation sufficiency can vary with the input distribution. We further find that SCSuff is uncorrelated with model size, accuracy, or uncertainty, suggesting that improving self-consistent sufficiency requires approaches beyond scaling or standard performance optimization.
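To make the move from classical sufficiency to a distribution-relative notion concrete, the following is a minimal sketch, assuming the standard ERASER-style formalization of sufficiency for feature attributions; the distribution-relative form and the symbols $f_y$, $x_e$, and $\mathcal{D}(\cdot \mid e)$ are illustrative assumptions, not the paper's exact definition of SCSuff.

% Classical sufficiency of a rationale e for input x (ERASER-style):
% how much of the model's confidence in its prediction y survives when
% only the features selected by e are kept.
\[
  \mathrm{suff}(x, e) \;=\; f_y(x) - f_y(x_e)
\]
% Hypothetical distribution-relative generalization: the same comparison
% taken in expectation over alternative inputs x' drawn from an input
% distribution conditioned on the explanation e (e.g., inputs the model
% itself generates from e).
\[
  \mathrm{suff}_{\mathcal{D}}(e) \;=\;
  \mathbb{E}_{x' \sim \mathcal{D}(\cdot \mid e)}\bigl[f_y(x) - f_y(x')\bigr]
\]

Under this reading, the choice of $\mathcal{D}$ determines the measured sufficiency, which is the sense in which sufficiency is relative to an input distribution.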