Position: Reframing Hallucination: Latent Space Geodesics as a Pathway for Generative Discovery
Abstract
Current evaluation paradigms for generative models rely heavily on retrieval-based metrics such as exact-match accuracy, creating a bottleneck in domains that require scientific discovery and creative reasoning. These metrics penalize any deviation from the training distribution, treating all non-factual outputs as errors. This position paper argues that rigidly minimizing such deviations induces a form of epistemic mode collapse that suppresses the stochastic exploration required for innovation. We propose the Higher-Dimensional Cognitive Hypothesis (HDCH), positing that valuable hallucinations represent geodesic traversals of a high-dimensional latent space that appear as errors only when projected onto the lower-dimensional manifold of established knowledge. We introduce a formal distinction, grounded in information geometry, between Type I hallucinations (factually inconsistent noise) and Type II hallucinations (factually novel but structurally coherent exploratory hypotheses). Through experiments, we demonstrate that discovery is maximized under calibrated instability, peaking at a critical thermodynamic phase transition. Furthermore, we advocate for an evaluation framework that optimizes an Exploratory Signal-to-Noise Ratio (ESNR), balancing the novelty of outputs against their structural plausibility. We conclude that evolving evaluation from validating static retrieval to incentivizing calibrated latent exploration is essential to unlock the full, discovery-oriented potential of generative AI.