Position: Quantum Program Generation Must Prioritize Validity Over Probabilistic Scaling
Abstract
The scaling hypothesis assumes that increasing model parameters yields emergent reasoning capabilities. This position paper argues that applying this probabilistic paradigm to generic quantum circuit synthesis is a category error. Unlike natural language, quantum circuits must satisfy strict mathematical constraints, most fundamentally unitarity. Training on unverified circuit code therefore amounts to data poisoning: models learn the syntax of gate sequences but fail to capture the physical semantics of operations on Hilbert space. Because the fraction of valid designs in the circuit space decays exponentially with the number of qubits, post-hoc filtering of sampled outputs is mathematically intractable. We propose a pivot from human-centric copilots to verifier-centric agents that integrate hierarchical constraints, topological masks, and symbolic proxies directly into the generation loop. Our analysis suggests that scale alone cannot close the validity gap; verification-aware architectures offer a viable path toward modular quantum program generation. The community should stop trying to imitate the physicist and instead enforce the physics.
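The unitarity constraint central to the argument above is cheap to verify but hard for a purely probabilistic generator to respect. As a minimal sketch (the gate names, composition, and tolerance are illustrative assumptions, not the paper's implementation), a verifier-centric pipeline can reject any candidate circuit whose combined matrix fails U†U = I:

```python
import numpy as np

# Two standard single-qubit gates, written as explicit matrices.
H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)   # Hadamard
T = np.diag([1.0, np.exp(1j * np.pi / 4)])     # T gate (pi/8 rotation)

def is_unitary(U: np.ndarray, tol: float = 1e-10) -> bool:
    """Check the defining physical constraint U^dagger U = I."""
    return np.allclose(U.conj().T @ U, np.eye(U.shape[0]), atol=tol)

# A valid circuit: composing unitaries always yields a unitary.
circuit = T @ H
assert is_unitary(circuit)

# A corrupted candidate: one bad matrix entry breaks the constraint,
# and the verifier rejects it before it reaches training data or output.
broken = circuit.copy()
broken[0, 0] *= 1.5
assert not is_unitary(broken)
```

The point of the sketch is the asymmetry: checking validity costs one matrix multiplication, while sampling until a valid circuit appears becomes exponentially expensive as qubit count grows, which is why the check belongs inside generation rather than after it.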