Position: Early-Stage Quality Assurance in Annotation Pipelines Is More Cost-Effective Than Late-Stage Validation
Abstract
This position paper argues that the machine learning community should prioritize early-stage quality assurance in annotation pipelines over the prevailing practice of late-stage validation. Data quality bottlenecks increasingly limit foundation model improvement, yet quality assurance research focuses almost exclusively on validation methods rather than validation timing. When validation occurs—not merely what validation methods are employed—fundamentally determines both error rates and annotation costs. This temporal neglect is puzzling given the well-established "shift-left" principle from software engineering, where empirical studies demonstrate 4–100× cost multipliers for defects detected in later development stages (Boehm, 1981; Shull et al., 2002). Annotation pipelines, we argue, exhibit analogous dynamics: errors caught before annotation begins cost a fraction of those discovered after review cycles complete. We propose a taxonomy of three QA trigger points—pre-annotation (T₀), post-annotation (T₁), and post-review (T₂)—that decomposes annotation workflows into discrete validation opportunities. A survey of 47 recent papers reveals that only 4% report when validation occurs, a striking gap given timing's demonstrated impact in adjacent fields. Without explicit attention to QA timing, the community risks optimizing validation methods while ignoring the structural variable that may matter most. We call on researchers to report QA timing configurations, on platform developers to expose timing as a first-class parameter, and on the community to conduct controlled experiments testing whether the shift-left principle transfers to annotation contexts.
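The three trigger points can be made concrete as hooks in a pipeline. The following is a minimal illustrative sketch, not an implementation from the paper: the class and function names (`TriggerPoint`, `AnnotationPipeline`, `register`, `run`) are hypothetical, and the example check is invented to show where a T₀ validation would sit.

```python
from dataclasses import dataclass, field
from enum import Enum
from typing import Callable, Dict, List


class TriggerPoint(Enum):
    """The three QA trigger points from the proposed taxonomy."""
    T0_PRE_ANNOTATION = "pre-annotation"    # before any labeling begins
    T1_POST_ANNOTATION = "post-annotation"  # after labels exist, before review
    T2_POST_REVIEW = "post-review"          # after review cycles complete


@dataclass
class AnnotationPipeline:
    # Checks registered per trigger point; each check takes a payload
    # (guidelines, labels, etc.) and returns a list of error messages.
    checks: Dict[TriggerPoint, List[Callable]] = field(
        default_factory=lambda: {t: [] for t in TriggerPoint}
    )

    def register(self, trigger: TriggerPoint, check: Callable) -> None:
        """Attach a validation check to a specific trigger point."""
        self.checks[trigger].append(check)

    def run(self, trigger: TriggerPoint, payload) -> List[str]:
        """Run every check registered at this trigger point; collect errors."""
        errors: List[str] = []
        for check in self.checks[trigger]:
            errors.extend(check(payload))
        return errors


# Example: a T0 (pre-annotation) check on the guideline document,
# catching a defect before any annotation cost is incurred.
pipeline = AnnotationPipeline()
pipeline.register(
    TriggerPoint.T0_PRE_ANNOTATION,
    lambda guidelines: (
        [] if "label definitions" in guidelines
        else ["guidelines missing label definitions"]
    ),
)
print(pipeline.run(TriggerPoint.T0_PRE_ANNOTATION, "draft guidelines"))
# → ['guidelines missing label definitions']
```

Making the trigger point an explicit parameter, as sketched here, is what the paper asks platform developers to do: the same check could be registered at T₁ or T₂ instead, and the timing choice, not the check itself, would drive the cost difference.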