Position: Verifiable Data Minimization is a Prerequisite for Responsible, Privacy-Preserving Industrial Vision
Abstract
The adoption of computer vision to drive industrial efficiency and safety creates a persistent tension between operational utility and worker surveillance. Current privacy measures, such as post-hoc blurring, are fundamentally flawed: they depend on the error-prone detection of sensitive attributes and treat privacy as a subtractive process. We posit that industrial computer vision must shift from "hiding secrets" to verifiable data minimization. We advocate for a design paradigm of architecturally constrained inference, formalized through information-theoretic principles, where the sensing pipeline is optimized to capture only the features necessary for a specific task (e.g., pose estimation). This provably constrains the information available for unauthorized inferences (e.g., identification), decoupling privacy from detection accuracy and reducing reliance on sensitive attribute supervision. We outline an implementation path using modular edge processing and trusted execution environments to enable verifiable, hardware-rooted attestations of task-bound processing, and argue that verifiable purpose limitation should be a prerequisite for responsible industrial AI.
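The information-theoretic intuition behind architecturally constrained inference can be illustrated with a toy sketch: if the encoder's output Z carries only the task attribute, the mutual information between Z and any sensitive attribute is bounded regardless of how accurate a downstream attacker is. The discrete toy world below (pose as the task attribute, identity as the sensitive one, names and distribution entirely illustrative, not from the paper) contrasts an unconstrained encoder with a task-bound one:

```python
from collections import Counter
from math import log2

def mutual_information(pairs):
    """Empirical mutual information (in bits) between two aligned sequences,
    given as a list of (z, attribute) pairs."""
    n = len(pairs)
    joint = Counter(pairs)
    pz = Counter(z for z, _ in pairs)
    pa = Counter(a for _, a in pairs)
    return sum(c / n * log2((c / n) / ((pz[z] / n) * (pa[a] / n)))
               for (z, a), c in joint.items())

# Toy world: each frame carries a task attribute (pose, 1 bit)
# and a sensitive attribute (identity, 2 bits), independently and uniformly.
frames = [(pose, ident) for pose in range(2) for ident in range(4)]
ids = [ident for _, ident in frames]
poses = [pose for pose, _ in frames]

# Unconstrained encoder: Z is the raw frame, so it determines identity.
z_raw = frames
# Task-bound encoder: Z keeps only the pose feature.
z_pose = poses

print(mutual_information(list(zip(z_raw, ids))))    # -> 2.0 bits of identity leaked
print(mutual_information(list(zip(z_pose, ids))))   # -> 0.0 bits of identity leaked
print(mutual_information(list(zip(z_pose, poses)))) # -> 1.0 bit of task info retained
```

The point of the sketch is that the leakage bound comes from what the representation *can* express, not from how well a blurring or detection step performs, which is the decoupling the abstract claims.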