Position: Responsible Practices and Model Performance are Not Competing Goals
Abstract
Many failures of deployed machine learning systems stem not from insufficient accuracy, but from neglecting responsibility as a core design requirement. While responsible AI principles are widely studied, they are often treated as post-hoc checks rather than as integral components of system design. This framing has reinforced the perception that responsible practices inherently trade off against model performance. In this position paper, we challenge that assumption and argue that responsibility and performance are not inherently at odds. We adopt a lifecycle-oriented perspective, identifying which responsible AI principles are most critical at each stage, from problem formulation and data curation to training, deployment, and monitoring. Drawing on real-world cases, we show how misaligned choices at specific stages can compound downstream risks and how alternative design choices could have mitigated these failures. We argue that responsible AI should be understood as a system design challenge rather than a constraint, and we offer operational guidance for integrating responsibility into mainstream machine learning workflows in a way that supports, rather than undermines, real-world performance.