Position: Machine Learning Research Should Be Guided by Explicit, Pluralistic Models of Human Purpose
Abstract
Machine learning systems increasingly shape attention, work, education, and social life, yet ML research often treats the question "what is this for?" as external, relying on proxies such as accuracy, engagement, or preference satisfaction. This position paper argues that ML research should be guided by explicit, pluralistic models of human purpose, where purpose is understood as supporting people's capacity to pursue meaningful, self-chosen life projects with agency. The paper proposes three community practices: (i) purpose articulation, a structured "Purpose Statement" that specifies intended beneficiaries, mechanisms, and falsifiable failure modes; (ii) purpose evaluation, which measures impacts on agency and meaning alongside task performance and harm; and (iii) purpose governance, which updates purpose frameworks through transparent, participatory processes to reduce unaccountable value-setting. This framing enables concrete technical research directions, including objective design beyond preference satisfaction, benchmarks for agency and meaning, pluralistic system behavior, and institution-aware alignment. The paper offers stakeholder-differentiated recommendations for researchers, benchmark creators, conference organizers, and funders, and addresses credible objections, including appeals to value neutrality, concerns about feasibility and measurement validity, the claim that harm prevention is sufficient, and risks of ideological capture or paternalism.