Position: Model identity in machine learning is a convention, not a property
Abstract
Treating the outcome of machine learning as a stable, identifiable artifact is implicit in language, tooling, and governance. This position paper examines whether a trained system admits context-appropriate criteria of identity. We show that neither functional behavior nor internal structure suffices: behavioral equivalence is underdetermined by finite data, while modern architectures admit multiple structurally distinct realizations of the same function. Consequently, practices that treat learned systems as stable objects presuppose equivalence relations that are rarely made explicit. We do not propose abandoning such practices. Instead, we articulate the minimal conditions under which identity claims grounded in behavior, structure, or training process can be meaningfully interpreted, with implications for reproducibility, traceability, and governance.
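As a minimal illustration of the structural claim (a sketch, not an example from the paper): permuting the hidden units of a small ReLU network produces weight matrices that differ entry-by-entry yet realize exactly the same input-output function, so structure alone does not fix identity. All names and dimensions below are arbitrary choices for the demonstration.

```python
import numpy as np

rng = np.random.default_rng(0)

# A tiny two-layer ReLU network with 3 inputs, 4 hidden units, 2 outputs.
W1 = rng.normal(size=(4, 3)); b1 = rng.normal(size=4)
W2 = rng.normal(size=(2, 4)); b2 = rng.normal(size=2)

def mlp(x, W1, b1, W2, b2):
    h = np.maximum(W1 @ x + b1, 0.0)  # ReLU hidden layer
    return W2 @ h + b2

# Permute the hidden units: relabel rows of (W1, b1) and columns of W2.
P = np.eye(4)[rng.permutation(4)]
W1p, b1p, W2p = P @ W1, P @ b1, W2 @ P.T

# The two parameterizations are structurally distinct...
assert not np.allclose(W1, W1p)

# ...yet functionally identical on every input, since ReLU acts
# elementwise and therefore commutes with the permutation.
for _ in range(100):
    x = rng.normal(size=3)
    assert np.allclose(mlp(x, W1, b1, W2, b2), mlp(x, W1p, b1p, W2p, b2))
```

Permutation symmetry is only the simplest case; ReLU networks also admit per-unit positive rescalings, and larger architectures admit further reparameterizations, so the space of distinct realizations of one function is generally much richer than this sketch shows.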