Position: Assistive AI requires Personalized Specialists, not Generalists
Abstract
The AI community is rapidly converging on generalist foundation models trained on web-scale data. While this paradigm has yielded impressive gains, we argue that the objective must shift if AI is to reliably help people in their daily activities. The most valuable systems will not be those that attempt to do everything for everyone, but those that do the right things for a specific individual over a long period of time. We take the position that \emph{specialist models}---defined not by narrow task taxonomies, but by a tight coupling to an individual user and their local environment---represent the endgame for high-impact assistive AI. We substantiate this argument through three case studies: (i) AI agents that help humans automate daily web activities; (ii) wearable assistants that must predict actions in-context from continuous egocentric streams; and (iii) home robots that assist humans with daily tasks under safety and compliance guarantees. In these settings, standard scaling assumptions are inverted: the most critical data is generated \emph{after deployment}, as a streaming, on-policy interaction trace. We outline research directions for building specialists that learn from organic observational data, avoid self-reinforcing errors, and improve safely over long horizons.