Recommendation platforms, such as Amazon, Netflix, and Facebook, use various strategies to engage and retain users, from tracking their data to showing addictive content. These measures are meant to improve performance, but they can also erode users' trust. In this work, we study the role of trust in recommendation. We show that, because recommendation platforms rely on users for data, trust is key to every platform's success. Our main contribution is a game-theoretic view of recommender systems and a corresponding formalization of trust: if a user trusts their recommendation platform, then their optimal long-term strategy is to act greedily, and thus to report their preferences truthfully, at all times. This definition reflects the intuition that trust arises when the incentives of the user and the platform are sufficiently aligned. To illustrate its implications, we explore two simple examples of trust. We show that distrust can hurt the platform and that trust can benefit both the user and the platform.