Agents tackling complex problems in open environments often benefit from the ability to construct knowledge. Learning to independently solve sub-tasks and to form models of the world can help agents make progress on challenging problems. In this talk, we draw attention to challenges that arise when evaluating an agent's knowledge, focusing specifically on methods that express an agent's knowledge as predictions. Using the General Value Function (GVF) framework, we highlight the distinction between useful knowledge and strict measures of accuracy. Having identified challenges in assessing an agent's knowledge, we propose a possible evaluation approach that is compatible with large and open worlds.
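
To make the framing concrete, a minimal sketch of a GVF-style prediction is given below. A GVF poses a predictive question ("what will the discounted sum of some signal, the cumulant, be under this behaviour?") and answers it with temporal-difference learning. The environment (a random-walk chain), the cumulant, and all constants here are illustrative assumptions, not part of the talk itself:

```python
import random

GAMMA = 0.9   # continuation / discount factor of the predictive question
ALPHA = 0.1   # TD step size
N_STATES = 5

# Tabular value estimates: one prediction per state.
v = [0.0] * N_STATES

def cumulant(state):
    # Hypothetical signal of interest: 1 when the agent occupies the last state.
    return 1.0 if state == N_STATES - 1 else 0.0

def step(state):
    # Behaviour policy: random walk on a chain, clamped at the ends.
    move = random.choice([-1, 1])
    return min(max(state + move, 0), N_STATES - 1)

random.seed(0)
state = 0
for _ in range(20000):
    next_state = step(state)
    # TD(0) update toward the one-step target c(s') + gamma * v(s').
    target = cumulant(next_state) + GAMMA * v[next_state]
    v[state] += ALPHA * (target - v[state])
    state = next_state

# States nearer the end of the chain predict a larger discounted cumulant sum.
print([round(x, 2) for x in v])
```

The learned values answer the predictive question approximately; whether that approximation counts as useful knowledge, as opposed to merely accurate prediction, is exactly the evaluation question the talk takes up.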