Poster
in
Workshop: Assessing World Models: Methods and Metrics for Evaluating Understanding
What if Othello-Playing Language Models Could See?
Xinyi Chen · Yifei Yuan · Jiaang Li · Serge Belongie · Maarten de Rijke · Anders Søgaard
Keywords: [ Multimodal ] [ Representation Learning ] [ Learning Efficiency ] [ World Model ]
Language models are often said to face a symbol grounding problem. While some argue that world understanding can emerge from text alone, others suggest that grounded learning is more efficient. We explore this question through Othello, where the board state defines a simplified, rule-based world. Building on prior work, we introduce a multi-modal model trained on move histories and board images. Using next-move prediction, we compare it to mono-modal baselines and test robustness to semantically irrelevant perturbations. We find that multi-modal training improves both performance and the robustness of internal representations. These results suggest that grounding language in visual input helps models infer structured world representations.