TimeSpot: Benchmarking Geo-Temporal Understanding in Vision–Language Models in Real-World Settings
Abstract
Geo-temporal understanding, the ability to infer location, time, and contextual properties from visual input alone, is a core aspect of human intelligence and underpins applications such as disaster management, traffic planning, embodied navigation, world modeling, and geography education. Although recent vision–language models (VLMs) have made progress on image geo-localization using salient cues such as landmarks or road signs, their ability to reason about temporal signals and physically grounded spatial cues remains underexplored. To address this gap, we introduce TimeSpot, a benchmark for evaluating real-world geo-temporal reasoning in VLMs. TimeSpot consists of 1,455 ground-level images from 80 countries and requires structured prediction of temporal attributes (season, month, time of day, daylight phase) and geographic attributes (continent, country, climate zone, environment type, latitude–longitude coordinates) directly from visual evidence. The benchmark further includes spatio-temporal reasoning tasks that probe physical plausibility and cue integration under real-world uncertainty. Evaluations of state-of-the-art open- and closed-source VLMs reveal consistently low performance, particularly on temporal inference; supervised fine-tuning yields measurable gains but remains insufficient, underscoring the need for new approaches to robust, physically grounded geo-temporal understanding. By jointly evaluating spatial and temporal inference with diagnostic rigor, TimeSpot provides a principled framework for assessing real-world geo-temporal reasoning. We will release TimeSpot upon acceptance.