Reliable evaluation benchmarks designed for replicability and comprehensiveness have driven progress in machine learning. Due to the lack of a multilingual benchmark, however, vision-and-language research has mostly focused on English-language tasks. To fill this gap, we introduce the Image-Grounded Language Understanding Evaluation (IGLUE) benchmark. IGLUE brings together visual question answering, cross-modal retrieval, grounded reasoning, and grounded entailment tasks across 20 diverse languages, both by aggregating pre-existing datasets and by creating new ones. Our benchmark enables the evaluation of multilingual multimodal models for transfer learning, not only in a zero-shot setting, but also in newly defined few-shot learning setups. Based on the evaluation of the available state-of-the-art models, we find that translate-test transfer is superior to zero-shot transfer and that few-shot learning is hard to harness for many tasks. Moreover, downstream performance is partially explained by the amount of available unlabelled textual data for pretraining, and only weakly by the typological distance of target–source languages. We hope to encourage future research efforts in this area by releasing the benchmark to the community.
Author Information
Emanuele Bugliarello (University of Copenhagen)
Fangyu Liu (University of Cambridge)
Jonas Pfeiffer (TU Darmstadt)
Siva Reddy (Mila)
Desmond Elliott (University of Copenhagen)
Edoardo Maria Ponti (Mila Montreal / University of Cambridge)
Ivan Vulić (University of Cambridge)
Related Events (a corresponding poster, oral, or spotlight)
- 2022 Spotlight: IGLUE: A Benchmark for Transfer Learning across Modalities, Tasks, and Languages
  Wed, Jul 20, 9:00–9:05 PM, Hall F
More from the Same Authors
- 2019 Workshop: The How2 Challenge: New Tasks for Vision & Language
  Florian Metze · Lucia Specia · Desmond Elliott · Loic Barrault · Ramon Sanabria · Shruti Palaskar