

Poster in Workshop: ICML 2024 Workshop on Foundation Models in the Wild

In-Context Learning Improves Compositional Understanding of Vision-Language Models

Matteo Nulli · Anesa Ibrahimi · Avik Pal · Hoshe Lee · Ivona Najdenkoska

Keywords: [ Compositional Understanding ] [ In-Context Learning ] [ Vision language models ]


Abstract:

Vision-Language Models (VLMs) have shown remarkable capabilities across a large number of downstream tasks. Nonetheless, compositional image understanding remains a difficult task, due in part to the object bias present in training data. In this work, we investigate the reasons for this lack of capability by performing an extensive benchmarking of compositional understanding in VLMs. We compare contrastive models with generative ones and analyze their differences in architecture, pre-training data, and training tasks and losses. Furthermore, we leverage In-Context Learning (ICL) as a way to improve the ability of VLMs to perform more complex reasoning and understanding given an image. Our extensive experiments demonstrate that our proposed approach outperforms baseline models across multiple compositional understanding datasets.
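The abstract does not specify how the in-context prompts are assembled. The sketch below illustrates one plausible setup, assuming a generative VLM with an interleaved image-text interface; the `Demo` fields and the `vlm_generate` stub are hypothetical placeholders, not the authors' implementation.

```python
from dataclasses import dataclass
from typing import List, Union

# One in-context demonstration: an image plus a compositional question
# and its correct answer (e.g. which of two captions matches the image).
@dataclass
class Demo:
    image_path: str
    question: str
    answer: str

def build_icl_prompt(demos: List[Demo],
                     query_image: str,
                     query_question: str) -> List[Union[str, dict]]:
    """Interleave demonstrations with the query into an image/text sequence.

    Returns a list of segments: dicts mark images, strings are text. A
    generative VLM with a Flamingo-style interleaved interface could
    consume such a sequence directly.
    """
    segments: List[Union[str, dict]] = []
    for d in demos:
        segments.append({"image": d.image_path})
        segments.append(f"Question: {d.question}\nAnswer: {d.answer}\n")
    # Append the query, leaving the answer for the model to complete.
    segments.append({"image": query_image})
    segments.append(f"Question: {query_question}\nAnswer:")
    return segments

# Hypothetical model call -- replace with the generate() method of the
# generative VLM actually used; this stub only echoes the prompt length.
def vlm_generate(prompt: List[Union[str, dict]]) -> str:
    return f"<answer conditioned on {len(prompt)} prompt segments>"

if __name__ == "__main__":
    demos = [
        Demo("horse_grass.jpg",
             "Which caption matches: 'a horse eating grass' or 'grass eating a horse'?",
             "a horse eating grass"),
        Demo("dog_cat.jpg",
             "Which caption matches: 'a dog chasing a cat' or 'a cat chasing a dog'?",
             "a dog chasing a cat"),
    ]
    prompt = build_icl_prompt(
        demos,
        query_image="man_umbrella.jpg",
        query_question="Which caption matches: 'a man holding an umbrella' or 'an umbrella holding a man'?",
    )
    print(vlm_generate(prompt))
```

In this framing, the demonstrations expose the model to examples where word order changes the meaning, so the query is answered in context rather than zero-shot; how many demonstrations to use and how they are selected would follow the paper's experimental setup.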
